hexsha stringlengths 40 40 | size int64 6 14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6 260 | max_stars_repo_name stringlengths 6 119 | max_stars_repo_head_hexsha stringlengths 40 41 | max_stars_repo_licenses sequence | max_stars_count int64 1 191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 6 260 | max_issues_repo_name stringlengths 6 119 | max_issues_repo_head_hexsha stringlengths 40 41 | max_issues_repo_licenses sequence | max_issues_count int64 1 67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 6 260 | max_forks_repo_name stringlengths 6 119 | max_forks_repo_head_hexsha stringlengths 40 41 | max_forks_repo_licenses sequence | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | avg_line_length float64 2 1.04M | max_line_length int64 2 11.2M | alphanum_fraction float64 0 1 | cells sequence | cell_types sequence | cell_type_groups sequence |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d0c5a72c196d67aed69a9967f6239895d1032a3c | 100,603 | ipynb | Jupyter Notebook | notebooks/contributions/DAPA/DAPA_Tutorial_3_-_Timeseries_-_Sentinel-2.ipynb | gobaRules/notebooks | f66a86841fe47931520d56b856de5bc78249c512 | [
"MIT"
] | null | null | null | notebooks/contributions/DAPA/DAPA_Tutorial_3_-_Timeseries_-_Sentinel-2.ipynb | gobaRules/notebooks | f66a86841fe47931520d56b856de5bc78249c512 | [
"MIT"
] | null | null | null | notebooks/contributions/DAPA/DAPA_Tutorial_3_-_Timeseries_-_Sentinel-2.ipynb | gobaRules/notebooks | f66a86841fe47931520d56b856de5bc78249c512 | [
"MIT"
] | null | null | null | 63.192839 | 31,092 | 0.762293 | [
[
[
"# DAPA Tutorial #3: Timeseries - Sentinel-2",
"_____no_output_____"
],
[
"## Load environment variables\nPlease make sure that the environment variable \"DAPA_URL\" is set in the `custom.env` file. You can check this by executing the following block. \n\nIf DAPA_URL is not set, please create a text file named `custom.env` in your home directory with the following input: \n>DAPA_URL=YOUR-PERSONAL-DAPA-APP-URL",
"_____no_output_____"
]
],
[
[
"from edc import setup_environment_variables\nsetup_environment_variables()",
"_____no_output_____"
]
],
[
[
"## Check notebook compabtibility\n**Please note:** If you conduct this notebook again at a later time, the base image of this Jupyter Hub service can include newer versions of the libraries installed. Thus, the notebook execution can fail. This compatibility check is only necessary when something is broken. ",
"_____no_output_____"
]
],
[
[
"from edc import check_compatibility\ncheck_compatibility(\"user-0.24.5\", dependencies=[])",
"_____no_output_____"
]
],
[
[
"## Load libraries\nPython libraries used in this tutorial will be loaded.",
"_____no_output_____"
]
],
[
[
"import os\nimport xarray as xr\nimport pandas as pd\nimport requests\nimport matplotlib\nfrom ipyleaflet import Map, Rectangle, Marker, DrawControl, basemaps, basemap_to_tiles\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Set DAPA endpoint\nExecute the following code to check if the DAPA_URL is available in the environment variable and to set the `/dapa` endpoint. ",
"_____no_output_____"
]
],
[
[
"service_url = None\ndapa_url = None\n\nif 'DAPA_URL' not in os.environ:\n print('!! DAPA_URL does not exist as environment variable. Please make sure this is the case - see first block of this notebook! !!')\nelse: \n service_url = os.environ['DAPA_URL']\n dapa_url = '{}/{}'.format(service_url, 'oapi')\n print('DAPA path: {}'.format(dapa_url.replace(service_url, '')))",
"DAPA path: /oapi\n"
]
],
[
[
"## Get collections supported by this endpoint\nThis request provides a list of collections. The path of each collection is used as starting path of this service.",
"_____no_output_____"
]
],
[
[
"collections_url = '{}/{}'.format(dapa_url, 'collections')\ncollections = requests.get(collections_url, headers={'Accept': 'application/json'})\n\nprint('DAPA path: {}'.format(collections.url.replace(service_url, '')))\ncollections.json()",
"DAPA path: /oapi/collections\n"
]
],
[
[
"## Get fields of collection Sentinel-2 L2A\nThe fields (or variables in other DAPA endpoints - these are the bands of the raster data) can be retrieved in all requests to the DAPA endpoint. In addition to the fixed set of fields, \"virtual\" fields can be used to conduct math operations (e.g., the calculation of indices). ",
"_____no_output_____"
]
],
[
[
"collection = 'S2L2A'\n\nfields_url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/fields')\nfields = requests.get(fields_url, headers={'Accept': 'application/json'})\n\nprint('DAPA path: {}'.format(fields.url.replace(service_url, '')))\nfields.json()",
"DAPA path: /oapi/collections/S2L2A/dapa/fields\n"
]
],
[
[
"## Retrieve NDVI as 1d time-series extraced for a single point",
"_____no_output_____"
],
[
"### Set DAPA URL and parameters\nThe output of this request is a time-series requested from a point of interest (`timeseries/position` endpoint). As the input collection (S2L2A) is a multi-temporal raster and the requested geometry is a point, no aggregation is conducted.\n\nTo retrieve a time-series of a point, the parameter `point` needs to be provided. The `time` parameter allows to extract data only within a specific time span. The band (`field`) from which the point is being extracted needs to be specified as well.",
"_____no_output_____"
]
],
[
[
"# DAPA URL\nurl = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/timeseries/position')\n\n# Parameters for this request\nparams = {\n 'point': '11.49,48.05',\n 'time': '2018-04-01T00:00:00Z/2018-05-01T00:00:00Z',\n 'fields': 'NDVI=(B08-B04)/(B08%2BB04)' # Please note: + signs need to be URL encoded -> %2B\n}\n\n# show point in the map\nlocation = list(reversed([float(coord) for coord in params['point'].split(',')]))\nm = Map(\n basemap=basemap_to_tiles(basemaps.OpenStreetMap.Mapnik),\n center=location,\n zoom=10\n)\n\nmarker = Marker(location=location, draggable=False)\nm.add_layer(marker)\n\nm",
"_____no_output_____"
]
],
[
[
"### Build request URL and conduct request",
"_____no_output_____"
]
],
[
[
"params_str = \"&\".join(\"%s=%s\" % (k, v) for k,v in params.items())\nr = requests.get(url, params=params_str)\n\nprint('DAPA path: {}'.format(r.url.replace(service_url, '')))\nprint('Status code: {}'.format(r.status_code))",
"DAPA path: /oapi/collections/S2L2A/dapa/timeseries/position?point=11.49,48.05&time=2018-04-01T00:00:00Z/2018-05-01T00:00:00Z&fields=NDVI=(B08-B04)/(B08%2BB04)\nStatus code: 200\n"
]
],
[
[
"### Write timeseries dataset to CSV file\nThe response of this request returns data as CSV including headers splitted by comma. Additional output formats (e.g., CSV with headers included) will be integrated within the testbed activtiy. \n\nYou can either write the response to file or use it as string (`r.content` variable). ",
"_____no_output_____"
]
],
[
[
"# write time-series data to CSV file\nwith open('timeseries_s2.csv', 'wb') as filew:\n filew.write(r.content)",
"_____no_output_____"
]
],
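As an alternative to writing the CSV to disk, the response can also be parsed directly in memory, as mentioned above. This is a minimal sketch assuming the `r` response object from the request above is still available; the same `StringIO` pattern is used later in this tutorial for the area-aggregated time-series.

```python
from io import StringIO
import pandas as pd

# Parse the CSV response without writing it to a file
ts = pd.read_csv(StringIO(r.text), parse_dates=['datetime'])
ts.set_index('datetime', inplace=True)
ts.head()
```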
[
[
"### Open timeseries dataset with pandas\nTime-series data can be opened, processed, and plotted easily with the `Pandas` library. You only need to specify the `datetime` column to automatically convert dates from string to a datetime object. ",
"_____no_output_____"
]
],
[
[
"# read data into Pandas dataframe\nds = pd.read_csv('timeseries_s2.csv', parse_dates=['datetime'])\n\n# set index to datetime column\nds.set_index('datetime', inplace=True)\n\n# show dataframe\nds",
"_____no_output_____"
]
],
[
[
"### Plot NDVI data",
"_____no_output_____"
]
],
[
[
"ds.plot()",
"_____no_output_____"
]
],
[
[
"### Output CSV file",
"_____no_output_____"
]
],
[
[
"!cat timeseries_s2.csv",
"datetime,NDVI\r\r\n2018-04-02T10:24:35Z,0.7012165\r\r\n2018-04-04T10:10:21Z,0.22945666\r\r\n2018-04-07T10:20:20Z,0.8333333\r\r\n2018-04-09T10:13:43Z,0.63461536\r\r\n2018-04-12T10:20:24Z,0.10066674\r\r\n2018-04-14T10:15:36Z,0.5091714\r\r\n2018-04-17T10:20:21Z,0.49720672\r\r\n2018-04-19T10:14:57Z,0.68678766\r\r\n2018-04-22T10:21:15Z,0.67164177\r\r\n2018-04-24T10:15:26Z,0.0168028\r\r\n2018-04-27T10:20:22Z,0.77434456\r\r\n2018-04-29T10:12:58Z,0.2144858\r\r\n"
]
],
[
[
"## Time-series aggregated over area",
"_____no_output_____"
]
],
[
[
"# DAPA URL\nurl = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/timeseries/area')\n\n# Parameters for this request\nparams = {\n #'point': '11.49,48.05',\n 'bbox': '11.49,48.05,11.66,48.22',\n 'aggregate': 'min,max,avg',\n 'time': '2018-04-01T00:00:00Z/2018-05-01T00:00:00Z',\n 'fields': 'NDVI=(B08-B04)/(B08%2BB04)' # Please note: + signs need to be URL encoded -> %2B\n}\n\nparams_str = \"&\".join(\"%s=%s\" % (k, v) for k,v in params.items())\nr = requests.get(url, params=params_str)\n\nprint('DAPA path: {}'.format(r.url.replace(service_url, '')))\nprint('Status code: {}'.format(r.status_code))",
"DAPA path: /oapi/collections/S2L2A/dapa/timeseries/area?bbox=11.49,48.05,11.66,48.22&aggregate=min,max,avg&time=2018-04-01T00:00:00Z/2018-05-01T00:00:00Z&fields=NDVI=(B08-B04)/(B08%2BB04)\nStatus code: 200\n"
],
[
"# read data into Pandas dataframe\nfrom io import StringIO\nds = pd.read_csv(StringIO(r.text), parse_dates=['datetime'])\n\n# set index to datetime column\nds.set_index('datetime', inplace=True)\n\n# show dataframe\nds",
"_____no_output_____"
],
[
"ds.plot()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c5c08c2f7c7d408f7a1ba07d93ea98536394c5 | 603,011 | ipynb | Jupyter Notebook | XGBoost.ipynb | OmkarMetri/Sentence-to-Sentence-Similarity | 38f151f3e800e35d97d7d454934bc6a3951821a8 | [
"MIT"
] | 2 | 2019-06-02T16:03:29.000Z | 2020-02-23T11:22:13.000Z | XGBoost.ipynb | OmkarMetri/Sentence-to-Sentence-Similarity | 38f151f3e800e35d97d7d454934bc6a3951821a8 | [
"MIT"
] | null | null | null | XGBoost.ipynb | OmkarMetri/Sentence-to-Sentence-Similarity | 38f151f3e800e35d97d7d454934bc6a3951821a8 | [
"MIT"
] | null | null | null | 556.28321 | 474,100 | 0.93863 | [
[
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport os\nimport gc\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\npal = sns.color_palette()\n",
"_____no_output_____"
],
[
"df_train = pd.read_csv('train.csv')\ndf_train.head()",
"_____no_output_____"
],
[
"print('Total number of question pairs for training: {}'.format(len(df_train)))\nprint('Duplicate pairs: {}%'.format(round(df_train['is_duplicate'].mean()*100, 2)))\nqids = pd.Series(df_train['qid1'].tolist() + df_train['qid2'].tolist())\nprint('Total number of questions in the training data: {}'.format(len(\n np.unique(qids))))\nprint('Number of questions that appear multiple times: {}'.format(np.sum(qids.value_counts() > 1)))\n\nplt.figure(figsize=(12, 5))\nplt.hist(qids.value_counts(), bins=50)\nplt.yscale('log', nonposy='clip')\nplt.title('Log-Histogram of question appearance counts')\nplt.xlabel('Number of occurences of question')\nplt.ylabel('Number of questions')\nprint()",
"Total number of question pairs for training: 404290\nDuplicate pairs: 36.92%\nTotal number of questions in the training data: 537933\nNumber of questions that appear multiple times: 111780\n\n"
],
[
"df_test = pd.read_csv('test.csv')\ndf_test.head()",
"_____no_output_____"
],
[
"print('Total number of question pairs for testing: {}'.format(len(df_test)))",
"Total number of question pairs for testing: 200000\n"
],
[
"train_qs = pd.Series(df_train['question1'].tolist() + df_train['question2'].tolist()).astype(str)\ntest_qs = pd.Series(df_test['question1'].tolist() + df_test['question2'].tolist()).astype(str)\n\ndist_train = train_qs.apply(len)\ndist_test = test_qs.apply(len)\nplt.figure(figsize=(15, 10))\nplt.hist(dist_train, bins=200, range=[0, 200], color=pal[2], normed=True, label='train')\nplt.hist(dist_test, bins=200, range=[0, 200], color=pal[1], normed=True, alpha=0.5, label='test')\nplt.title('Normalised histogram of character count in questions', fontsize=15)\nplt.legend()\nplt.xlabel('Number of characters', fontsize=15)\nplt.ylabel('Probability', fontsize=15)\n\nprint('mean-train {:.2f} std-train {:.2f} mean-test {:.2f} std-test {:.2f} max-train {:.2f} max-test {:.2f}'.format(dist_train.mean(), \n dist_train.std(), dist_test.mean(), dist_test.std(), dist_train.max(), dist_test.max()))",
"/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n"
]
],
[
[
"We can see that most questions have anywhere from 15 to 150 characters in them. It seems that the test distribution is a little different from the train one, but not too much so.",
"_____no_output_____"
]
],
[
[
"dist_train = train_qs.apply(lambda x: len(x.split(' ')))\ndist_test = test_qs.apply(lambda x: len(x.split(' ')))\n\nplt.figure(figsize=(15, 10))\nplt.hist(dist_train, bins=50, range=[0, 50], color=pal[2], normed=True, label='train')\nplt.hist(dist_test, bins=50, range=[0, 50], color=pal[1], normed=True, alpha=0.5, label='test')\nplt.title('Normalised histogram of word count in questions', fontsize=15)\nplt.legend()\nplt.xlabel('Number of words', fontsize=15)\nplt.ylabel('Probability', fontsize=15)\n\nprint('mean-train {:.2f} std-train {:.2f} mean-test {:.2f} std-test {:.2f} max-train {:.2f} max-test {:.2f}'.format(dist_train.mean(), \n dist_train.std(), dist_test.mean(), dist_test.std(), dist_train.max(), dist_test.max()))",
"/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n"
]
],
[
[
"### WordCloud",
"_____no_output_____"
]
],
[
[
"from wordcloud import WordCloud\ncloud = WordCloud(width=1440, height=1080).generate(\" \".join(train_qs.astype(str)))\nplt.figure(figsize=(20, 15))\nplt.imshow(cloud)\nplt.axis('off')",
"_____no_output_____"
]
],
[
[
"## Semantic Analysis",
"_____no_output_____"
]
],
[
[
"qmarks = np.mean(train_qs.apply(lambda x: '?' in x))\nmath = np.mean(train_qs.apply(lambda x: '[math]' in x))\nfullstop = np.mean(train_qs.apply(lambda x: '.' in x))\ncapital_first = np.mean(train_qs.apply(lambda x: x[0].isupper()))\ncapitals = np.mean(train_qs.apply(lambda x: max([y.isupper() for y in x])))\nnumbers = np.mean(train_qs.apply(lambda x: max([y.isdigit() for y in x])))\n\nprint('Questions with question marks: {:.2f}%'.format(qmarks * 100))\nprint('Questions with [math] tags: {:.2f}%'.format(math * 100))\nprint('Questions with full stops: {:.2f}%'.format(fullstop * 100))\nprint('Questions with capitalised first letters: {:.2f}%'.format(capital_first * 100))\nprint('Questions with capital letters: {:.2f}%'.format(capitals * 100))\nprint('Questions with numbers: {:.2f}%'.format(numbers * 100))",
"Questions with question marks: 99.87%\nQuestions with [math] tags: 0.12%\nQuestions with full stops: 6.31%\nQuestions with capitalised first letters: 99.81%\nQuestions with capital letters: 99.95%\nQuestions with numbers: 11.83%\n"
]
],
[
[
"# Initial Feature Analysis\n\nBefore we create a model, we should take a look at how powerful some features are. I will start off with the word share feature from the benchmark model.",
"_____no_output_____"
]
],
[
[
"from nltk.corpus import stopwords\n\nstops = set(stopwords.words(\"english\"))\n\ndef word_match_share(row):\n q1words = {}\n q2words = {}\n for word in str(row['question1']).lower().split():\n if word not in stops:\n q1words[word] = 1\n for word in str(row['question2']).lower().split():\n if word not in stops:\n q2words[word] = 1\n if len(q1words) == 0 or len(q2words) == 0:\n # The computer-generated chaff includes a few questions that are nothing but stopwords\n return 0\n shared_words_in_q1 = [w for w in q1words.keys() if w in q2words]\n shared_words_in_q2 = [w for w in q2words.keys() if w in q1words]\n R = (len(shared_words_in_q1) + len(shared_words_in_q2))/(len(q1words) + len(q2words))\n return R\n\nplt.figure(figsize=(15, 5))\ntrain_word_match = df_train.apply(word_match_share, axis=1, raw=True)\nplt.hist(train_word_match[df_train['is_duplicate'] == 0], bins=20, normed=True, label='Not Duplicate')\nplt.hist(train_word_match[df_train['is_duplicate'] == 1], bins=20, normed=True, alpha=0.7, label='Duplicate')\nplt.legend()\nplt.title('Label distribution over word_match_share', fontsize=15)\nplt.xlabel('word_match_share', fontsize=15)",
"/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n"
],
[
"from collections import Counter\n\n# If a word appears only once, we ignore it completely (likely a typo)\n# Epsilon defines a smoothing constant, which makes the effect of extremely rare words smaller\ndef get_weight(count, eps=10000, min_count=2):\n if count < min_count:\n return 0\n else:\n return 1 / (count + eps)\n\neps = 5000 \nwords = (\" \".join(train_qs)).lower().split()\ncounts = Counter(words)\nweights = {word: get_weight(count) for word, count in counts.items()}",
"_____no_output_____"
],
[
"print('Most common words and weights: \\n')\nprint(sorted(weights.items(), key=lambda x: x[1] if x[1] > 0 else 9999)[:10])\nprint('\\nLeast common words and weights: ')\n(sorted(weights.items(), key=lambda x: x[1], reverse=True)[:10])",
"Most common words and weights: \n\n[('the', 2.5891040146646852e-06), ('what', 3.115623919267953e-06), ('is', 3.5861702928825277e-06), ('how', 4.366449945201053e-06), ('i', 4.4805878531263305e-06), ('a', 4.540645588989843e-06), ('to', 4.671434644293609e-06), ('in', 4.884625153865692e-06), ('of', 5.920242493132519e-06), ('do', 6.070908207867897e-06)]\n\nLeast common words and weights: \n"
],
[
"def tfidf_word_match_share(row):\n q1words = {}\n q2words = {}\n for word in str(row['question1']).lower().split():\n if word not in stops:\n q1words[word] = 1\n for word in str(row['question2']).lower().split():\n if word not in stops:\n q2words[word] = 1\n if len(q1words) == 0 or len(q2words) == 0:\n # The computer-generated chaff includes a few questions that are nothing but stopwords\n return 0\n \n shared_weights = [weights.get(w, 0) for w in q1words.keys() if w in q2words] + [weights.get(w, 0) for w in q2words.keys() if w in q1words]\n total_weights = [weights.get(w, 0) for w in q1words] + [weights.get(w, 0) for w in q2words]\n \n R = np.sum(shared_weights) / np.sum(total_weights)\n return R",
"_____no_output_____"
],
[
"plt.figure(figsize=(15, 5))\ntfidf_train_word_match = df_train.apply(tfidf_word_match_share, axis=1, raw=True)\nplt.hist(tfidf_train_word_match[df_train['is_duplicate'] == 0].fillna(0), bins=20, normed=True, label='Not Duplicate')\nplt.hist(tfidf_train_word_match[df_train['is_duplicate'] == 1].fillna(0), bins=20, normed=True, alpha=0.7, label='Duplicate')\nplt.legend()\nplt.title('Label distribution over tfidf_word_match_share', fontsize=15)\nplt.xlabel('word_match_share', fontsize=15)",
"/home/manjunath/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: invalid value encountered in double_scalars\n/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n/home/manjunath/anaconda3/lib/python3.7/site-packages/matplotlib/axes/_axes.py:6521: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n alternative=\"'density'\", removal=\"3.1\")\n"
],
[
"from sklearn.metrics import roc_auc_score\nprint('Original AUC:', roc_auc_score(df_train['is_duplicate'], train_word_match))\nprint(' TFIDF AUC:', roc_auc_score(df_train['is_duplicate'], tfidf_train_word_match.fillna(0)))",
"Original AUC: 0.7804327049353577\n TFIDF AUC: 0.7704802292218704\n"
]
],
[
[
"## Rebalancing the Data\nHowever, before I do this, I would like to rebalance the data that XGBoost receives, since we have 37% positive class in our training data, and only 17% in the test data. By re-balancing the data so our training set has 17% positives, we can ensure that XGBoost outputs probabilities that will better match the data, and should get a better score (since LogLoss looks at the probabilities themselves and not just the order of the predictions like AUC)",
"_____no_output_____"
]
],
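As a worked example of the rebalancing arithmetic (a sketch that assumes the ~36.92% duplicate rate reported in the EDA above and the target rate p = 0.165 used in the code): the scaling factor comes out to about 1.24, so the negatives are doubled once and roughly a quarter of the doubled set is appended again, which lands the final positive rate near 0.19, matching the value printed by the oversampling cell below.

```python
# Worked arithmetic for the negative-class oversampling (values assumed from the EDA above)
pos_rate = 0.3692          # observed share of duplicate pairs in the training data
p = 0.165                  # target positive rate
scale = (pos_rate / p) - 1 # ~1.24: double the negatives once, then append ~24% of the doubled set
neg_multiplier = 2 * scale # negatives end up ~2.48x their original count
final_pos_rate = pos_rate / (pos_rate + neg_multiplier * (1 - pos_rate))
print(round(scale, 3), round(neg_multiplier, 3), round(final_pos_rate, 3))  # ~1.238, ~2.475, ~0.191
```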
[
[
"# First we create our training and testing data\nx_train = pd.DataFrame()\nx_test = pd.DataFrame()\nx_train['word_match'] = train_word_match\nx_train['tfidf_word_match'] = tfidf_train_word_match\nx_test['word_match'] = df_test.apply(word_match_share, axis=1, raw=True)\nx_test['tfidf_word_match'] = df_test.apply(tfidf_word_match_share, axis=1, raw=True)\n\ny_train = df_train['is_duplicate'].values",
"/home/manjunath/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: invalid value encountered in double_scalars\n/home/manjunath/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:17: RuntimeWarning: invalid value encountered in long_scalars\n"
],
[
"pos_train = x_train[y_train == 1]\nneg_train = x_train[y_train == 0]\n\n# Now we oversample the negative class\n# There is likely a much more elegant way to do this...\np = 0.165\nscale = ((len(pos_train) / (len(pos_train) + len(neg_train))) / p) - 1\nwhile scale > 1:\n neg_train = pd.concat([neg_train, neg_train])\n scale -=1\nneg_train = pd.concat([neg_train, neg_train[:int(scale * len(neg_train))]])\nprint(len(pos_train) / (len(pos_train) + len(neg_train)))\n\nx_train = pd.concat([pos_train, neg_train])\ny_train = (np.zeros(len(pos_train)) + 1).tolist() + np.zeros(len(neg_train)).tolist()\ndel pos_train, neg_train",
"0.19124366100096607\n"
],
[
"# Finally, we split some of the data off for validation\nfrom sklearn.model_selection import train_test_split\n\nx_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.2, random_state=4242)",
"_____no_output_____"
]
],
[
[
"## XGBoost",
"_____no_output_____"
]
],
[
[
"import xgboost as xgb\n\n# Set our parameters for xgboost\nparams = {}\nparams['objective'] = 'binary:logistic'\nparams['eval_metric'] = 'logloss'\nparams['eta'] = 0.02\nparams['max_depth'] = 4\n\nd_train = xgb.DMatrix(x_train, label=y_train)\nd_valid = xgb.DMatrix(x_valid, label=y_valid)\n\nwatchlist = [(d_train, 'train'), (d_valid, 'valid')]\n\nbst = xgb.train(params, d_train, 400, watchlist, early_stopping_rounds=50, verbose_eval=10)",
"[0]\ttrain-logloss:0.68269\tvalid-logloss:0.683374\nMultiple eval metrics have been passed: 'valid-logloss' will be used for early stopping.\n\nWill train until valid-logloss hasn't improved in 50 rounds.\n[10]\ttrain-logloss:0.602603\tvalid-logloss:0.602534\n[20]\ttrain-logloss:0.545461\tvalid-logloss:0.545854\n[30]\ttrain-logloss:0.503691\tvalid-logloss:0.504447\n[40]\ttrain-logloss:0.471857\tvalid-logloss:0.473559\n[50]\ttrain-logloss:0.448647\tvalid-logloss:0.450074\n[60]\ttrain-logloss:0.430427\tvalid-logloss:0.431971\n[70]\ttrain-logloss:0.416376\tvalid-logloss:0.418081\n[80]\ttrain-logloss:0.405567\tvalid-logloss:0.407152\n[90]\ttrain-logloss:0.396786\tvalid-logloss:0.398582\n[100]\ttrain-logloss:0.3901\tvalid-logloss:0.391865\n[110]\ttrain-logloss:0.384277\tvalid-logloss:0.386504\n[120]\ttrain-logloss:0.380254\tvalid-logloss:0.382223\n[130]\ttrain-logloss:0.376898\tvalid-logloss:0.378791\n[140]\ttrain-logloss:0.374257\tvalid-logloss:0.376091\n[150]\ttrain-logloss:0.371744\tvalid-logloss:0.373982\n[160]\ttrain-logloss:0.370124\tvalid-logloss:0.372241\n[170]\ttrain-logloss:0.368652\tvalid-logloss:0.37084\n[180]\ttrain-logloss:0.367913\tvalid-logloss:0.369715\n[190]\ttrain-logloss:0.366871\tvalid-logloss:0.368881\n[200]\ttrain-logloss:0.36575\tvalid-logloss:0.368148\n[210]\ttrain-logloss:0.365308\tvalid-logloss:0.367528\n[220]\ttrain-logloss:0.365011\tvalid-logloss:0.367085\n[230]\ttrain-logloss:0.364313\tvalid-logloss:0.366691\n[240]\ttrain-logloss:0.363959\tvalid-logloss:0.366344\n[250]\ttrain-logloss:0.363817\tvalid-logloss:0.366055\n[260]\ttrain-logloss:0.363658\tvalid-logloss:0.365773\n[270]\ttrain-logloss:0.363366\tvalid-logloss:0.365585\n[280]\ttrain-logloss:0.363265\tvalid-logloss:0.365279\n[290]\ttrain-logloss:0.363065\tvalid-logloss:0.365113\n[300]\ttrain-logloss:0.362607\tvalid-logloss:0.365007\n[310]\ttrain-logloss:0.362527\tvalid-logloss:0.364859\n[320]\ttrain-logloss:0.362374\tvalid-logloss:0.364667\n[330]\ttrain-logloss:0.362283\tvalid-logloss:0.36456\n[340]\ttrain-logloss:0.362097\tvalid-logloss:0.364431\n[350]\ttrain-logloss:0.361984\tvalid-logloss:0.364321\n[360]\ttrain-logloss:0.361897\tvalid-logloss:0.364218\n[370]\ttrain-logloss:0.361728\tvalid-logloss:0.36413\n[380]\ttrain-logloss:0.361598\tvalid-logloss:0.364054\n[390]\ttrain-logloss:0.361526\tvalid-logloss:0.36398\n[399]\ttrain-logloss:0.36143\tvalid-logloss:0.36393\n"
],
[
"d_test = xgb.DMatrix(x_test)\np_test = bst.predict(d_test)\n\nsub = pd.DataFrame()\nsub['test_id'] = df_test['test_id']\nsub['is_duplicate'] = p_test\nsub.to_csv('simple_xgb.csv', index=False)",
"_____no_output_____"
],
[
"sub.head()",
"_____no_output_____"
],
[
"def logloss(ptest):\n s = 0\n for res in ptest:\n s+=np.log(res)\n return -s\n\nprint(logloss(p_test)/len(p_test))",
"3.9213494079892337\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c5cb248112e75b78754a1a2844e03462dbf96a | 14,419 | ipynb | Jupyter Notebook | nbs/109_models.OmniScaleCNN.ipynb | ansari1375/tsai | 0e9a4537452a72392900667a713ce759f19f88ea | [
"Apache-2.0"
] | 1 | 2022-01-02T18:21:27.000Z | 2022-01-02T18:21:27.000Z | nbs/109_models.OmniScaleCNN.ipynb | ansari1375/tsai | 0e9a4537452a72392900667a713ce759f19f88ea | [
"Apache-2.0"
] | 31 | 2021-12-01T23:08:51.000Z | 2021-12-29T02:59:49.000Z | nbs/109_models.OmniScaleCNN.ipynb | ansari1375/tsai | 0e9a4537452a72392900667a713ce759f19f88ea | [
"Apache-2.0"
] | null | null | null | 46.66343 | 2,820 | 0.610445 | [
[
[
"# default_exp models.OmniScaleCNN",
"_____no_output_____"
]
],
[
[
"# OmniScaleCNN\n\n> This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on:\n\n* Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536.\n* Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py",
"_____no_output_____"
]
],
[
[
"#export\nfrom tsai.imports import *\nfrom tsai.models.layers import *\nfrom tsai.models.utils import *",
"_____no_output_____"
],
[
"#export\n#This is an unofficial PyTorch implementation by Ignacio Oguiza - [email protected] based on:\n# Rußwurm, M., & Körner, M. (2019). Self-attention for raw optical satellite time series classification. arXiv preprint arXiv:1910.10536.\n# Official implementation: https://github.com/dl4sits/BreizhCrops/blob/master/breizhcrops/models/OmniScaleCNN.py\n\nclass SampaddingConv1D_BN(Module):\n def __init__(self, in_channels, out_channels, kernel_size):\n self.padding = nn.ConstantPad1d((int((kernel_size - 1) / 2), int(kernel_size / 2)), 0)\n self.conv1d = torch.nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size)\n self.bn = nn.BatchNorm1d(num_features=out_channels)\n\n def forward(self, x):\n x = self.padding(x)\n x = self.conv1d(x)\n x = self.bn(x)\n return x\n\n\nclass build_layer_with_layer_parameter(Module):\n \"\"\"\n formerly build_layer_with_layer_parameter\n \"\"\"\n def __init__(self, layer_parameters):\n \"\"\"\n layer_parameters format\n [in_channels, out_channels, kernel_size,\n in_channels, out_channels, kernel_size,\n ..., nlayers\n ]\n \"\"\"\n self.conv_list = nn.ModuleList()\n\n for i in layer_parameters:\n # in_channels, out_channels, kernel_size\n conv = SampaddingConv1D_BN(i[0], i[1], i[2])\n self.conv_list.append(conv)\n\n def forward(self, x):\n\n conv_result_list = []\n for conv in self.conv_list:\n conv_result = conv(x)\n conv_result_list.append(conv_result)\n\n result = F.relu(torch.cat(tuple(conv_result_list), 1))\n return result\n\n\nclass OmniScaleCNN(Module):\n def __init__(self, c_in, c_out, seq_len, layers=[8 * 128, 5 * 128 * 256 + 2 * 256 * 128], few_shot=False):\n\n receptive_field_shape = seq_len//4\n layer_parameter_list = generate_layer_parameter_list(1,receptive_field_shape, layers, in_channel=c_in)\n self.few_shot = few_shot\n self.layer_parameter_list = layer_parameter_list\n self.layer_list = []\n for i in range(len(layer_parameter_list)):\n layer = build_layer_with_layer_parameter(layer_parameter_list[i])\n self.layer_list.append(layer)\n self.net = nn.Sequential(*self.layer_list)\n self.gap = GAP1d(1)\n out_put_channel_number = 0\n for final_layer_parameters in layer_parameter_list[-1]:\n out_put_channel_number = out_put_channel_number + final_layer_parameters[1]\n self.hidden = nn.Linear(out_put_channel_number, c_out)\n\n def forward(self, x):\n x = self.net(x)\n x = self.gap(x)\n if not self.few_shot: x = self.hidden(x)\n return x\n\ndef get_Prime_number_in_a_range(start, end):\n Prime_list = []\n for val in range(start, end + 1):\n prime_or_not = True\n for n in range(2, val):\n if (val % n) == 0:\n prime_or_not = False\n break\n if prime_or_not:\n Prime_list.append(val)\n return Prime_list\n\n\ndef get_out_channel_number(paramenter_layer, in_channel, prime_list):\n out_channel_expect = max(1, int(paramenter_layer / (in_channel * sum(prime_list))))\n return out_channel_expect\n\n\ndef generate_layer_parameter_list(start, end, layers, in_channel=1):\n prime_list = get_Prime_number_in_a_range(start, end)\n\n layer_parameter_list = []\n for paramenter_number_of_layer in layers:\n out_channel = get_out_channel_number(paramenter_number_of_layer, in_channel, prime_list)\n\n tuples_in_layer = []\n for prime in prime_list:\n tuples_in_layer.append((in_channel, out_channel, prime))\n in_channel = len(prime_list) * out_channel\n\n layer_parameter_list.append(tuples_in_layer)\n\n tuples_in_layer_last = []\n first_out_channel = len(prime_list) * get_out_channel_number(layers[0], 1, prime_list)\n 
tuples_in_layer_last.append((in_channel, first_out_channel, 1))\n tuples_in_layer_last.append((in_channel, first_out_channel, 2))\n layer_parameter_list.append(tuples_in_layer_last)\n return layer_parameter_list",
"_____no_output_____"
],
[
"bs = 16\nc_in = 3\nseq_len = 12\nc_out = 2\nxb = torch.rand(bs, c_in, seq_len)\nm = create_model(OmniScaleCNN, c_in, c_out, seq_len)\ntest_eq(OmniScaleCNN(c_in, c_out, seq_len)(xb).shape, [bs, c_out])\nm",
"_____no_output_____"
],
[
"#hide\nfrom tsai.imports import *\nfrom tsai.export import *\nnb_name = get_nb_name()\n# nb_name = \"109_models.OmniScaleCNN.ipynb\"\ncreate_scripts(nb_name);",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c5edb21af4943bdef1b6e5127e3cba6c42e939 | 878,157 | ipynb | Jupyter Notebook | NLP-Reddit.ipynb | daviscvance/NLP-Reddit | 66cba29eb6002a8f3b86b20fdca75ca787b0b7ff | [
"MIT"
] | 3 | 2020-11-28T09:07:15.000Z | 2021-11-17T11:04:54.000Z | NLP-Reddit.ipynb | daviscvance/NLP-Reddit | 66cba29eb6002a8f3b86b20fdca75ca787b0b7ff | [
"MIT"
] | null | null | null | NLP-Reddit.ipynb | daviscvance/NLP-Reddit | 66cba29eb6002a8f3b86b20fdca75ca787b0b7ff | [
"MIT"
] | 2 | 2019-05-25T15:35:43.000Z | 2019-08-22T01:22:41.000Z | 288.582649 | 347,373 | 0.796823 | [
[
[
"# Natural Language Processing - Unsupervised Topic Modeling with Reddit Posts\n\n###### This project dives into multiple techniques used for NLP and subtopics such as dimensionality reduction, topic modeling, and clustering.\n\n1. [Google BigQuery](#Google-BigQuery)\n1. [Exploratory Data Analysis (EDA) & Preprocessing](#Exploratory-Data-Analysis-&-Preprocessing)\n1. [Singular Value Decomposition (SVD)](#Singular-Value-Decomposition-(SVD))\n1. [Latent Semantic Analysis (LSA - applied SVD)](#Latent-Semantic-Analysis-(LSA))\n1. [Similarity Scoring Metrics](#sim)\n1. [KMeans Clustering](#km)\n1. [Latent Dirichlet Allocation (LDA)](#lda)\n1. [pyLDAvis - interactive d3 for LDA](#py)\n - This was separated out in a new notebook to quickly view visual (load files and see visualization)",
"_____no_output_____"
]
],
[
[
"# Easter Egg to start your imports\n#import this \nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport logging\nimport pickle\nimport sys\nimport os\n\nfrom google.cloud import bigquery",
"_____no_output_____"
],
[
"import warnings\ndef warn(*args, **kwargs):\n pass\nwarnings.warn = warn\n\n# Logging is the verbose for Gensim\nlogging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)\n\n#plt.style.available # Style options\nplt.style.use('fivethirtyeight')\nsns.set_context(\"talk\")\n%matplotlib inline\n\npd.options.display.max_rows = 99\npd.options.display.max_columns = 99\npd.options.display.max_colwidth = 99\n#pd.describe_option('display') # Option settings\n\nfloat_formatter = lambda x: \"%.3f\" % x if x>0 else \"%.0f\" % x\nnp.set_printoptions(formatter={'float_kind':float_formatter})\npd.set_option('display.float_format', float_formatter)",
"_____no_output_____"
]
],
[
[
"## Google BigQuery",
"_____no_output_____"
]
],
[
[
"%%time\n\npath = \"data/posts.pkl\"\nkey = 'fakeKey38i7-4259.json'\n\nif not os.path.isdir('data/'):\n os.makedirs('data/')\n\n# Set GOOGLE_APPLICATION_CREDENTIALS before querying\ndef bigQuery(QUERY, key=key):\n \"\"\" \n Instantiates a client using a key, \n Requests a SQL query from the Big Query API,\n Returns the queried table as a DataFrame\n \"\"\"\n client = bigquery.Client.from_service_account_json(key)\n job_config = bigquery.QueryJobConfig()\n job_config.use_legacy_sql = False\n query_job = client.query(QUERY, job_config=job_config)\n return query_job.result().to_dataframe()\n\n# SQL query for Google BigQuery\nQUERY = (\n \"\"\"\n SELECT created_utc, subreddit, author, domain, url, num_comments, \n score, title, selftext, id, gilded, retrieved_on, over_18 \n FROM `fh-bigquery.reddit_posts.*` \n WHERE _table_suffix IN ( '2016_06' ) \n AND LENGTH(selftext) > 550\n AND LENGTH(title) > 15\n AND LENGTH(title) < 345\n AND score > 8\n AND is_self = true \n AND NOT subreddit IS NULL \n AND NOT subreddit = 'de' \n AND NOT subreddit = 'test' \n AND NOT subreddit = 'tr' \n AND NOT subreddit = 'A6XHE' \n AND NOT subreddit = 'es' \n AND NOT subreddit = 'removalbot'\n AND NOT subreddit = 'tldr'\n AND NOT selftext LIKE '[removed]' \n AND NOT selftext LIKE '[deleted]' \n ;\"\"\")\n \n#df = bigQuery(QUERY)\n#df.to_pickle(path)\n\ndf = pd.read_pickle(path)\ndf.info(memory_usage='Deep')",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 146996 entries, 0 to 146995\nData columns (total 13 columns):\ncreated_utc 146996 non-null int64\nsubreddit 146996 non-null object\nauthor 146996 non-null object\ndomain 146996 non-null object\nurl 146996 non-null object\nnum_comments 146996 non-null int64\nscore 146996 non-null int64\ntitle 146996 non-null object\nselftext 146996 non-null object\nid 146996 non-null object\ngilded 146996 non-null int64\nretrieved_on 146996 non-null int64\nover_18 146996 non-null bool\ndtypes: bool(1), int64(5), object(7)\nmemory usage: 13.6+ MB\nCPU times: user 443 ms, sys: 265 ms, total: 709 ms\nWall time: 713 ms\n"
]
],
[
[
"## Exploratory Data Analysis & Preprocessing",
"_____no_output_____"
]
],
[
[
"# Exploring data by length of .title or .selftext\ndf[[ True if 500 < len(x) < 800 else False for x in df.selftext ]].sample(1, replace=False)",
"_____no_output_____"
],
[
"%%time\nrun = False\npath = '/home/User/data/gif'\n\n# Run through various selftext lengths and save the plots of the distribution of the metric\n# Gif visual after piecing all the frames together\nwhile run==True:\n for i in range(500,20000,769):\n tempath = os.path.join(path, f\"textlen{i}.png\") # PEP498 requires python 3.6\n print(tempath)\n\n # Look at histogram of posts with len<i\n cuts = [len(x) for x in df.selftext if len(x)<i]\n\n # Save plot\n plt.figure()\n plt.hist(cuts, bins=30) #can change bins based on function of i\n plt.savefig(tempath, dpi=120, format='png', bbox_inches='tight', pad_inches=0.1)\n plt.close()",
"_____no_output_____"
],
[
"# Bin Settings\ndef binSize(lower, upper, buffer=.05):\n bins = upper - lower\n buffer = int(buffer*bins)\n bins -= buffer\n print('Lower Bound:', lower)\n print('Upper Bound:', upper)\n return bins, lower, upper\n\n# Plotting \ndef plotHist(tmp, bins, title, xlabel, ylabel, l, u):\n plt.figure(figsize=(10,6))\n plt.hist(tmp, bins=bins)\n plt.title(title)\n plt.xlabel(xlabel)\n plt.ylabel(ylabel)\n plt.xlim(lower + l, upper + u)\n print('\\nLocal Max %s:' % xlabel, max(tmp))\n print('Local Average %s:' % xlabel, int(np.mean(tmp)))\n print('Local Median %s:' % xlabel, int(np.median(tmp)))",
"_____no_output_____"
],
[
"# Create the correct bin size\nbins, lower, upper = binSize(lower=0, upper=175)\n\n# Plot distribution of lower scores\ntmp = df[[ True if lower <= x <= upper else False for x in df['score'] ]]['score']\nplotHist(tmp=tmp, bins=bins, title='Lower Post Scores', xlabel='Scoring', ylabel='Frequency', l=5, u=5);",
"Lower Bound: 0\nUpper Bound: 175\n\nLocal Max Scoring: 175\nLocal Average Scoring: 31\nLocal Median Scoring: 19\n"
],
[
"# Titles should be less than 300 charcters \n# Outliers are due to unicode translation\n# Plot lengths of titles\ntmp = [ len(x) for x in df.title ]\nbins, lower, upper = binSize(lower=0, upper=300, buffer=-.09)\n\nplotHist(tmp=tmp, bins=bins, title='Lengths of Titles', xlabel='Length', ylabel='Frequency', l=10, u=0);",
"Lower Bound: 0\nUpper Bound: 300\n\nLocal Max Lengths: 343\nLocal Average Lengths: 58\nLocal Median Lengths: 49\n"
],
[
"# Slice lengths of texts and plot histogram\nbins, lower, upper = binSize(lower=500, upper=5000, buffer=.011)\ntmp = [len(x) for x in df.selftext if lower <= len(x) <= upper]\n\nplotHist(tmp=tmp, bins=bins, title='Length of Self Posts Under 5k', xlabel='Length', ylabel='Frequency', l=10, u=0)\nplt.ylim(0, 200);\n\n# Anomalies could be attributed to bots or duplicate reposts",
"Lower Bound: 500\nUpper Bound: 5000\n\nLocal Max Lengths: 5000\nLocal Average Lengths: 1479\nLocal Median Lengths: 1128\n"
],
[
"# Posts per Subreddit\ntmp = df.groupby('subreddit')['id'].nunique().sort_values(ascending=False)\ntop = 100\ns = sum(tmp)\nprint('Subreddits:', len(tmp))\nprint('Total Posts:', s)\nprint('Total Posts from Top %s:' % top, sum(tmp[:top]), ', %.3f of Total' % (sum(tmp[:top])/s))\nprint('Total Posts from Top 10:', sum(tmp[:10]), ', %.3f of Total' % (sum(tmp[:10])/s))\nprint('\\nTop 10 Contributors:', tmp[:10])\n\n\n\nplt.figure(figsize=(10,6))\nplt.plot(tmp, 'go')\nplt.xticks('')\nplt.title('Top %s Subreddit Post Counts' % top)\nplt.xlabel('Subreddits, Ranked')\nplt.ylabel('Post Count') \nplt.xlim(-2, top+1)\nplt.ylim(0, 2650);",
"Subreddits: 8497\nTotal Posts: 146996\nTotal Posts from Top 100: 47498 , 0.323 of Total\nTotal Posts from Top 10: 13683 , 0.093 of Total\n\nTop 10 Contributors: subreddit\nnoveltranslations 2583\nThe_Donald 2241\nrelationships 2129\nJUSTNOMIL 1136\nexmormon 1065\nraisedbynarcissists 1015\nasoiaf 924\nOverwatch 870\nlegaladvice 869\nnosleep 851\nName: id, dtype: int64\n"
],
[
"path1 = 'data/origin.pkl'\n#path2 = 'data/grouped.pkl'\n\n# Save important data\norigin_df = df.loc[:,['created_utc', 'subreddit', 'author', 'title', 'selftext', 'id']] \\\n .copy().reset_index().rename(columns={\"index\": \"position\"})\nprint(origin_df.info())\norigin_df.to_pickle(path1)\n\nposts_df = origin_df.loc[:,['title', 'selftext']]\nposts_df['text'] = posts_df.title + ' ' + df.selftext\n#del origin_df\n\n# To group the results later\ndef groupUserPosts(x):\n ''' Group users' id's by post '''\n return pd.Series(dict(ids = \", \".join(x['id']), \n text = \", \".join(x['text'])))\n\n###df = posts_df.groupby('author').apply(groupUserPosts) \n#df.to_pickle(path2)\n\ndf = posts_df.text.to_frame()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 146996 entries, 0 to 146995\nData columns (total 7 columns):\nposition 146996 non-null int64\ncreated_utc 146996 non-null int64\nsubreddit 146996 non-null object\nauthor 146996 non-null object\ntitle 146996 non-null object\nselftext 146996 non-null object\nid 146996 non-null object\ndtypes: int64(2), object(5)\nmemory usage: 7.9+ MB\nNone\n"
],
[
"origin_df.sample(2).drop('author', axis=1)",
"_____no_output_____"
],
[
"%%time\ndef clean_text(df, text_field):\n '''\n Clean all the text data within a certain text column of the dataFrame.\n '''\n df[text_field] = df[text_field].str.replace(r\"http\\S+\", \" \")\n df[text_field] = df[text_field].str.replace(r\"&[a-z]{2,4};\", \"\")\n df[text_field] = df[text_field].str.replace(\"\\\\n\", \" \")\n df[text_field] = df[text_field].str.replace(r\"#f\", \"\")\n df[text_field] = df[text_field].str.replace(r\"[\\’\\'\\`\\\":]\", \"\")\n df[text_field] = df[text_field].str.replace(r\"[^A-Za-z0-9]\", \" \")\n df[text_field] = df[text_field].str.replace(r\" +\", \" \")\n df[text_field] = df[text_field].str.lower()\n \nclean_text(df, 'text')",
"CPU times: user 42.8 s, sys: 564 ms, total: 43.3 s\nWall time: 43.4 s\n"
],
[
"df.sample(3)",
"_____no_output_____"
],
[
"# For exploration of users\ndf[origin_df.author == '<Redacted>'][:3]\n\n# User is a post summarizer and aggregator, added /r/tldr to the blocked list!",
"_____no_output_____"
],
[
"# Slice lengths of texts and plot histogram\nbins, lower, upper = binSize(lower=500, upper=5000, buffer=.015)\ntmp = [len(x) for x in df.text if lower <= len(x) <= upper]\n\nplotHist(tmp=tmp, bins=bins, title='Cleaned - Length of Self Posts Under 5k', \n xlabel='Lengths', ylabel='Frequency', l=0, u=0)\nplt.ylim(0, 185);",
"Lower Bound: 500\nUpper Bound: 5000\n\nLocal Max Lengths: 5000\nLocal Average Lengths: 1531\nLocal Median Lengths: 1185\n"
],
[
"# Download everything for nltk! ('all')\nimport nltk\nnltk.download() # (Change config save path)\nnltk.data.path.append('/home/User/data/')",
"NLTK Downloader\n---------------------------------------------------------------------------\n d) Download l) List u) Update c) Config h) Help q) Quit\n---------------------------------------------------------------------------\nDownloader> c\n\nData Server:\n - URL: <https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml>\n - 7 Package Collections Available\n - 106 Individual Packages Available\n\nLocal Machine:\n - Data directory: /home/Dave/nltk_data\n\n---------------------------------------------------------------------------\n s) Show Config u) Set Server URL d) Set Data Dir m) Main Menu\n---------------------------------------------------------------------------\nConfig> d\n New Directory> /home/Dave/data\n\n---------------------------------------------------------------------------\n s) Show Config u) Set Server URL d) Set Data Dir m) Main Menu\n---------------------------------------------------------------------------\nConfig> m\n\n---------------------------------------------------------------------------\n d) Download l) List u) Update c) Config h) Help q) Quit\n---------------------------------------------------------------------------\nDownloader> q\n"
],
[
"from nltk.corpus import stopwords\n\n# \"stopeng\" is our extended list of stopwords for use in the CountVectorizer\n# I could spend days extending this list for fine tuning results\nstopeng = stopwords.words('english')\nstopeng.extend([x.replace(\"\\'\", \"\") for x in stopeng])\nstopeng.extend(['nbsp', 'also', 'really', 'ive', 'even', 'jon', 'lot', 'could', 'many'])\nstopeng = list(set(stopeng))",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\n\n# Count vectorization for LDA\ncv = CountVectorizer(token_pattern='\\\\w{3,}', max_df=.30, min_df=.0001, \n stop_words=stopeng, ngram_range=(1,1), lowercase=False,\n dtype='uint8')\n\n# Vectorizer object to generate term frequency-inverse document frequency matrix\ntfidf = TfidfVectorizer(token_pattern='\\\\w{3,}', max_df=.30, min_df=.0001, \n stop_words=stopeng, ngram_range=(1,1), lowercase=False,\n sublinear_tf=True, smooth_idf=False, dtype='float32')",
"_____no_output_____"
]
],
[
[
"###### Tokenization is one of the most important steps in NLP, I will explain some of my parameter choices in the README. CountVectorizer was my preferred choice. I used these definitions to help me in the iterative process of building an unsupervised model.\n\n###### The goal of using tf-idf instead of the raw frequencies of occurrence of a token in a given document is to scale down the impact of tokens that occur very frequently in a given corpus and that are hence empirically less informative than features that occur in a small fraction of the training corpus.\n\n###### Smooth = False: The effect of adding “1” to the idf in the equation above is that terms with zero idf, i.e., terms that occur in all documents in a training set, will not be entirely ignored.\n\n###### sublinear_tf = True: “l” (logarithmic), replaces tf with 1 + log(tf)",
"_____no_output_____"
]
],
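To make the vectorizer settings above concrete, here is a small, self-contained sketch on a made-up toy corpus (not part of the Reddit pipeline); it only illustrates how `sublinear_tf` and `smooth_idf` change the weights.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

toy_docs = ["cats chase mice", "cats sleep", "mice sleep sleep sleep"]  # hypothetical mini-corpus

# Raw term frequency vs. sublinear scaling (tf -> 1 + log(tf) dampens repeated terms like "sleep")
raw = TfidfVectorizer(sublinear_tf=False, smooth_idf=False)
sub = TfidfVectorizer(sublinear_tf=True, smooth_idf=False)

print(raw.fit_transform(toy_docs).toarray().round(3))
print(sub.fit_transform(toy_docs).toarray().round(3))

# smooth_idf=False drops the add-one smoothing of document frequencies in the idf computation
print(sorted(raw.vocabulary_), raw.idf_.round(3))
```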
[
[
"%%time\n# Count & tf-idf vectorizer fits the tokenizer and transforms data into new matrix\ncv_vecs = cv.fit_transform(df.text).transpose()\ntf_vecs = tfidf.fit_transform(df.text).transpose()\npickle.dump(cv_vecs, open('data/cv_vecs.pkl', 'wb'))\n\n# Checking the shape and size of the count vectorizer transformed matrix\n# 47,317 terms\n# 146996 documents\nprint(\"Sparse Shape:\", cv_vecs.shape) \nprint('CV:', sys.getsizeof(cv_vecs))\nprint('Tf-Idf:', sys.getsizeof(tf_vecs))",
"Sparse Shape: (47317, 146996)\nCV: 56\nTf-Idf: 56\nCPU times: user 1min 22s, sys: 1.79 s, total: 1min 24s\nWall time: 1min 24s\n"
],
[
"# IFF using a subset can you store these in a Pandas DataFrame/\n\n#tfidf_df = pd.DataFrame(tf_vecs.transpose().todense(), columns=[tfidf.get_feature_names()]).astype('float32')\n#cv_df = pd.DataFrame(cv_vecs.transpose().todense(), columns=[cv.get_feature_names()]).astype('uint8')\n\n#print(cv_df.info())\n#print(tfidf_df.info())",
"_____no_output_____"
],
[
"#cv_description = cv_df.describe().T\n#tfidf_description = tfidf_df.describe().T\n\n#tfidf_df.sum().sort_values(ascending=False)",
"_____no_output_____"
],
[
"# Explore the document-term vectors\n#cv_description.sort_values(by='max', ascending=False)\n#tfidf_description.sort_values(by='mean', ascending=False)",
"_____no_output_____"
]
],
[
[
"## Singular Value Decomposition (SVD)",
"_____no_output_____"
]
],
[
[
"#from sklearn.utils.extmath import randomized_svd\n\n# Randomized SVD for extracting the full decomposition\n#U, Sigma, VT = randomized_svd(tf_vecs, n_components=8, random_state=42)",
"_____no_output_____"
],
[
"from sklearn.decomposition import TruncatedSVD\nfrom sklearn.preprocessing import Normalizer\n\ndef Trunc_SVD(vectorized, n_components=300, iterations=1, normalize=False, random_state=42):\n \"\"\"\n Performs LSA/LSI on a sparse document term matrix, returns a fitted, transformed, (normalized) LSA object\n \"\"\"\n # Already own the vectorized data for LSA, just transpose it back to normal\n vecs_lsa = vectorized.T\n\n # Initialize SVD object as LSA\n lsa = TruncatedSVD(n_components=n_components, n_iter=iterations, algorithm='randomized', random_state=random_state)\n dtm_lsa = lsa.fit(vecs_lsa)\n print(\"Explained Variance - LSA {}:\".format(n_components), dtm_lsa.explained_variance_ratio_.sum())\n if normalize:\n dtm_lsa_t = lsa.fit_transform(vecs_lsa)\n dtm_lsa_t = Normalizer(copy=False).fit_transform(dtm_lsa_t)\n return dtm_lsa, dtm_lsa_t\n return dtm_lsa\n\n\ndef plot_SVD(lsa, title, level=None):\n \"\"\"\n Plots the singular values of an LSA object\n \"\"\"\n plt.figure(num=1, figsize=(15,10))\n plt.suptitle(title, fontsize=22, x=.55, y=.45, horizontalalignment='left')\n plt.subplot(221)\n plt.title('Explained Variance by each Singular Value')\n plt.plot(lsa.explained_variance_[:level])\n \n plt.subplot(222)\n plt.title('Explained Variance Ratio by each Singular Value')\n plt.plot(lsa.explained_variance_ratio_[:level])\n \n plt.subplot(223)\n plt.title(\"Singular Values ('Components')\")\n plt.plot(lsa.singular_values_[:level])\n plt.show()",
"_____no_output_____"
],
[
"%%time\ncomponents = 350\ncv_dtm_lsa = Trunc_SVD(cv_vecs, n_components=components, iterations=5, normalize=False)\nplot_SVD(cv_dtm_lsa, title='Count Vectorizer', level=25)\n\ntf_dtm_lsa = Trunc_SVD(tf_vecs, n_components=components, iterations=5, normalize=False)\nplot_SVD(tf_dtm_lsa, title='Term Frequency - \\nInverse Document Frequency', level=25)",
"Explained Variance - LSA 350: 0.4865536129677702\n"
],
[
"# Numerically confirming the elbow in the above plot\nprint('SVD Value| CV | TFIDF')\nprint('Top 2: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:2])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:2])),3))\nprint('Top 3: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:3])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:3])),3))\nprint('Top 4: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:4])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:4])),3))\nprint('Top 5: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:5])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:5])),3))\nprint('Top 6: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:6])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:6])),3))\nprint('Top 7: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:7])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:7])),3))\nprint('Top 8: ',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:8])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:8])),3))\nprint('Top 16:\\t',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:16])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:16])),3))\nprint('Top 32:\\t',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:32])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:32])),3))\nprint('Top 64:\\t',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:64])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:64])),3))\nprint('Top 128:',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:128])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:128])),3))\nprint('Top 256:',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:256])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:256])),3))\nprint('Top 350:',round(sum(list(cv_dtm_lsa.explained_variance_ratio_[:350])),3),round(sum(list(tf_dtm_lsa.explained_variance_ratio_[:350])),3))",
"SVD Value| CV | TFIDF\nTop 2: 0.065 0.008\nTop 3: 0.079 0.01\nTop 4: 0.091 0.012\nTop 5: 0.101 0.014\nTop 6: 0.107 0.015\nTop 7: 0.114 0.017\nTop 8: 0.12 0.019\nTop 16:\t 0.157 0.028\nTop 32:\t 0.205 0.041\nTop 64:\t 0.265 0.059\nTop 128: 0.342 0.088\nTop 256: 0.439 0.134\nTop 350: 0.487 0.161\n"
],
[
"# Close look at the elbow plots\ndef elbow(dtm_lsa):\n evr = dtm_lsa.explained_variance_ratio_[:20]\n print(\"Explained Variance Ratio (EVR):\\n\", evr)\n print(\"Difference in EVR (start 3):\\n\", np.diff(evr[2:]))\n plt.figure()\n plt.plot(-np.diff(evr[2:]))\n plt.xticks(range(-1,22), range(2,20))\n plt.suptitle('Difference in Explained Variance Ratio', fontsize=15);\n plt.title('Start from 3, moves up to 20');\n\n# Count Vectorizer\nelbow(cv_dtm_lsa)",
"Explained Variance Ratio (EVR):\n [0.04145 0.02402 0.01375 0.01160 0.00975 0.00689 0.00656 0.00591 0.00551\n 0.00509 0.00508 0.00479 0.00452 0.00442 0.00407 0.00397 0.00381 0.00355\n 0.00350 0.00337]\nDifference in EVR (start 3):\n [-0.00216 -0.00185 -0.00286 -0.00033 -0.00066 -0.00040 -0.00042 -0.00001\n -0.00030 -0.00027 -0.00009 -0.00035 -0.00010 -0.00016 -0.00026 -0.00005\n -0.00012]\n"
],
[
"# Tf-Idf\nelbow(tf_dtm_lsa)",
"Explained Variance Ratio (EVR):\n [0.00400 0.00357 0.00222 0.00220 0.00184 0.00166 0.00158 0.00145 0.00128\n 0.00123 0.00122 0.00113 0.00111 0.00106 0.00103 0.00098 0.00097 0.00095\n 0.00092 0.00088]\nDifference in EVR (start 3):\n [-0.00002 -0.00036 -0.00017 -0.00008 -0.00013 -0.00017 -0.00005 -0.00001\n -0.00009 -0.00003 -0.00004 -0.00004 -0.00005 -0.00001 -0.00002 -0.00003\n -0.00004]\n"
]
],
[
[
"###### The count vectorizer seems like it will be more fool proof, so I will use cv for my study. 8 might be a good cutoff value for the number of components kept in dimensionality reduction, I will try to confirm this later with KMeans clustering. The intuition behind this is that the slope after the 8th element is significantly different from the first elements. Keeping just 2 components would not be sufficient enough for clustering because we want to retain as much information as we can while still cutting down the dimensions to find some kind of human readable latent concept space.\n\n###### I am going to try out 2 quick methods before clustering and moving onto my main goal of topic modeling with LDA.",
"_____no_output_____"
],
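A minimal sketch of the KMeans confirmation mentioned above, assuming the transformed and normalized LSA document vectors are available (for example the second return value of the `Trunc_SVD(..., normalize=True)` helper defined earlier, called `doc_vecs_lsa` here for illustration); an elbow in the inertia curve around 8 clusters would support the chosen cutoff.

```python
from sklearn.cluster import KMeans

# doc_vecs_lsa: (n_documents, n_components) array from the truncated SVD step (hypothetical name)
inertias = []
ks = range(2, 16)
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=42)
    km.fit(doc_vecs_lsa)
    inertias.append(km.inertia_)

plt.figure(figsize=(8, 5))
plt.plot(list(ks), inertias, 'o-')
plt.xlabel('Number of clusters (k)')
plt.ylabel('Inertia')
plt.title('Elbow check on the LSA document vectors');
```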
[
"## Latent Semantic Analysis (LSA)",
"_____no_output_____"
]
],
[
[
"%%time\nfrom gensim import corpora, matutils, models\n\n# Convert sparse matrix of term-doc counts to a gensim corpus\ncv_corpus = matutils.Sparse2Corpus(cv_vecs)\npickle.dump(cv_corpus, open('data/cv_corpus.pkl', 'wb'))\n\n# Maps index to term\nid2word = dict((v, k) for k, v in cv.vocabulary_.items())\n\n# This is for Python 3, Need this for something at the end\nid2word = corpora.Dictionary.from_corpus(cv_corpus, id2word=id2word)\npickle.dump(lda, open('data/id2word.pkl', 'wb'))\n\n# Fitting an LSI model\nlsi = models.LsiModel(corpus=cv_corpus, id2word=id2word, num_topics=10)",
"2018-03-11 18:22:19,137 : INFO : 'pattern' package not found; tag filters are not available for English\n2018-03-11 18:22:19,167 : INFO : adding document #0 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:20,327 : INFO : adding document #10000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:21,548 : INFO : adding document #20000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:22,807 : INFO : adding document #30000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:24,019 : INFO : adding document #40000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:25,235 : INFO : adding document #50000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:26,512 : INFO : adding document #60000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:27,745 : INFO : adding document #70000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:29,009 : INFO : adding document #80000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:30,274 : INFO : adding document #90000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:31,516 : INFO : adding document #100000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:32,809 : INFO : adding document #110000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:34,065 : INFO : adding document #120000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:35,326 : INFO : adding document #130000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:36,574 : INFO : adding document #140000 to Dictionary(0 unique tokens: [])\n2018-03-11 18:22:37,703 : INFO : built Dictionary(47317 unique tokens: ['widen', 'breadth', 'circular', 'perkins', 'resiliency']...) from 146996 documents (total 23815900 corpus positions)\n2018-03-11 18:22:37,711 : INFO : using serial LSI version on this node\n2018-03-11 18:22:37,712 : INFO : updating model with new documents\n2018-03-11 18:22:38,321 : INFO : preparing a new chunk of documents\n2018-03-11 18:22:39,089 : INFO : using 100 extra samples and 2 power iterations\n2018-03-11 18:22:39,090 : INFO : 1st phase: constructing (47317, 110) action matrix\n2018-03-11 18:22:39,755 : INFO : orthonormalizing (47317, 110) action matrix\n2018-03-11 18:22:43,037 : INFO : 2nd phase: running dense svd on (110, 20000) matrix\n2018-03-11 18:22:43,512 : INFO : computing the final decomposition\n2018-03-11 18:22:43,516 : INFO : keeping 10 factors (discarding 56.637% of energy spectrum)\n"
],
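   [
    "# Sketch (added for illustration): peek at the top terms loading on each LSI component.\n# Assumes the `lsi` model fit in the previous cell; gensim's show_topics returns formatted strings.\nfor component_id, component in lsi.show_topics(num_topics=5, num_words=8, formatted=True):\n    print(component_id, component)",
    "_____no_output_____"
   ],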
[
"%%time\n# Retrieve vectors for the original cv corpus in the LS space (\"transform\" in sklearn)\nlsi_corpus = lsi[cv_corpus]\n\n# Dump the resulting document vectors into a list\ndoc_vecs = [doc for doc in lsi_corpus]",
"CPU times: user 16.1 s, sys: 1.35 s, total: 17.4 s\nWall time: 17.4 s\n"
],
[
"doc_vecs[0][:5]",
"_____no_output_____"
],
[
"# Sum of a documents' topics (?)\nfor i in range(5):\n print(sum(doc_vecs[i][1]))",
"-1.4106649569930907\n-0.031444038368904303\n-1.2723604749491777\n0.15042521472269477\n0.690384834127101\n"
]
],
[
[
"## <a id='sim'></a> Similarity Scoring",
"_____no_output_____"
]
],
[
[
"from gensim import similarities\n\n# Create an index transformer that calculates similarity based on our space\nindex = similarities.MatrixSimilarity(doc_vecs, num_features=300)\n\n# Return the sorted list of cosine similarities to the docu document\ndocu = 5 # Change docu as needed\nsims = sorted(enumerate(index[doc_vecs[docu]]), key=lambda item: -item[1])\nnp.r_[sims[:10] , sims[-10:]]",
"2018-03-11 06:05:35,806 : INFO : creating matrix with 146996 documents and 300 features\n"
],
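   [
    "# Sketch (added for illustration): score an unseen piece of text against the index.\n# Assumes `cv` is the fitted CountVectorizer and `lsi`/`index` come from the cells above;\n# documents_columns=False is an assumption about how cv_vecs was oriented earlier.\nquery = 'how do I recharge the air conditioning on my car'\nquery_bow = matutils.Sparse2Corpus(cv.transform([query]), documents_columns=False)\nquery_lsi = [vec for vec in lsi[query_bow]][0]\nprint(sorted(enumerate(index[query_lsi]), key=lambda item: -item[1])[:5])",
    "_____no_output_____"
   ],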
[
"# Viewing similarity of top documents\ntop = 1\nfor sim_doc_id, sim_score in sims[:top + 1]: \n print(\"\\nScore:\", sim_score)\n print(\"Document Text:\\n\", df.text[sim_doc_id])",
"\nScore: 0.99999994\nDocument Text:\n I just recharged the A/C on my 2008 328xi Just wanted to share my success story.\n\nThis was an extremely easy project, after about 10 minutes of research.\n\nThe most difficult part was knowing which knob was the low-pressure service knob. After I convinced myself I had it right, the entire thing was smooth sailing.\n\nI was a little disappointed about the lack of specific directions on the interwebs, but there was enough for me to complete this successfully.\n\nI had NO cold air at all. The entire thing just blew hot air with all the recommended settings. Since it's now 103 degrees here in Boulder, CO, I figured it was time.\n\nI bought the hose/gauge and can of refrigerant at an o'reilly's auto parts store. \n\nHere are the steps I took to check the pressure of the system:\n\nDISCLAIMER: I know very little about engine systems. However, I do have a little bit of gumption and a can-do attitude.\n\n1. Let car run for 3 minutes with A/C at max, with recycled air setting\n2. unscrewed the dust cap\n3. attached the hose/gauge\n4. read the gauge\n\nIt's really that easy. The hose/gauge has a clicking mechanism that actually connects securely to the service port, so it was kind of a no-brainer to be confident it was attached. The gauge read less than 10. The gauge was divided up into three ranges: low, normal and warning. 10 was in the low range, and probably refers to PSI or pounds per square inch. Correct me if I'm wrong.\n\nHere's the steps I took to recharging the system. (with gloves and glasses on!)\n\n1. Engine running for 5 minutes with A/C going full blast\n2. Shook can of refrigerant (freon, presumably) while 5 minutes passes\n3. Attached hose/gauge to refrigerant\n4. Pierced refrigerant with the provided attachment\n5. Opened the airflow between can of refrigerant and my car's A/C system\n6. Gauge spiked to \"warning\" range, but then settled back down to the normal range\n7. placed can in warm water, to facilitate emptying of gas into system\n8. Waited for 5-8 minutes\n\nThat's about it. The can of refrigerant got cold while it was emptying into the car, but I knew it was empty when the can was no longer cold. I then pretty much reversed my steps to remove the hose/gauge and replace the dust cap securely. I test drove the car, and the cool air was heavenly while I glanced down at the dashboard thermometer and read the highest number I've ever seen on it, 103. \n\nFuckin' A.\n\nLet me know if you have any questions, or need pictures. I'll do what I can to help.\n\nCheers.\n\nScore: 0.9876238\nDocument Text:\n Water softener ran out of salt, and this was my experience After hosting several visitors to my house, plus forgetting to fill the salt bucket, I experience hard water lather and the delightfult return to soft water lather.\n\nI've been using Mikes Natural Soap for about 3 weeks with a SOC boar. I've used this same brush every day since mid January and it is well broken in. Before using MNS I used a puck of Strop Shoppe and an entire puck of Haslinger. For MNS I load generously at about 1.5 gm/load. I have a base weight, so after 2 weeks I know how much I am loading. I face later for 1-2 minutes and then finish the brush in a scuttle to thicken the lather a bit more and to have enough for a second pass and touch up. That last step in the scuttle to create very thick and warm lather is a wonderful step.\n\nBaseline, with soft water, the lather is as described by other users, rich, thick, holds water well, and very slick. 
I find it very similar to MWS, slightly better than Strop Shoppe and not quite as good as Haslinger. \n\nWhen the water softener needed recharging, and the water was hard, the brush was much harder to load, the lather took much longer to develop--2 to 3 minutes, and broke down very quickly--before finishing the first pass. It was still a fine shave but very thin and slick lather. The scuttle made the lather much worse. Warm and pasty instead of warm and thick. So as the micelles form to trap water and air to make a lather emusion, they break down from the fatty acids binding to Mg++ and Ca++ cations. The heat from the scuttle accelerates the reaction, breaking the emulsion down even faster. Further, tallow soaps have longer chain fatty acids (C16, C18) than soaps with more coconut oils (C12), which to me seems to explain part of the problem with diffiuclty lather tallow based soaps for some users.\n\nThus, I knew I had about 3 days before recharging the hot water heater to try a few things.\n\nFirst, citric acid. I already had some. I sprinkled it like salt. It softened the water (that is, no scum in the sink when I drained), but only modestly better lather. In retrospect, I should have measured our water hardness in ppm (same as mg/L) and done the chemistry to add the correct molar concentration of citric acid, and converted back to weight. However, that takes me back to college chemistry and not an exercise I wanted to do. \n\nSecond distilled water. We had a bit already in the house for the steam iron. Easy to load but very airy and fluffy. Not the experience I wanted.\n\nThird, return to soft water. So after 3 days the hot water heater has recharged with soft water and the lather is back to where I started--thick like yougurt, slick, protective, and nicely warmed from the scuttle.\n\nSo, for W-E's who remember more chemistry that I do, what is happening?\n\nWhen water is softened via an ion exhange process, Ca++ and Mg++ are exchanged for Na+, so the water has sodium carbonate (Na2C03--washing soda) instead of magnesium carbonate or calcium carbonate (CaC03 or lime) \n\nSo, does the sodium carbonate that has replaced calcium carbonate and magnesium carbonate at equal molar concentration produce a better and more stable lather? \n\nIs calcium carbonate a better solution than citric acid? If so how much? Arm and Hammer recommends 2 tablespoons per gallon (0.75% solutiion). Or is citirc acid still okay but I simply should have measured my hard water with a strip from the hardware store and calculated the molar concentration to soften the water rather than guessed? \n\nAny insight would be appreciated. I know hard water is a common \"lather killer\" and this certainly confirms that observation.\n\nI think W-E's would like to know a number of solutions, so they can continue to enjoy a number of products.\n\n\n"
]
],
[
[
"###### The metrics look artifically high, and do not match well for each document. The similarity method could be used to optimize keyword search if we were trying to expand the reach of a certain demographic using these rankings. The next step would be to improve on this method with word2vec or a better LSI model.",
"_____no_output_____"
],
[
"# <a id='km'></a>KMeans Clustering",
"_____no_output_____"
]
],
[
[
"lsi_red = matutils.corpus2dense(lsi_corpus, num_terms=300).transpose()\nprint('Reduced LS space shape:', lsi_red.shape)\nprint('Reduced LS space size in bytes:', sys.getsizeof(lsi_red))\n\n# Taking a subset for Kmeans due to memory dropout\nlsi_red_sub = lsi_red.copy()\nnp.random.shuffle(lsi_red_sub)\nlsi_red_sub = lsi_red_sub[:30000]\nlsi_red_sub = Normalizer(copy=False).fit_transform(lsi_red_sub) # Normalized for the Euclidean metric\nprint('Reduced LS space subset shape:', lsi_red_sub.shape)\nprint('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red_sub))",
"Reduced LS space shape: (146996, 300)\nReduced LS space size in bytes: 112\nReduced LS space subset shape: (30000, 300)\nReduced LS space subset size in bytes: 112\n"
],
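   [
    "# Note (added): sys.getsizeof only counts the ndarray object header for views, which is why\n# both prints above report 112 bytes. nbytes reports the size of the underlying data instead.\nprint('Full matrix data size in bytes:', lsi_red.nbytes)\nprint('Subset data size in bytes:', lsi_red_sub.nbytes)",
    "_____no_output_____"
   ],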
[
"from sklearn.cluster import KMeans\nfrom sklearn.metrics import silhouette_score\n\n# Calculating Silhouette coefficients and Sum of Squared Errors\ndef silhouette_co(start, stop, lsi_red_sub, random_state=42, n_jobs=-2, verbose=4):\n \"\"\" \n Input a normalized subset of a reduced dense latent semantic matrix\n Returns list of scores for plotting\n \"\"\"\n SSEs = []\n Sil_coefs = []\n try_clusters = range(start, stop)\n for k in try_clusters:\n km = KMeans(n_clusters=k, random_state=random_state, n_jobs=n_jobs)\n km.fit(lsi_red_sub)\n labels = km.labels_\n Sil_coefs.append(silhouette_score(lsi_red_sub, labels, metric='euclidean'))\n SSEs.append(km.inertia_)\n if k%verbose==0:\n print(k)\n return SSEs, Sil_coefs, try_clusters",
"_____no_output_____"
],
[
"%%time\nSSEs, Sil_coefs, try_clusters = silhouette_co(start=2, stop=40, lsi_red_sub=lsi_red_sub)",
"_____no_output_____"
],
[
"def plot_sil(try_clusters, Sil_coefs, SSEs):\n \"\"\" Function for visualizing/ finding the best clustering point \"\"\"\n # Plot Silhouette scores\n fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15,5), sharex=True, dpi=200)\n ax1.plot(try_clusters, Sil_coefs)\n ax1.title('Silhouette of Clusters')\n ax1.set_xlabel('Number of Clusters')\n ax1.set_ylabel('Silhouette Coefficient')\n\n # Plot errors\n ax2.plot(try_clusters, SSEs)\n ax2.title(\"Cluster's Error\")\n ax2.set_xlabel('Number of Clusters')\n ax2.set_ylabel('SSE');\n \nplot_sil(try_clusters=try_clusters, Sil_coefs=Sil_coefs, SSEs=SSes)",
"_____no_output_____"
]
],
[
[
"###### This suggests that there arent meaningful clusters in the normalized LS300 space. To test if 300 dimensions is too large, I will try clustering again with a reduced input.",
"_____no_output_____"
]
],
[
[
"# Fix IndexError: index 10\nlsi_red5 = matutils.corpus2dense(lsi_corpus, num_terms=10).transpose()\nprint('Reduced LSI space shape:', lsi_red5.shape)\nprint('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red5))\n\n# Taking a subset for Kmeans due to memory dropout\nlsi_red_sub5 = lsi_red5.copy()\nnp.random.shuffle(lsi_red_sub5)\nlsi_red_sub5 = lsi_red_sub5[:5000]\nlsi_red_sub5 = Normalizer(copy=False).fit_transform(lsi_red_sub5) # Normalized for the Euclidean metric\nprint('Reduced LSI space subset shape:', lsi_red_sub5.shape)\nprint('Reduced LS space subset size in bytes:', sys.getsizeof(lsi_red_sub5))",
"_____no_output_____"
],
[
"%%time\nSSEs, Sil_coefs, try_clusters = silhouette_co(start=2, stop=40, lsi_red_sub=lsi_red_sub)\nplot_sil(try_clusters=try_clusters, Sil_coefs=Sil_coefs, SSEs=SSes)",
"_____no_output_____"
]
],
[
[
"###### Due to project deadlines, I was not able to complete this method but I wanted to preserve the effort and document the process for later use. I will move on to LDA.",
"_____no_output_____"
]
],
[
[
"# Cluster with the best results\n#kmeans = KMeans(n_clusters=20, n_jobs=-2)\n#lsi_clusters = kmeans.fit_predict(lsi_red)\n\n# Take a look at the \nprint(lsi_clusters[0:15])\ndf.text[0:2]",
"[0 0 0 0 0 0 0 0 0 0 7 0 0 0 0]\n"
],
[
"from sklearn.metrics import silhouette_samples, silhouette_score\n\n# Validating cluster performance\n# Select range around best result, plot the silhouette distributions for each cluster\nfor k in range(14,17):\n plt.figure(dpi=120, figsize=(8,6))\n ax1 = plt.gca()\n km = KMeans(n_clusters=k, random_state=1)\n km.fit(X)\n labels = km.labels_\n silhouette_avg = silhouette_score(X, labels)\n print(\"For n_clusters =\", k,\n \"The average silhouette_score is :\", silhouette_avg)\n\n # Compute the silhouette scores for each sample\n sample_silhouette_values = silhouette_samples(X, labels)\n y_lower = 100\n for i in range(k):\n # Aggregate the silhouette scores for samples belonging to cluster i\n ith_cluster_silhouette_values = sample_silhouette_values[labels == i]\n \n #Sort\n ith_cluster_silhouette_values.sort()\n \n size_cluster_i = ith_cluster_silhouette_values.shape[0]\n y_upper = y_lower + size_cluster_i\n\n color = plt.cm.spectral(float(i) / k)\n ax1.fill_betweenx(np.arange(y_lower, y_upper),\n 0, ith_cluster_silhouette_values,\n facecolor=color, edgecolor=color, alpha=0.7)\n\n # Label the silhouette plots with their cluster numbers at the middle\n ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))\n\n # Compute the new y_lower for next plot\n y_lower = y_upper + 10 # 10 for the 0 samples\n \n ax1.set_title(\"The silhouette plot for the various clusters.\")\n ax1.set_xlabel(\"The silhouette coefficient values\")\n ax1.set_ylabel(\"Cluster label\")\n\n # The vertical line for average silhouette score of all the values\n ax1.axvline(x=silhouette_avg, color=\"red\", linestyle=\"--\")\n\n ax1.set_yticks([]) # Clear the yaxis labels / ticks\n ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])",
"_____no_output_____"
]
],
[
[
"## <a id='lda'></a> Latent Dirichlet Allocation (LDA)",
"_____no_output_____"
]
],
[
[
"%%time\nrun = False\n\npasses = 85\nif run==True:\n lda = models.LdaMulticore(corpus=cv_corpus, num_topics=15, id2word=id2word, passes=passes, \n workers=13, random_state=42, eval_every=None, chunksize=6000)",
"2018-03-08 22:48:40,670 : INFO : using symmetric alpha at 0.06666666666666667\n2018-03-08 22:48:40,673 : INFO : using symmetric eta at 0.06666666666666667\n2018-03-08 22:48:40,689 : INFO : using serial LDA version on this node\n2018-03-08 22:48:40,811 : INFO : running online LDA training, 15 topics, 100 passes over the supplied corpus of 146996 documents, updating every 78000 documents, evaluating every ~0 documents, iterating 50x with a convergence threshold of 0.001000\n2018-03-08 22:48:40,815 : INFO : training LDA model using 13 processes\n2018-03-08 22:48:43,307 : INFO : PROGRESS: pass 0, dispatched chunk #0 = documents up to #6000/146996, outstanding queue size 1\n2018-03-08 22:48:52,172 : INFO : PROGRESS: pass 0, dispatched chunk #1 = documents up to #12000/146996, outstanding queue size 2\n2018-03-08 22:48:58,812 : INFO : PROGRESS: pass 0, dispatched chunk #2 = documents up to #18000/146996, outstanding queue size 3\n2018-03-08 22:49:05,247 : INFO : PROGRESS: pass 0, dispatched chunk #3 = documents up to #24000/146996, outstanding queue size 3\n2018-03-08 22:49:12,401 : INFO : PROGRESS: pass 0, dispatched chunk #4 = documents up to #30000/146996, outstanding queue size 3\n2018-03-08 22:49:19,633 : INFO : PROGRESS: pass 0, dispatched chunk #5 = documents up to #36000/146996, outstanding queue size 3\n2018-03-08 22:49:26,781 : INFO : PROGRESS: pass 0, dispatched chunk #6 = documents up to #42000/146996, outstanding queue size 3\n2018-03-08 22:49:33,834 : INFO : PROGRESS: pass 0, dispatched chunk #7 = documents up to #48000/146996, outstanding queue size 3\n2018-03-08 22:49:41,172 : INFO : PROGRESS: pass 0, dispatched chunk #8 = documents up to #54000/146996, outstanding queue size 3\n2018-03-08 22:49:48,312 : INFO : PROGRESS: pass 0, dispatched chunk #9 = documents up to #60000/146996, outstanding queue size 3\n2018-03-08 22:49:55,930 : INFO : PROGRESS: pass 0, dispatched chunk #10 = documents up to #66000/146996, outstanding queue size 3\n2018-03-08 22:50:02,804 : INFO : PROGRESS: pass 0, dispatched chunk #11 = documents up to #72000/146996, outstanding queue size 3\n2018-03-08 22:50:10,214 : INFO : PROGRESS: pass 0, dispatched chunk #12 = documents up to #78000/146996, outstanding queue size 3\n2018-03-08 22:50:19,620 : INFO : PROGRESS: pass 0, dispatched chunk #13 = documents up to #84000/146996, outstanding queue size 3\n2018-03-08 22:50:26,716 : INFO : PROGRESS: pass 0, dispatched chunk #14 = documents up to #90000/146996, outstanding queue size 3\n2018-03-08 22:50:32,725 : INFO : merging changes from 78000 documents into a model of 146996 documents\n2018-03-08 22:50:32,835 : INFO : topic #7 (0.067): 0.005*\"well\" + 0.004*\"game\" + 0.004*\"want\" + 0.004*\"think\" + 0.003*\"see\" + 0.003*\"good\" + 0.003*\"first\" + 0.003*\"much\" + 0.003*\"team\" + 0.003*\"something\"\n2018-03-08 22:50:32,840 : INFO : topic #10 (0.067): 0.005*\"think\" + 0.004*\"make\" + 0.004*\"well\" + 0.004*\"much\" + 0.003*\"want\" + 0.003*\"back\" + 0.003*\"work\" + 0.003*\"since\" + 0.003*\"right\" + 0.003*\"game\"\n2018-03-08 22:50:32,845 : INFO : topic #2 (0.067): 0.005*\"game\" + 0.004*\"good\" + 0.004*\"back\" + 0.004*\"new\" + 0.003*\"think\" + 0.003*\"make\" + 0.003*\"need\" + 0.003*\"two\" + 0.003*\"much\" + 0.003*\"feel\"\n2018-03-08 22:50:32,850 : INFO : topic #0 (0.067): 0.004*\"game\" + 0.004*\"first\" + 0.004*\"want\" + 0.003*\"think\" + 0.003*\"well\" + 0.003*\"still\" + 0.003*\"make\" + 0.003*\"way\" + 0.003*\"last\" + 0.003*\"see\"\n2018-03-08 22:50:32,854 : INFO : topic #12 (0.067): 
0.003*\"back\" + 0.003*\"think\" + 0.003*\"game\" + 0.003*\"make\" + 0.003*\"want\" + 0.003*\"see\" + 0.003*\"still\" + 0.003*\"going\" + 0.003*\"first\" + 0.003*\"two\"\n"
],
[
"# Save model after your last run, or continue to update LDA\n#pickle.dump(lda, open('data/lda_gensim.pkl', 'wb'))\n\n# Gensim save\n#lda.save('data/gensim_lda.model')\nlda = models.LdaModel.load('data/gensim_lda.model')",
"2018-03-11 18:37:34,480 : INFO : loading LdaModel object from data/gensim_lda.model\n2018-03-11 18:37:34,484 : INFO : loading expElogbeta from data/gensim_lda.model.expElogbeta.npy with mmap=None\n2018-03-11 18:37:34,489 : INFO : setting ignored attribute state to None\n2018-03-11 18:37:34,490 : INFO : setting ignored attribute id2word to None\n2018-03-11 18:37:34,491 : INFO : setting ignored attribute dispatcher to None\n2018-03-11 18:37:34,492 : INFO : loaded data/gensim_lda.model\n2018-03-11 18:37:34,493 : INFO : loading LdaState object from data/gensim_lda.model.state\n2018-03-11 18:37:34,517 : INFO : loaded data/gensim_lda.model.state\n"
],
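   [
    "# Sketch (added for illustration): topic coherence is one way to compare different topic counts.\n# u_mass works directly from the bag-of-words corpus; assumes `lda`, `cv_corpus` and `id2word` above.\nfrom gensim.models import CoherenceModel\n\ncoherence_model = CoherenceModel(model=lda, corpus=cv_corpus, dictionary=id2word, coherence='u_mass')\nprint('u_mass coherence:', coherence_model.get_coherence())",
    "_____no_output_____"
   ],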
[
"%%time\n# Transform the docs from the word space to the topic space (like \"transform\" in sklearn)\nlda_corpus = lda[cv_corpus]\n\n# Store the documents' topic vectors in a list so we can take a peak\nlda_docs = [doc for doc in lda_corpus]",
"CPU times: user 7min 36s, sys: 12min 58s, total: 20min 34s\nWall time: 2min 20s\n"
],
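   [
    "# Sketch (added for illustration): a dense documents-x-topics matrix is handy for the user\n# profiles mentioned at the end; num_terms=15 matches the number of LDA topics trained above.\ndoc_topic_matrix = matutils.corpus2dense(lda_corpus, num_terms=15).transpose()\nprint(doc_topic_matrix.shape)",
    "_____no_output_____"
   ],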
[
"# Review Dirichlet distribution for documents\nlda_docs[25000]",
"_____no_output_____"
],
[
"# Manually review the document to see if it makes sense! \n# Look back at the topics that it matches with to confirm the result!\ndf.iloc[25000]",
"_____no_output_____"
],
[
"#bow = df.iloc[1,0].split()\n\n# Print topic probability distribution for a document\n#print(lda[bow]) #Values unpack error\n\n# Given a chunk of sparse document vectors, estimate gamma:\n# (parameters controlling the topic weights) for each document in the chunk.\n#lda.inference(bow) #Not enough values\n\n# Makeup of each topic! Interpretable! \n# The d3 visualization below is far better for looking at the interpretations.\nlda.print_topics(num_words=10, num_topics=1)",
"2018-03-11 07:47:06,762 : INFO : topic #1 (0.067): 0.005*\"water\" + 0.005*\"food\" + 0.004*\"use\" + 0.003*\"make\" + 0.003*\"much\" + 0.003*\"used\" + 0.003*\"well\" + 0.003*\"skin\" + 0.003*\"first\" + 0.003*\"good\"\n"
]
],
[
[
"## <a id='py'></a> pyLDAvis",
"_____no_output_____"
]
],
[
[
"# For quickstart, we can just jump straight to results\nimport pickle\nfrom gensim import models\ndef loadingPickles():\n id2word = pickle.load(open('data/id2word.pkl','rb'))\n cv_vecs = pickle.load(open('data/cv_vecs.pkl','rb'))\n cv_corpus = pickle.load(open('data/cv_corpus.pkl','rb'))\n lda = models.LdaModel.load('data/gensim_lda.model')\n return id2word, cv_vecs, cv_corpus, lda",
"_____no_output_____"
],
[
"import pyLDAvis.gensim\nimport gensim\n\n# Enables visualization in jupyter notebook\npyLDAvis.enable_notebook()\n\n# Prepare the visualization\n# Change multidimensional scaling function via mds parameter\n# Options are tsne, mmds, pcoa \n# cv_corpus or cv_vecs work equally\nid2word, _, cv_corpus, lda = loadingPickles()\nviz = pyLDAvis.gensim.prepare(topic_model=lda, corpus=cv_corpus, dictionary=id2word, mds='mmds')\n\n# Save the html for sharing!\npyLDAvis.save_html(viz,'data/viz.html')\n\n# Interact! Saliency is the most important metric that changes the story of each topic.\npyLDAvis.display(viz)",
"/usr/local/lib/python3.5/dist-packages/pyLDAvis/_prepare.py:387: DeprecationWarning: \n.ix is deprecated. Please use\n.loc for label based indexing or\n.iloc for positional indexing\n\nSee the documentation here:\nhttp://pandas.pydata.org/pandas-docs/stable/indexing.html#ix-indexer-is-deprecated\n topic_term_dists = topic_term_dists.ix[topic_order]\n"
]
],
[
[
"# There you have it. There is a ton of great information right here that I will conclude upon in the README and the slides on my github. \n\n###### In it I will discuss what I could do with this information. I did not end up using groupUserPosts but I could create user profiles based on the aggregate of their document topic distributions. I believe this is a great start to understanding NLP and how it can be used. I would consider working on this again but with more technologies needed for big data.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0c6008fe0e20fe81c885bfc57dbab5e6a598480 | 23,887 | ipynb | Jupyter Notebook | examples/notebooks/statespace_forecasting.ipynb | diego-mazon/statsmodels | af8b5b5dc78acb600ffd08cda6bd9b1ca5200e10 | [
"BSD-3-Clause"
] | null | null | null | examples/notebooks/statespace_forecasting.ipynb | diego-mazon/statsmodels | af8b5b5dc78acb600ffd08cda6bd9b1ca5200e10 | [
"BSD-3-Clause"
] | 1 | 2019-07-29T08:35:08.000Z | 2019-07-29T08:35:08.000Z | examples/notebooks/statespace_forecasting.ipynb | ozeno/statsmodels | 9271ced806b807a4dd325238df38b60f1aa363e2 | [
"BSD-3-Clause"
] | null | null | null | 32.32341 | 531 | 0.611504 | [
[
[
"# Forecasting in Statsmodels\n\nThis notebook describes forecasting using time series models in Statsmodels.\n\n**Note**: this notebook applies only to the state space model classes, which are:\n\n- `sm.tsa.SARIMAX`\n- `sm.tsa.UnobservedComponents`\n- `sm.tsa.VARMAX`\n- `sm.tsa.DynamicFactor`",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nmacrodata = sm.datasets.macrodata.load_pandas().data\nmacrodata.index = pd.period_range('1959Q1', '2009Q3', freq='Q')",
"_____no_output_____"
]
],
[
[
"## Basic example\n\nA simple example is to use an AR(1) model to forecast inflation. Before forecasting, let's take a look at the series:",
"_____no_output_____"
]
],
[
[
"endog = macrodata['infl']\nendog.plot(figsize=(15, 5))",
"_____no_output_____"
]
],
[
[
"### Constructing and estimating the model",
"_____no_output_____"
],
[
"The next step is to formulate the econometric model that we want to use for forecasting. In this case, we will use an AR(1) model via the `SARIMAX` class in Statsmodels.\n\nAfter constructing the model, we need to estimate its parameters. This is done using the `fit` method. The `summary` method produces several convinient tables showing the results.",
"_____no_output_____"
]
],
[
[
"# Construct the model\nmod = sm.tsa.SARIMAX(endog, order=(1, 0, 0), trend='c')\n# Estimate the parameters\nres = mod.fit()\n\nprint(res.summary())",
"_____no_output_____"
]
],
[
[
"### Forecasting",
"_____no_output_____"
],
[
"Out-of-sample forecasts are produced using the `forecast` or `get_forecast` methods from the results object.\n\nThe `forecast` method gives only point forecasts.",
"_____no_output_____"
]
],
[
[
"# The default is to get a one-step-ahead forecast:\nprint(res.forecast())",
"_____no_output_____"
]
],
[
[
"The `get_forecast` method is more general, and also allows constructing confidence intervals.",
"_____no_output_____"
]
],
[
[
"# Here we construct a more complete results object.\nfcast_res1 = res.get_forecast()\n\n# Most results are collected in the `summary_frame` attribute.\n# Here we specify that we want a confidence level of 90%\nprint(fcast_res1.summary_frame(alpha=0.10))",
"_____no_output_____"
]
],
[
[
"The default confidence level is 95%, but this can be controlled by setting the `alpha` parameter, where the confidence level is defined as $(1 - \\alpha) \\times 100\\%$. In the example above, we specified a confidence level of 90%, using `alpha=0.10`.",
"_____no_output_____"
],
[
"### Specifying the number of forecasts\n\nBoth of the functions `forecast` and `get_forecast` accept a single argument indicating how many forecasting steps are desired. One option for this argument is always to provide an integer describing the number of steps ahead you want.",
"_____no_output_____"
]
],
[
[
"print(res.forecast(steps=2))",
"_____no_output_____"
],
[
"fcast_res2 = res.get_forecast(steps=2)\n# Note: since we did not specify the alpha parameter, the\n# confidence level is at the default, 95%\nprint(fcast_res2.summary_frame())",
"_____no_output_____"
]
],
[
[
"However, **if your data included a Pandas index with a defined frequency** (see the section at the end on Indexes for more information), then you can alternatively specify the date through which you want forecasts to be produced:",
"_____no_output_____"
]
],
[
[
"print(res.forecast('2010Q2'))",
"_____no_output_____"
],
[
"fcast_res3 = res.get_forecast('2010Q2')\nprint(fcast_res3.summary_frame())",
"_____no_output_____"
]
],
[
[
"### Plotting the data, forecasts, and confidence intervals\n\nOften it is useful to plot the data, the forecasts, and the confidence intervals. There are many ways to do this, but here's one example",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots(figsize=(15, 5))\n\n# Plot the data (here we are subsetting it to get a better look at the forecasts)\nendog.loc['1999':].plot(ax=ax)\n\n# Construct the forecasts\nfcast = res.get_forecast('2011Q4').summary_frame()\nfcast['mean'].plot(ax=ax, style='k--')\nax.fill_between(fcast.index, fcast['mean_ci_lower'], fcast['mean_ci_upper'], color='k', alpha=0.1);",
"_____no_output_____"
]
],
[
[
"### Note on what to expect from forecasts\n\nThe forecast above may not look very impressive, as it is almost a straight line. This is because this is a very simple, univariate forecasting model. Nonetheless, keep in mind that these simple forecasting models can be extremely competitive.",
"_____no_output_____"
],
[
"## Prediction vs Forecasting\n\nThe results objects also contain two methods that all for both in-sample fitted values and out-of-sample forecasting. They are `predict` and `get_prediction`. The `predict` method only returns point predictions (similar to `forecast`), while the `get_prediction` method also returns additional results (similar to `get_forecast`).\n\nIn general, if your interest is out-of-sample forecasting, it is easier to stick to the `forecast` and `get_forecast` methods.",
"_____no_output_____"
],
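   [
    "# Short illustration (added): in-sample prediction with get_prediction; the start date is arbitrary.\npred_res = res.get_prediction(start='2005Q1')\nprint(pred_res.summary_frame().head())",
    "_____no_output_____"
   ],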
[
"## Cross validation\n\n**Note**: some of the functions used in this section were first introduced in Statsmodels v0.11.0.\n\nA common use case is to cross-validate forecasting methods by performing h-step-ahead forecasts recursively using the following process:\n\n1. Fit model parameters on a training sample\n2. Produce h-step-ahead forecasts from the end of that sample\n3. Compare forecasts against test dataset to compute error rate\n4. Expand the sample to include the next observation, and repeat\n\nEconomists sometimes call this a pseudo-out-of-sample forecast evaluation exercise, or time-series cross-validation.",
"_____no_output_____"
],
[
"### Basic example",
"_____no_output_____"
],
[
"We will conduct a very simple exercise of this sort using the inflation dataset above. The full dataset contains 203 observations, and for expositional purposes we'll use the first 80% as our training sample and only consider one-step-ahead forecasts.",
"_____no_output_____"
],
[
"A single iteration of the above procedure looks like the following:",
"_____no_output_____"
]
],
[
[
"# Step 1: fit model parameters w/ training sample\ntraining_obs = int(len(endog) * 0.8)\n\ntraining_endog = endog[:training_obs]\ntraining_mod = sm.tsa.SARIMAX(\n training_endog, order=(1, 0, 0), trend='c')\ntraining_res = training_mod.fit()\n\n# Print the estimated parameters\nprint(training_res.params)",
"_____no_output_____"
],
[
"# Step 2: produce one-step-ahead forecasts\nfcast = training_res.forecast()\n\n# Step 3: compute root mean square forecasting error\ntrue = endog.reindex(fcast.index)\nerror = true - fcast\n\n# Print out the results\nprint(pd.concat([true.rename('true'),\n fcast.rename('forecast'),\n error.rename('error')], axis=1))",
"_____no_output_____"
]
],
[
[
"To add on another observation, we can use the `append` or `extend` results methods. Either method can produce the same forecasts, but they differ in the other results that are available:\n\n- `append` is the more complete method. It always stores results for all training observations, and it optionally allows refitting the model parameters given the new observations (note that the default is *not* to refit the parameters).\n- `extend` is a faster method that may be useful if the training sample is very large. It *only* stores results for the new observations, and it does not allow refitting the model parameters (i.e. you have to use the parameters estimated on the previous sample).\n\nIf your training sample is relatively small (less than a few thousand observations, for example) or if you want to compute the best possible forecasts, then you should use the `append` method. However, if that method is infeasible (for example, becuase you have a very large training sample) or if you are okay with slightly suboptimal forecasts (because the parameter estimates will be slightly stale), then you can consider the `extend` method.",
"_____no_output_____"
],
[
"A second iteration, using the `append` method and refitting the parameters, would go as follows (note again that the default for `append` does not refit the parameters, but we have overridden that with the `refit=True` argument):",
"_____no_output_____"
]
],
[
[
"# Step 1: append a new observation to the sample and refit the parameters\nappend_res = training_res.append(endog[training_obs:training_obs + 1], refit=True)\n\n# Print the re-estimated parameters\nprint(append_res.params)",
"_____no_output_____"
]
],
[
[
"Notice that these estimated parameters are slightly different than those we originally estimated. With the new results object, `append_res`, we can compute forecasts starting from one observation further than the previous call:",
"_____no_output_____"
]
],
[
[
"# Step 2: produce one-step-ahead forecasts\nfcast = append_res.forecast()\n\n# Step 3: compute root mean square forecasting error\ntrue = endog.reindex(fcast.index)\nerror = true - fcast\n\n# Print out the results\nprint(pd.concat([true.rename('true'),\n fcast.rename('forecast'),\n error.rename('error')], axis=1))",
"_____no_output_____"
]
],
[
[
"Putting it altogether, we can perform the recursive forecast evaluation exercise as follows:",
"_____no_output_____"
]
],
[
[
"# Setup forecasts\nnforecasts = 3\nforecasts = {}\n\n# Get the number of initial training observations\nnobs = len(endog)\nn_init_training = int(nobs * 0.8)\n\n# Create model for initial training sample, fit parameters\ninit_training_endog = endog.iloc[:n_init_training]\nmod = sm.tsa.SARIMAX(training_endog, order=(1, 0, 0), trend='c')\nres = mod.fit()\n\n# Save initial forecast\nforecasts[training_endog.index[-1]] = res.forecast(steps=nforecasts)\n\n# Step through the rest of the sample\nfor t in range(n_init_training, nobs):\n # Update the results by appending the next observation\n updated_endog = endog.iloc[t:t+1]\n res = res.append(updated_endog, refit=False)\n \n # Save the new set of forecasts\n forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)\n\n# Combine all forecasts into a dataframe\nforecasts = pd.concat(forecasts, axis=1)\n\nprint(forecasts.iloc[:5, :5])",
"_____no_output_____"
]
],
[
[
"We now have a set of three forecasts made at each point in time from 1999Q2 through 2009Q3. We can construct the forecast errors by subtracting each forecast from the actual value of `endog` at that point.",
"_____no_output_____"
]
],
[
[
"# Construct the forecast errors\nforecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)\n\nprint(forecast_errors.iloc[:5, :5])",
"_____no_output_____"
]
],
[
[
"To evaluate our forecasts, we often want to look at a summary value like the root mean square error. Here we can compute that for each horizon by first flattening the forecast errors so that they are indexed by horizon and then computing the root mean square error fore each horizon.",
"_____no_output_____"
]
],
[
[
"# Reindex the forecasts by horizon rather than by date\ndef flatten(column):\n return column.dropna().reset_index(drop=True)\n\nflattened = forecast_errors.apply(flatten)\nflattened.index = (flattened.index + 1).rename('horizon')\n\nprint(flattened.iloc[:3, :5])",
"_____no_output_____"
],
[
"# Compute the root mean square error\nrmse = (flattened**2).mean(axis=1)**0.5\n\nprint(rmse)",
"_____no_output_____"
]
],
[
[
"#### Using `extend`\n\nWe can check that we get similar forecasts if we instead use the `extend` method, but that they are not exactly the same as when we use `append` with the `refit=True` argument. This is because `extend` does not re-estimate the parameters given the new observation.",
"_____no_output_____"
]
],
[
[
"# Setup forecasts\nnforecasts = 3\nforecasts = {}\n\n# Get the number of initial training observations\nnobs = len(endog)\nn_init_training = int(nobs * 0.8)\n\n# Create model for initial training sample, fit parameters\ninit_training_endog = endog.iloc[:n_init_training]\nmod = sm.tsa.SARIMAX(training_endog, order=(1, 0, 0), trend='c')\nres = mod.fit()\n\n# Save initial forecast\nforecasts[training_endog.index[-1]] = res.forecast(steps=nforecasts)\n\n# Step through the rest of the sample\nfor t in range(n_init_training, nobs):\n # Update the results by appending the next observation\n updated_endog = endog.iloc[t:t+1]\n res = res.extend(updated_endog)\n \n # Save the new set of forecasts\n forecasts[updated_endog.index[0]] = res.forecast(steps=nforecasts)\n\n# Combine all forecasts into a dataframe\nforecasts = pd.concat(forecasts, axis=1)\n\nprint(forecasts.iloc[:5, :5])",
"_____no_output_____"
],
[
"# Construct the forecast errors\nforecast_errors = forecasts.apply(lambda column: endog - column).reindex(forecasts.index)\n\nprint(forecast_errors.iloc[:5, :5])",
"_____no_output_____"
],
[
"# Reindex the forecasts by horizon rather than by date\ndef flatten(column):\n return column.dropna().reset_index(drop=True)\n\nflattened = forecast_errors.apply(flatten)\nflattened.index = (flattened.index + 1).rename('horizon')\n\nprint(flattened.iloc[:3, :5])",
"_____no_output_____"
],
[
"# Compute the root mean square error\nrmse = (flattened**2).mean(axis=1)**0.5\n\nprint(rmse)",
"_____no_output_____"
]
],
[
[
"By not re-estimating the parameters, our forecasts are slightly worse (the root mean square error is higher at each horizon). However, the process is faster, even with only 200 datapoints. Using the `%%timeit` cell magic on the cells above, we found a runtime of 570ms using `extend` versus 1.7s using `append` with `refit=True`. (Note that using `extend` is also faster than using `append` with `refit=False`).",
"_____no_output_____"
],
[
"## Indexes\n\nThroughout this notebook, we have been making use of Pandas date indexes with an associated frequency. As you can see, this index marks our data as at a quarterly frequency, between 1959Q1 and 2009Q3.",
"_____no_output_____"
]
],
[
[
"print(endog.index)",
"_____no_output_____"
]
],
[
[
"In most cases, if your data has an associated data/time index with a defined frequency (like quarterly, monthly, etc.), then it is best to make sure your data is a Pandas series with the appropriate index. Here are three examples of this:",
"_____no_output_____"
]
],
[
[
"# Annual frequency, using a PeriodIndex\nindex = pd.period_range(start='2000', periods=4, freq='A')\nendog1 = pd.Series([1, 2, 3, 4], index=index)\nprint(endog1.index)",
"_____no_output_____"
],
[
"# Quarterly frequency, using a DatetimeIndex\nindex = pd.date_range(start='2000', periods=4, freq='QS')\nendog2 = pd.Series([1, 2, 3, 4], index=index)\nprint(endog2.index)",
"_____no_output_____"
],
[
"# Monthly frequency, using a DatetimeIndex\nindex = pd.date_range(start='2000', periods=4, freq='M')\nendog3 = pd.Series([1, 2, 3, 4], index=index)\nprint(endog3.index)",
"_____no_output_____"
]
],
[
[
"In fact, if your data has an associated date/time index, it is best to use that even if does not have a defined frequency. An example of that kind of index is as follows - notice that it has `freq=None`:",
"_____no_output_____"
]
],
[
[
"index = pd.DatetimeIndex([\n '2000-01-01 10:08am', '2000-01-01 11:32am',\n '2000-01-01 5:32pm', '2000-01-02 6:15am'])\nendog4 = pd.Series([0.2, 0.5, -0.1, 0.1], index=index)\nprint(endog4.index)",
"_____no_output_____"
]
],
[
[
"You can still pass this data to Statsmodels' model classes, but you will get the following warning, that no frequency data was found:",
"_____no_output_____"
]
],
[
[
"mod = sm.tsa.SARIMAX(endog4)\nres = mod.fit()",
"_____no_output_____"
]
],
[
[
"What this means is that you cannot specify forecasting steps by dates, and the output of the `forecast` and `get_forecast` methods will not have associated dates. The reason is that without a given frequency, there is no way to determine what date each forecast should be assigned to. In the example above, there is no pattern to the date/time stamps of the index, so there is no way to determine what the next date/time should be (should it be in the morning of 2000-01-02? the afternoon? or maybe not until 2000-01-03?).\n\nFor example, if we forecast one-step-ahead:",
"_____no_output_____"
]
],
[
[
"res.forecast(1)",
"_____no_output_____"
]
],
[
[
"The index associated with the new forecast is `4`, because if the given data had an integer index, that would be the next value. A warning is given letting the user know that the index is not a date/time index.\n\nIf we try to specify the steps of the forecast using a date, we will get the following exception:\n\n KeyError: 'The `end` argument could not be matched to a location related to the index of the data.'\n",
"_____no_output_____"
]
],
[
[
"# Here we'll catch the exception to prevent printing too much of\n# the exception trace output in this notebook\ntry:\n res.forecast('2000-01-03')\nexcept KeyError as e:\n print(e)",
"_____no_output_____"
]
],
[
[
"Ultimately there is nothing wrong with using data that does not have an associated date/time frequency, or even using data that has no index at all, like a Numpy array. However, if you can use a Pandas series with an associated frequency, you'll have more options for specifying your forecasts and get back results with a more useful index.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6126f71753137dc8164dd0e6bf726cc3329ad | 7,289 | ipynb | Jupyter Notebook | Notebooks/From Tutorial.ipynb | MikeAnderson89/Swiss_Re_Accident_Risk_Scores | 4028b5523812b135bdcfdb1e0b17a4672d2a7930 | [
"MIT"
] | null | null | null | Notebooks/From Tutorial.ipynb | MikeAnderson89/Swiss_Re_Accident_Risk_Scores | 4028b5523812b135bdcfdb1e0b17a4672d2a7930 | [
"MIT"
] | null | null | null | Notebooks/From Tutorial.ipynb | MikeAnderson89/Swiss_Re_Accident_Risk_Scores | 4028b5523812b135bdcfdb1e0b17a4672d2a7930 | [
"MIT"
] | null | null | null | 28.924603 | 129 | 0.461106 | [
[
[
"import pandas as pd\nimport numpy as np\nfrom bayes_opt import BayesianOptimization\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import cross_val_score\nfrom Data_Processing import DataProcessing\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"pop = pd.read_csv('../Data/population.csv')\ntrain = pd.read_csv('../Data/train.csv')\ntest = pd.read_csv('../Data/test.csv')",
"_____no_output_____"
],
[
"X, y, test = DataProcessing(train, test, pop)\npca = PCA(n_components=20)\nX = pca.fit_transform(X)\ny = y.ravel()",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"#Bayesian optimization\ndef bayesian_optimization(dataset, function, parameters):\n X_train, y_train, X_test, y_test = dataset\n n_iterations = 5\n gp_params = {\"alpha\": 1e-4}\n\n BO = BayesianOptimization(function, parameters)\n BO.maximize(n_iter=n_iterations, **gp_params)\n\n return BO.max",
"_____no_output_____"
],
[
"def rfc_optimization(cv_splits):\n def function(n_estimators, max_depth, min_samples_split):\n return cross_val_score(\n RandomForestRegressor(\n n_estimators=int(max(n_estimators,0)), \n max_depth=int(max(max_depth,1)),\n min_samples_split=int(max(min_samples_split,2)), \n n_jobs=-1, \n random_state=42), \n X=X_train, \n y=y_train, \n cv=cv_splits,\n scoring=\"neg_mean_squared_error\",\n n_jobs=-1).mean()\n\n parameters = {\"n_estimators\": (10, 1000),\n \"max_depth\": (1, 150),\n \"min_samples_split\": (2, 10)\n# \"criterion\": ('squared_error', 'absolute_error', 'poisson'),\n# 'bootstrap': (True, False)\n }\n \n return function, parameters",
"_____no_output_____"
],
[
"def xgb_optimization(cv_splits, eval_set):\n def function(eta, gamma, max_depth):\n return cross_val_score(\n xgb.XGBClassifier(\n objective=\"binary:logistic\",\n learning_rate=max(eta, 0),\n gamma=max(gamma, 0),\n max_depth=int(max_depth), \n seed=42,\n nthread=-1,\n scale_pos_weight = len(y_train[y_train == 0])/\n len(y_train[y_train == 1])), \n X=X_train, \n y=y_train, \n cv=cv_splits,\n scoring=\"roc_auc\",\n fit_params={\n \"early_stopping_rounds\": 10, \n \"eval_metric\": \"auc\", \n \"eval_set\": eval_set},\n n_jobs=-1).mean()\n\n parameters = {\"eta\": (0.001, 0.4),\n \"gamma\": (0, 20),\n \"max_depth\": (1, 2000),\n \"criterion\": ('squared_error', 'absolute_error', 'poisson'),\n 'bootstrap': (True, False),\n }\n \n return function, parameters",
"_____no_output_____"
],
[
"#Train model\ndef train(X_train, y_train, X_test, y_test, function, parameters):\n dataset = (X_train, y_train, X_test, y_test)\n cv_splits = 4\n \n best_solution = bayesian_optimization(dataset, function, parameters) \n params = best_solution[\"params\"]\n\n model = RandomForestRegressor(\n n_estimators=int(max(params[\"n_estimators\"], 0)),\n max_depth=int(max(params[\"max_depth\"], 1)),\n min_samples_split=int(max(params[\"min_samples_split\"], 2)), \n n_jobs=-1, \n random_state=42)\n\n model.fit(X_train, y_train)\n \n return model",
"_____no_output_____"
],
[
"rfc, params = rfc_optimization(4)",
"_____no_output_____"
],
[
"model = train(X_train, y_train, X_test, y_test, rfc, params)",
"| iter | target | max_depth | min_sa... | n_esti... |\n-------------------------------------------------------------\n"
],
[
"y_true = y_test\ny_pred = model.predict(X_test)\n\nmean_squared_error(y_true, y_pred)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c620a9cba002b5b0f76a4c5c5077c4ad25d1db | 4,740 | ipynb | Jupyter Notebook | Naas/Naas_Get_help.ipynb | Charles-de-Montigny/awesome-notebooks | 79485142ba557e9c20e6f6dca4fdc12a3443813e | [
"BSD-3-Clause"
] | 1,114 | 2020-09-28T07:32:23.000Z | 2022-03-31T22:35:50.000Z | Naas/Naas_Get_help.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | 298 | 2020-10-29T09:39:17.000Z | 2022-03-31T15:24:44.000Z | Naas/Naas_Get_help.ipynb | mmcfer/awesome-notebooks | 8d2892e40db480a323049e04decfefac45904af4 | [
"BSD-3-Clause"
] | 153 | 2020-09-29T06:07:39.000Z | 2022-03-31T17:41:16.000Z | 25.347594 | 983 | 0.601477 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Naas - Get help\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Naas/Naas_Get_help.ipynb\" target=\"_parent\"><img src=\"https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg==\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #naas #help",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import needed library",
"_____no_output_____"
]
],
[
[
"import naas",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Open help chatbox",
"_____no_output_____"
]
],
[
[
"naas.open_help()",
"_____no_output_____"
]
],
[
[
"### Close help chatbox",
"_____no_output_____"
]
],
[
[
"naas.close_help()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c62c551ba7aec0658ce80fd22dfe8694ab5df6 | 2,086 | ipynb | Jupyter Notebook | docs/chap02-02-commentaires.ipynb | bellash13/SmartAcademyPython | 44d0f6db0fcdcbbf1449a45b073a2b3182a19714 | [
"MIT"
] | null | null | null | docs/chap02-02-commentaires.ipynb | bellash13/SmartAcademyPython | 44d0f6db0fcdcbbf1449a45b073a2b3182a19714 | [
"MIT"
] | null | null | null | docs/chap02-02-commentaires.ipynb | bellash13/SmartAcademyPython | 44d0f6db0fcdcbbf1449a45b073a2b3182a19714 | [
"MIT"
] | null | null | null | 29.8 | 221 | 0.614094 | [
[
[
"<h3>Les commentaires</h3>\r\n<p>Lorsque vous écrivez un programme, vous serez le plus souvent appelé à commenter vos lignes de code afin de vous les expliquer à vous ou aux autres programmeurs qui vous liront; c'est une bonne pratique.</p>\r\n<p>Les commentaires entre codes facilitent aux humains la compréhension du code et par conséquent, ils ne sont pas ignorés par Python. Voici notre programme de tout à l'heure, commenté.</p>",
"_____no_output_____"
]
],
[
[
"'''\r\nFonction pour dire bonjour à l'utilisateur :\r\ncette fonction va afficher un message sympa à l'écran\r\n'''\r\ndef direBonjour():\r\n print('Bonjour cher utilisateur')\r\n\r\n#ici j'appelle ma fonction\r\ndireBonjour()",
"Bonjour cher utilisateur\n"
]
],
[
[
"<p>Lorsque le programme rencontre des lignes des textes entre les trois guillemets simples <code>'''</code> et <code>'''</code> il les ignore à l'exécution: il l'utilise pour la documentation du code.</p>\r\n<p>Une autre façon de commenter le code sur une seule ligne est d'utiliser le caractère <code>#</code> suivi du commentaire comme nous l'avons fait sur la ligne </p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6351f85c644c2f4c6d87e9bb910c2f6666a71 | 718 | ipynb | Jupyter Notebook | 04_visualise.ipynb | YiweiMao/openhsi | 2b8631bd391d247b1a03be48ebb44c9e7d5df2ab | [
"CC-BY-3.0"
] | 1 | 2020-12-24T12:46:13.000Z | 2020-12-24T12:46:13.000Z | 04_visualise.ipynb | YiweiMao/openhsi | 2b8631bd391d247b1a03be48ebb44c9e7d5df2ab | [
"CC-BY-3.0"
] | 2 | 2021-02-05T07:30:40.000Z | 2022-02-26T09:14:59.000Z | 04_visualise.ipynb | YiweiMao/openhsi | 2b8631bd391d247b1a03be48ebb44c9e7d5df2ab | [
"CC-BY-3.0"
] | null | null | null | 17.095238 | 67 | 0.530641 | [
[
[
"# default_exp visualise",
"_____no_output_____"
]
],
[
[
"# Visualise\n\n> Visualise the datacube interactively. ",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c6442956a9ffe115728ba14c068c351d0ae58d | 9,991 | ipynb | Jupyter Notebook | pgssalientobject.ipynb | pranit570/salientobjectsdetection | 995ae1753b431c467e24a1253cac49da4822e5f6 | [
"Apache-2.0"
] | 1 | 2020-05-28T20:37:53.000Z | 2020-05-28T20:37:53.000Z | pgssalientobject.ipynb | pranit570/salientobjectsdetection | 995ae1753b431c467e24a1253cac49da4822e5f6 | [
"Apache-2.0"
] | null | null | null | pgssalientobject.ipynb | pranit570/salientobjectsdetection | 995ae1753b431c467e24a1253cac49da4822e5f6 | [
"Apache-2.0"
] | null | null | null | 35.682143 | 245 | 0.531979 | [
[
[
"from keras.layers import Input, Dense, merge\nfrom keras.models import Model\nfrom keras.layers import Convolution2D, MaxPooling2D, Reshape, BatchNormalization\nfrom keras.layers import Activation, Dropout, Flatten, Dense",
"_____no_output_____"
],
[
"def default_categorical():\n img_in = Input(shape=(120, 160, 3), name='img_in') # First layer, input layer, Shape comes from camera.py resolution, RGB\n x = img_in\n x = Convolution2D(24, (5,5), strides=(2,2), activation='relu', name = 'conv1')(x) # 24 features, 5 pixel x 5 pixel kernel (convolution, feauture) window, 2wx2h stride, relu activation\n x = Convolution2D(32, (5,5), strides=(2,2), activation='relu', name = 'conv2')(x) # 32 features, 5px5p kernel window, 2wx2h stride, relu activatiion\n x = Convolution2D(64, (5,5), strides=(2,2), activation='relu', name = 'conv3')(x) # 64 features, 5px5p kernal window, 2wx2h stride, relu\n x = Convolution2D(64, (3,3), strides=(2,2), activation='relu', name = 'conv4')(x) # 64 features, 3px3p kernal window, 2wx2h stride, relu\n x = Convolution2D(64, (3,3), strides=(1,1), activation='relu', name = 'conv5')(x) # 64 features, 3px3p kernal window, 1wx1h stride, relu\n\n # Possibly add MaxPooling (will make it less sensitive to position in image). Camera angle fixed, so may not to be needed\n\n x = Flatten(name='flattened')(x) # Flatten to 1D (Fully connected)\n x = Dense(100, activation='relu', name = 'dense1')(x) # Classify the data into 100 features, make all negatives 0\n x = Dropout(.1)(x) # Randomly drop out (turn off) 10% of the neurons (Prevent overfitting)\n x = Dense(50, activation='relu', name = 'dense2')(x) # Classify the data into 50 features, make all negatives 0\n x = Dropout(.1)(x) # Randomly drop out 10% of the neurons (Prevent overfitting)\n #categorical output of the angle\n angle_out = Dense(15, activation='softmax', name='angle_out')(x) # Connect every input with every output and output 15 hidden units. Use Softmax to give percentage. 15 categories and find best one based off percentage 0.0-1.0\n \n #continous output of throttle\n throttle_out = Dense(1, activation='relu', name='throttle_out')(x) # Reduce to 1 number, Positive number only\n \n model = Model(inputs=[img_in], outputs=[angle_out, throttle_out])\n \n return model",
"_____no_output_____"
],
[
"model = default_categorical()\nmodel.load_weights('weights.h5')",
"_____no_output_____"
],
[
"img_in = Input(shape=(120, 160, 3), name='img_in')\nx = img_in\nx = Convolution2D(24, (5,5), strides=(2,2), activation='relu', name='conv1')(x)\nx = Convolution2D(32, (5,5), strides=(2,2), activation='relu', name='conv2')(x)\nx = Convolution2D(64, (5,5), strides=(2,2), activation='relu', name='conv3')(x)\nx = Convolution2D(64, (3,3), strides=(2,2), activation='relu', name='conv4')(x)\nconv_5 = Convolution2D(64, (3,3), strides=(1,1), activation='relu', name='conv5')(x)\nconvolution_part = Model(inputs=[img_in], outputs=[conv_5])",
"_____no_output_____"
],
[
"for layer_num in ('1', '2', '3', '4', '5'):\n convolution_part.get_layer('conv' + layer_num).set_weights(model.get_layer('conv' + layer_num).get_weights())",
"_____no_output_____"
],
[
"from keras import backend as K\n\ninp = convolution_part.input # input placeholder\noutputs = [layer.output for layer in convolution_part.layers[1:]] # all layer outputs\nfunctor = K.function([inp], outputs)",
"_____no_output_____"
],
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"import pdb",
"_____no_output_____"
],
[
"kernel_3x3 = tf.constant(np.array([\n [[[1]], [[1]], [[1]]], \n [[[1]], [[1]], [[1]]], \n [[[1]], [[1]], [[1]]]\n]), tf.float32)\n\nkernel_5x5 = tf.constant(np.array([\n [[[1]], [[1]], [[1]], [[1]], [[1]]], \n [[[1]], [[1]], [[1]], [[1]], [[1]]], \n [[[1]], [[1]], [[1]], [[1]], [[1]]],\n [[[1]], [[1]], [[1]], [[1]], [[1]]],\n [[[1]], [[1]], [[1]], [[1]], [[1]]]\n]), tf.float32)\n\nlayers_kernels = {5: kernel_3x3, 4: kernel_3x3, 3: kernel_5x5, 2: kernel_5x5, 1: kernel_5x5}\n\nlayers_strides = {5: [1, 1, 1, 1], 4: [1, 2, 2, 1], 3: [1, 2, 2, 1], 2: [1, 2, 2, 1], 1: [1, 2, 2, 1]}\n\ndef compute_visualisation_mask(img):\n# pdb.set_trace()\n activations = functor([np.array([img])])\n activations = [np.reshape(img, (1, img.shape[0], img.shape[1], img.shape[2]))] + activations\n upscaled_activation = np.ones((3, 6))\n for layer in [5, 4, 3, 2, 1]:\n averaged_activation = np.mean(activations[layer], axis=3).squeeze(axis=0) * upscaled_activation\n output_shape = (activations[layer - 1].shape[1], activations[layer - 1].shape[2])\n x = tf.constant(\n np.reshape(averaged_activation, (1,averaged_activation.shape[0],averaged_activation.shape[1],1)),\n tf.float32\n )\n conv = tf.nn.conv2d_transpose(\n x, layers_kernels[layer],\n output_shape=(1,output_shape[0],output_shape[1], 1), \n strides=layers_strides[layer], \n padding='VALID'\n )\n with tf.Session() as session:\n result = session.run(conv)\n upscaled_activation = np.reshape(result, output_shape)\n final_visualisation_mask = upscaled_activation\n return (final_visualisation_mask - np.min(final_visualisation_mask))/(np.max(final_visualisation_mask) - np.min(final_visualisation_mask))",
"_____no_output_____"
],
[
"import cv2\nimport numpy as np",
"_____no_output_____"
],
[
"from matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nfrom matplotlib import animation\nfrom IPython.display import display, HTML\n\ndef plot_movie_mp4(image_array):\n dpi = 72.0\n xpixels, ypixels = image_array[0].shape[0], image_array[0].shape[1]\n fig = plt.figure(figsize=(ypixels/dpi, xpixels/dpi), dpi=dpi)\n im = plt.figimage(image_array[0])\n\n def animate(i):\n im.set_array(image_array[i])\n return (im,)\n\n anim = animation.FuncAnimation(fig, animate, frames=len(image_array))\n display(HTML(anim.to_html5_video()))",
"_____no_output_____"
],
[
"from glob import iglob",
"_____no_output_____"
],
[
"imgs = []\nalpha = 0.004\nbeta = 1.0 - alpha\ncounter = 0\nfor path in sorted(iglob('imgs/*.jpg')):\n img = cv2.imread(path)\n salient_mask = compute_visualisation_mask(img)\n salient_mask_stacked = np.dstack((salient_mask,salient_mask))\n salient_mask_stacked = np.dstack((salient_mask_stacked,salient_mask))\n blend = cv2.addWeighted(img.astype('float32'), alpha, salient_mask_stacked, beta, 0.0)\n imgs.append(blend)\n counter += 1\n if counter >= 400:\n break",
"_____no_output_____"
],
[
"plot_movie_mp4(imgs)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c66926206d3025cb1f6fd0041324754aa1fedd | 33,820 | ipynb | Jupyter Notebook | ML/Teste2 - HMM/.ipynb_checkpoints/Teste2- v1-checkpoint.ipynb | Vinschers/egg-waves | 68ef21179f5f10045bc329d427787c84569a0c5e | [
"MIT"
] | null | null | null | ML/Teste2 - HMM/.ipynb_checkpoints/Teste2- v1-checkpoint.ipynb | Vinschers/egg-waves | 68ef21179f5f10045bc329d427787c84569a0c5e | [
"MIT"
] | 2 | 2020-04-19T19:13:42.000Z | 2020-04-19T19:13:42.000Z | ML/Teste2 - HMM/.ipynb_checkpoints/Teste2- v1-checkpoint.ipynb | Vinschers/egg-wave | 68ef21179f5f10045bc329d427787c84569a0c5e | [
"MIT"
] | null | null | null | 76.343115 | 16,568 | 0.727912 | [
[
[
"# Lendo/Tratando dados",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"df = pd.read_csv('emotions.csv')",
"_____no_output_____"
],
[
"sns.heatmap(df.isnull(),yticklabels=False,cbar=False,cmap='viridis')",
"_____no_output_____"
],
[
"df.columns[2548]",
"_____no_output_____"
]
],
[
[
"# Treinando modelo",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\nx_train, x_test, y_train, y_test = train_test_split(df.drop('label',axis=1), \n df['label'], test_size=0.30)",
"_____no_output_____"
],
[
"from sklearn.hmm import MultinomialHMM",
"_____no_output_____"
],
[
"dtree = RandomForestClassifier(n_estimators = 200)\ndtree.fit(x_train, y_train)\npredict = dtree.predict(x_test)",
"_____no_output_____"
],
[
"predict",
"_____no_output_____"
],
[
"from sklearn.metrics import classification_report",
"_____no_output_____"
],
[
"print(classification_report(y_test, predict))",
" precision recall f1-score support\n\n NEGATIVE 0.99 0.98 0.98 203\n NEUTRAL 1.00 1.00 1.00 206\n POSITIVE 0.98 0.98 0.98 231\n\n accuracy 0.99 640\n macro avg 0.99 0.99 0.99 640\nweighted avg 0.99 0.99 0.99 640\n\n"
],
[
"import pickle\npickle.dump(dtree, open('DEUS VULT.sat', 'wb'))\n#modelo = pickle.load(open('DEUS VULT.sat', 'rb'))",
"_____no_output_____"
]
],
[
[
"# Teste para Erro",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(df.drop('label',axis=1), \n df['label'], test_size=0.30)\n\ny_train_forjado = []\n\nfor i in range(0, 1492):\n y_train_forjado.append('POSITIVE')",
"_____no_output_____"
],
[
"type(y_train)",
"_____no_output_____"
],
[
"dtree = DecisionTreeClassifier()\ndtree.fit(x_train, pd.Series(y_train_forjado))\npredict = dtree.predict(x_test)",
"_____no_output_____"
],
[
"print(classification_report(y_test, predict))",
" precision recall f1-score support\n\n NEGATIVE 0.00 0.00 0.00 228\n NEUTRAL 0.00 0.00 0.00 221\n POSITIVE 0.30 1.00 0.46 191\n\n accuracy 0.30 640\n macro avg 0.10 0.33 0.15 640\nweighted avg 0.09 0.30 0.14 640\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c679bb502ad5adb1cf204dbab391e7760a819d | 401,591 | ipynb | Jupyter Notebook | locations.ipynb | hystx1/csc280 | 10a1654fb7f9d4b1b62bc99af1eb02340621c9af | [
"MIT"
] | 1 | 2021-06-14T21:34:26.000Z | 2021-06-14T21:34:26.000Z | locations.ipynb | hystx1/csc280 | 10a1654fb7f9d4b1b62bc99af1eb02340621c9af | [
"MIT"
] | null | null | null | locations.ipynb | hystx1/csc280 | 10a1654fb7f9d4b1b62bc99af1eb02340621c9af | [
"MIT"
] | 2 | 2019-01-29T18:18:59.000Z | 2019-09-11T13:52:12.000Z | 497.634449 | 85,552 | 0.942969 | [
[
[
"# Notebook to visualize location data",
"_____no_output_____"
]
],
[
[
"import csv",
"_____no_output_____"
],
[
"# count the number of Starbucks in DC\nwith open('starbucks.csv') as file:\n csvinput = csv.reader(file)\n\n acc = 0\n for record in csvinput:\n if 'DC' in record[3]:\n acc += 1\n \nprint( acc )",
"75\n"
],
[
"def parse_locations(csv_iterator,state=''):\n \"\"\" strip out long/lat and convert to a list of floating point 2-tuples --\n optionally, filter by a specified state \"\"\"\n return [ ( float(row[0]), float(row[1])) for row in csv_iterator \n if state in row[3]]\n\ndef get_locations(filename, state=''):\n \"\"\" read a list of longitude/latitude pairs from a csv file, \n optionally, filter by a specified state \"\"\"\n with open(filename, 'r') as input_file:\n csvinput = csv.reader(input_file)\n location_data = parse_locations(csvinput,state)\n return location_data",
"_____no_output_____"
],
[
"# get the data from all starbucks locations\nstarbucks_locations = get_locations('starbucks.csv')\n# get the data from burger locations\"\nburger_locations = get_locations('burgerking.csv') + \\\n get_locations('mcdonalds.csv') + \\\n get_locations('wendys.csv')",
"_____no_output_____"
],
[
"# look at the first few (10) data points of each\nfor n in range(10):\n print( starbucks_locations[n] )\n \nprint() \n \nfor n in range(10):\n print( burger_locations[n] ) ",
"(-159.459214, 21.879285)\n(-159.380923, 21.97116)\n(-159.375636, 21.971295)\n(-159.34927, 21.979465)\n(-159.315957, 22.078248)\n(-158.18458, 21.434788)\n(-158.116013, 21.343991)\n(-158.08179, 21.3341)\n(-158.061706, 21.648414)\n(-158.058557, 21.483994)\n\n(-149.95032, 61.13782)\n(-149.909, 61.19542)\n(-149.88789, 61.21743)\n(-149.86801, 61.18736)\n(-149.86568, 61.14431)\n(-149.83383, 61.18072)\n(-149.81281, 61.21526)\n(-149.77812, 61.19598)\n(-149.57101, 61.32533)\n(-149.8196, 61.27584)\n"
],
[
"# a common, powerful plotting library\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# set figure size\nplt.figure(figsize=(12, 9))\n\n# get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\n# plot the data\nplt.scatter(*zip(*starbucks_locations), s=1)\n\nplt.legend([\"Starbucks\"])\n\n# jupyter automatically plots this inline. On the console, you need to invoke plt.show()\n# FYI: In that case, execution halts until you close the window it opens.",
"_____no_output_____"
],
[
"# set figure size\nplt.figure(figsize=(12, 9))\n\n# get the axes of the plot and set them to be equal-aspect and limited (specify bounds) by data\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\n# plot the data\nplt.scatter(*zip(*burger_locations), color='green', s=1)\n\nplt.legend([\"Burgers\"])",
"_____no_output_____"
],
[
"lat, lon = zip(*get_locations('burgerking.csv'))\n\nmin_lat = min(lat)\nmax_lat = max(lat)\nmin_lon = min(lon)\nmax_lon = max(lon)\n\nlat, lon = zip(*get_locations('mcdonalds.csv'))\n\nmin_lat = min(min_lat,min(lat))\nmax_lat = max(max_lat,max(lat))\nmin_lon = min(min_lon,min(lon))\nmax_lon = max(max_lon,max(lon))\n\nlat, lon = zip(*get_locations('wendys.csv'))\n\nmin_lat = min(min_lat,min(lat))\nmax_lat = max(max_lat,max(lat))\nmin_lon = min(min_lon,min(lon))\nmax_lon = max(max_lon,max(lon))\n\nlat, lon = zip(*get_locations('pizzahut.csv'))\n\nmin_lat = min(min_lat,min(lat))\nmax_lat = max(max_lat,max(lat))\nmin_lon = min(min_lon,min(lon))\nmax_lon = max(max_lon,max(lon))\n",
"_____no_output_____"
],
[
"# set figure size\nfig = plt.figure(figsize=(12, 9))\n#fig = plt.figure()\n\n\n\n\nplt.subplot(2,2,1)\nplt.scatter(*zip(*get_locations('burgerking.csv')), color='black', s=1, alpha=0.2)\nplt.xlim(min_lat-5,max_lat+5)\nplt.ylim(min_lon-5,max_lon+5)\nplt.gca().set_aspect('equal')\nplt.subplot(2,2,2)\nplt.scatter(*zip(*get_locations('mcdonalds.csv')), color='black', s=1, alpha=0.2)\nplt.xlim(min_lat-5,max_lat+5)\nplt.ylim(min_lon-5,max_lon+5)\nplt.gca().set_aspect('equal')\nplt.subplot(2,2,3)\nplt.scatter(*zip(*get_locations('wendys.csv')), color='black', s=1, alpha=0.2)\nplt.xlim(min_lat-5,max_lat+5)\nplt.ylim(min_lon-5,max_lon+5)\nplt.gca().set_aspect('equal')\nplt.subplot(2,2,4)\nplt.scatter(*zip(*get_locations('pizzahut.csv')), color='black', s=1, alpha=0.2)\nplt.xlim(min_lat-5,max_lat+5)\nplt.ylim(min_lon-5,max_lon+5)\nplt.gca().set_aspect('equal')\n\n#plt.scatter(*zip(*get_locations('dollar-tree.csv')), color='black', s=1, alpha=0.2)",
"_____no_output_____"
],
[
"# get the starbucks in DC\nstarbucks_dc_locations = get_locations('starbucks.csv', state='DC')\n\nburger_dc_locations = get_locations('burgerking.csv', state='DC') + \\\n get_locations('mcdonalds.csv', state='DC') + \\\n get_locations('wendys.csv', state='DC')",
"_____no_output_____"
],
[
"# show the first 10 locations of each:\nfor n in range(10):\n print( starbucks_dc_locations[n] )\n \nprint() \n \nfor n in range(min(10,len(burger_dc_locations))):\n print( burger_dc_locations[n] )\n ",
"(-77.102842, 38.926656)\n(-77.095791, 38.91756)\n(-77.095684, 38.944565)\n(-77.085464, 38.960783)\n(-77.084843, 38.933583)\n(-77.079661, 38.948279)\n(-77.07613, 38.912007)\n(-77.074559, 38.963113)\n(-77.073222, 38.935104)\n(-77.071562, 38.920283)\n\n(-77.06535, 38.947)\n(-77.04546, 38.90951)\n(-77.04172, 38.92349)\n(-77.03727, 38.90221)\n(-77.03613, 38.97252)\n(-77.02283, 38.92583)\n(-77.01903, 38.89839)\n(-77.01655, 38.84225)\n(-77.00158, 38.90777)\n(-77.08684, 38.9384)\n"
],
[
"# set figure size\nplt.figure(figsize=(12, 9))\n\n# get the axes of the plot and set them to be equal-aspect and limited by data\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\n# plot the data\nplt.scatter(*zip(*starbucks_dc_locations))\nplt.scatter(*zip(*burger_dc_locations), color='green')",
"_____no_output_____"
],
[
"# We also want to plot the DC boundaries, so we have a better idea where these things are\n# the data is contained in DC.txt\n\n# let's inspect it. Observe the format\nwith open('DC.txt') as file:\n for line in file:\n print(line,end='') # lines already end with a newline so don't print another",
" -77.120201 38.791401\n -76.909706 38.994400\n1\n\nDistrict of Columbia\nDC\n25\n -77.120201 38.934200\n -77.042305 38.994400\n -77.036400 38.991402\n -77.008301 38.969601\n -76.909706 38.892700\n -77.038902 38.791401\n -77.036102 38.814800\n -77.040703 38.821602\n -77.039505 38.832100\n -77.045197 38.834599\n -77.046303 38.841202\n -77.033104 38.841599\n -77.031998 38.850399\n -77.038101 38.861801\n -77.042908 38.863400\n -77.039200 38.865700\n -77.040901 38.871101\n -77.045708 38.875099\n -77.046600 38.871201\n -77.049400 38.870602\n -77.054398 38.879002\n -77.058556 38.879955\n -77.068504 38.899700\n -77.090508 38.904099\n -77.101501 38.910999\n\n"
],
[
"with open('DC.txt') as file:\n # get the lower left and upper right coords for the bounding box\n ll_long, ll_lat = map(float, next(file).split())\n ur_long, ur_lat = map(float, next(file).split())\n # get the number of regions \n num_records = int(next(file))\n # there better just be one\n assert num_records == 1\n # then a blank line\n next(file)\n # Title of \"county\"\n county_name = next(file).rstrip() # removes newline at end\n # \"State\" county resides in\n state_name = next(file).rstrip()\n # this is supposed to be DC\n assert state_name == \"DC\"\n # number of points to expect\n num_pairs = int(next(file))\n dc_boundary = [ tuple(map(float,next(file).split())) for n in range(num_pairs)]\n \n",
"_____no_output_____"
],
[
"dc_boundary",
"_____no_output_____"
],
[
"# add the beginning to the end so that it closes up\ndc_boundary.append(dc_boundary[0])",
"_____no_output_____"
],
[
"# draw it!\n\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\nplt.plot(*zip(*dc_boundary))",
"_____no_output_____"
],
[
"# draw both the starbucks location and DC boundary together\n\nplt.figure(figsize=(12, 9))\n\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\nplt.scatter(*zip(*starbucks_dc_locations))\nplt.scatter(*zip(*burger_dc_locations), color='green')\nplt.plot(*zip(*dc_boundary))",
"_____no_output_____"
],
[
"# draw both the starbucks location and DC boundary together\n\nplt.figure(figsize=(12, 9))\n\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\nplt.scatter(*zip(*get_locations('burgerking.csv', state='DC')), color='red')\nplt.scatter(*zip(*get_locations('mcdonalds.csv', state='DC')), color='green')\nplt.scatter(*zip(*get_locations('wendys.csv', state='DC')), color='blue')\nplt.scatter(*zip(*get_locations('pizzahut.csv', state='DC')), color='yellow')\n\nplt.scatter(*zip(*get_locations('dollar-tree.csv', state='DC')), color='black')\nplt.plot(*zip(*dc_boundary))",
"_____no_output_____"
]
],
[
[
"### But where's AU?",
"_____no_output_____"
]
],
[
[
"# draw both the starbucks location and DC boundary together\n\nplt.figure(figsize=(12, 9))\n\nax = plt.axes()\nax.set_aspect('equal', 'datalim')\n\nplt.scatter(*zip(*starbucks_dc_locations))\nplt.scatter(*zip(*burger_dc_locations), color='green')\nplt.plot(*zip(*dc_boundary))\n\n# add a red dot right over Anderson\nplt.scatter([-77.0897511],[38.9363019],color='red')",
"_____no_output_____"
],
[
"from ipyleaflet import Map, basemaps, basemap_to_tiles, Marker, CircleMarker\n\nm = Map(layers=(basemap_to_tiles(basemaps.OpenStreetMap.HOT), ),\n center=(38.898082, -77.036696),\n zoom=11)\n\n# marker for AU\nmarker = Marker(location=(38.937831, -77.088852), radius=2, color='green')\nm.add_layer(marker)\n\nfor (long,lat) in starbucks_dc_locations:\n marker = CircleMarker(location=(lat,long), radius=1, color='steelblue')\n m.add_layer(marker);\n\nfor (long,lat) in burger_dc_locations:\n marker = CircleMarker(location=(lat,long), radius=1, color='green')\n m.add_layer(marker);\n \n \n\nm",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c67b94becf98774ff3a031e4631ccd4d254d88 | 948 | ipynb | Jupyter Notebook | Assignment-3-Day-5.ipynb | Amitkumarpanda192/LetsUpgrade-Python | fa8afd1f24a438de0231c1ca8bad90a4755545c1 | [
"Apache-2.0"
] | null | null | null | Assignment-3-Day-5.ipynb | Amitkumarpanda192/LetsUpgrade-Python | fa8afd1f24a438de0231c1ca8bad90a4755545c1 | [
"Apache-2.0"
] | null | null | null | Assignment-3-Day-5.ipynb | Amitkumarpanda192/LetsUpgrade-Python | fa8afd1f24a438de0231c1ca8bad90a4755545c1 | [
"Apache-2.0"
] | null | null | null | 18.96 | 54 | 0.517932 | [
[
[
"li = ['hey this is sai','i am in mumbai']\nprint(list(map(lambda x:x.title(),li)))",
"['Hey This Is Sai', 'I Am In Mumbai']\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c68115865c065a004da4989bde73ab00c2e317 | 710 | ipynb | Jupyter Notebook | notebooks/web_app.ipynb | DiegoFreitasDS/house_sales_prediction | 7ae3fa9a487afbea2076bf5aeb8c061db5683134 | [
"MIT"
] | null | null | null | notebooks/web_app.ipynb | DiegoFreitasDS/house_sales_prediction | 7ae3fa9a487afbea2076bf5aeb8c061db5683134 | [
"MIT"
] | null | null | null | notebooks/web_app.ipynb | DiegoFreitasDS/house_sales_prediction | 7ae3fa9a487afbea2076bf5aeb8c061db5683134 | [
"MIT"
] | null | null | null | 710 | 710 | 0.749296 | [
[
[
"https://stackoverflow.com/questions/64918649/streamlit-with-colab-and-pyngrok-failed-to-complete-tunnel-connection-versio\n\nhttps://colab.research.google.com/github/mrm8488/shared_colab_notebooks/blob/master/Create_streamlit_app.ipynb#scrollTo=vWmc_s2ezvU0\n\nhttps://medium.com/@jcharistech/how-to-run-streamlit-apps-from-colab-29b969a1bdfc\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c6926546cb321d547e705fd68fb8987f1180ab | 1,957 | ipynb | Jupyter Notebook | notebooks/Hello-DAISY.ipynb | k-sunako/learning-DAISY-feat-desc | eddb3c77472d365cc11d0fd14057b867516549ca | [
"MIT"
] | null | null | null | notebooks/Hello-DAISY.ipynb | k-sunako/learning-DAISY-feat-desc | eddb3c77472d365cc11d0fd14057b867516549ca | [
"MIT"
] | null | null | null | notebooks/Hello-DAISY.ipynb | k-sunako/learning-DAISY-feat-desc | eddb3c77472d365cc11d0fd14057b867516549ca | [
"MIT"
] | null | null | null | 20.175258 | 92 | 0.534492 | [
[
[
"# Dense DAISY feature description\nhttps://scikit-image.org/docs/dev/auto_examples/features_detection/plot_daisy.htmlfrom",
"_____no_output_____"
]
],
[
[
"from skimage.feature import daisy\nfrom skimage import data\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"%%time\nimg = data.camera()\ndescs = daisy(img, step=5, radius=8, visualize=True)",
"CPU times: user 862 ms, sys: 262 ms, total: 1.12 s\nWall time: 1.13 s\n"
],
[
"descs.shape",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c69b5f78c95bda00c59ab318baa6403aedd7e7 | 12,937 | ipynb | Jupyter Notebook | tree.ipynb | matbur/inz | f6be1a685761f99f8c808d8b23f58debf7e19da2 | [
"MIT"
] | null | null | null | tree.ipynb | matbur/inz | f6be1a685761f99f8c808d8b23f58debf7e19da2 | [
"MIT"
] | null | null | null | tree.ipynb | matbur/inz | f6be1a685761f99f8c808d8b23f58debf7e19da2 | [
"MIT"
] | null | null | null | 30.949761 | 92 | 0.313751 | [
[
[
"import pandas as pd\nfrom pydotplus import graph_from_dot_file\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeClassifier, export_graphviz",
"_____no_output_____"
],
[
"fn = 'data.csv'\ndata = pd.read_csv(fn)\n\nprint(data.shape)\ndata.head()",
"(476, 32)\n"
],
[
"class_names = [\n 'Ostre zapalenie wyrostka robaczkowego',\n 'Zapalenie uchyłków jelit',\n 'Niedrożność mechaniczna jelit',\n 'Perforowany wrzód trawienny',\n 'Zapalenie woreczka żółciowego',\n 'Ostre zapalenie trzustki',\n 'Niecharakterystyczny ból brzucha',\n 'Inne przyczyny ostrego bólu brzucha',\n]",
"_____no_output_____"
],
[
"train_x, test_x = train_test_split(data)\n\ntrain_y = train_x.pop('Choroba')\ntest_y = test_x.pop('Choroba')",
"_____no_output_____"
],
[
"tree = DecisionTreeClassifier(random_state=42,\n min_samples_leaf=1,\n )\ntree = tree.fit(train_x, train_y)\ntree",
"_____no_output_____"
],
[
"dot_file = 'tree.dot'\nexport_graphviz(tree,\n out_file=dot_file,\n feature_names=train_x.columns,\n class_names=class_names,\n filled=True,\n impurity=False,\n rounded=True,\n )",
"_____no_output_____"
],
[
"graph = graph_from_dot_file(dot_file)\ngraph.write_pdf('tree.pdf')\ngraph.write_png('tree.png')",
"_____no_output_____"
],
[
"tree.score(test_x, test_y)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c69e73d4a5b7f78d9e3be192f65d3027b869b6 | 1,713 | ipynb | Jupyter Notebook | gs_quant/documentation/02_pricing_and_risk/00_instruments_and_measures/examples/01_rates/000111_swap_future_cashflows.ipynb | webclinic017/gs-quant | ebb8ee5e1d954ab362aa567293906ce51818cfa8 | [
"Apache-2.0"
] | 4 | 2021-05-11T14:35:53.000Z | 2022-03-14T03:52:34.000Z | gs_quant/documentation/02_pricing_and_risk/00_instruments_and_measures/examples/01_rates/000111_swap_future_cashflows.ipynb | webclinic017/gs-quant | ebb8ee5e1d954ab362aa567293906ce51818cfa8 | [
"Apache-2.0"
] | null | null | null | gs_quant/documentation/02_pricing_and_risk/00_instruments_and_measures/examples/01_rates/000111_swap_future_cashflows.ipynb | webclinic017/gs-quant | ebb8ee5e1d954ab362aa567293906ce51818cfa8 | [
"Apache-2.0"
] | null | null | null | 22.246753 | 122 | 0.579101 | [
[
[
"from gs_quant.session import Environment, GsSession\nfrom gs_quant.common import PayReceive, Currency\nfrom gs_quant.instrument import IRSwap\nfrom gs_quant.risk import Cashflows",
"_____no_output_____"
],
[
"# external users should substitute their client id and secret; please skip this step if using internal jupyterhub\nGsSession.use(Environment.PROD, client_id=None, client_secret=None, scopes=('run_analytics',))",
"_____no_output_____"
],
[
"swap = IRSwap(PayReceive.Receive, '5y', Currency.EUR, fixed_rate='atm+10')",
"_____no_output_____"
],
[
"# returns a dataframe of future cashflows \n# note this feature will be expanded to cover portfolios in future releases\nswap.calc(Cashflows)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0c6a06df6df33bcadb3adea49b3cc9e7c1ab3d0 | 214,786 | ipynb | Jupyter Notebook | 02-Improving-Deep-Neural-Networks/week1/Programming-Assignments/Regularization/Regularization+-+v2.ipynb | fera0013/deep-learning-specialization-coursera | bd1d2b3a04f7a9459f7eaafc29f255e3ba6c8c86 | [
"MIT"
] | null | null | null | 02-Improving-Deep-Neural-Networks/week1/Programming-Assignments/Regularization/Regularization+-+v2.ipynb | fera0013/deep-learning-specialization-coursera | bd1d2b3a04f7a9459f7eaafc29f255e3ba6c8c86 | [
"MIT"
] | null | null | null | 02-Improving-Deep-Neural-Networks/week1/Programming-Assignments/Regularization/Regularization+-+v2.ipynb | fera0013/deep-learning-specialization-coursera | bd1d2b3a04f7a9459f7eaafc29f255e3ba6c8c86 | [
"MIT"
] | null | null | null | 198.508318 | 56,104 | 0.876635 | [
[
[
"# Regularization\n\nWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!\n\n**You will learn to:** Use regularization in your deep learning models.\n\nLet's first import the packages you are going to use.",
"_____no_output_____"
]
],
[
[
"# import packages\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec\nfrom reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters\nimport sklearn\nimport sklearn.datasets\nimport scipy.io\nfrom testCases import *\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'",
"_____no_output_____"
]
],
[
[
"**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. \n\n<img src=\"images/field_kiank.png\" style=\"width:600px;height:350px;\">\n<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>\n\n\nThey give you the following 2D dataset from France's past 10 games.",
"_____no_output_____"
]
],
[
[
"train_X, train_Y, test_X, test_Y = load_2D_dataset()",
"_____no_output_____"
]
],
[
[
"Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.\n- If the dot is blue, it means the French player managed to hit the ball with his/her head\n- If the dot is red, it means the other team's player hit the ball with their head\n\n**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.",
"_____no_output_____"
],
[
"**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. \n\nYou will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. ",
"_____no_output_____"
],
[
"## 1 - Non-regularized model\n\nYou will use the following neural network (already implemented for you below). This model can be used:\n- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use \"`lambd`\" instead of \"`lambda`\" because \"`lambda`\" is a reserved keyword in Python. \n- in *dropout mode* -- by setting the `keep_prob` to a value less than one\n\nYou will first try the model without any regularization. Then, you will implement:\n- *L2 regularization* -- functions: \"`compute_cost_with_regularization()`\" and \"`backward_propagation_with_regularization()`\"\n- *Dropout* -- functions: \"`forward_propagation_with_dropout()`\" and \"`backward_propagation_with_dropout()`\"\n\nIn each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.",
"_____no_output_____"
]
],
[
[
"def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):\n \"\"\"\n Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.\n \n Arguments:\n X -- input data, of shape (input size, number of examples)\n Y -- true \"label\" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)\n learning_rate -- learning rate of the optimization\n num_iterations -- number of iterations of the optimization loop\n print_cost -- If True, print the cost every 10000 iterations\n lambd -- regularization hyperparameter, scalar\n keep_prob - probability of keeping a neuron active during drop-out, scalar.\n \n Returns:\n parameters -- parameters learned by the model. They can then be used to predict.\n \"\"\"\n \n grads = {}\n costs = [] # to keep track of the cost\n m = X.shape[1] # number of examples\n layers_dims = [X.shape[0], 20, 3, 1]\n \n # Initialize parameters dictionary.\n parameters = initialize_parameters(layers_dims)\n\n # Loop (gradient descent)\n\n for i in range(0, num_iterations):\n\n # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.\n if keep_prob == 1:\n a3, cache = forward_propagation(X, parameters)\n elif keep_prob < 1:\n a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)\n \n # Cost function\n if lambd == 0:\n cost = compute_cost(a3, Y)\n else:\n cost = compute_cost_with_regularization(a3, Y, parameters, lambd)\n \n # Backward propagation.\n assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout, \n # but this assignment will only explore one at a time\n if lambd == 0 and keep_prob == 1:\n grads = backward_propagation(X, Y, cache)\n elif lambd != 0:\n grads = backward_propagation_with_regularization(X, Y, cache, lambd)\n elif keep_prob < 1:\n grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)\n \n # Update parameters.\n parameters = update_parameters(parameters, grads, learning_rate)\n \n # Print the loss every 10000 iterations\n if print_cost and i % 10000 == 0:\n print(\"Cost after iteration {}: {}\".format(i, cost))\n if print_cost and i % 1000 == 0:\n costs.append(cost)\n \n # plot the cost\n plt.plot(costs)\n plt.ylabel('cost')\n plt.xlabel('iterations (x1,000)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n \n return parameters",
"_____no_output_____"
]
],
[
[
"Let's train the model without any regularization, and observe the accuracy on the train/test sets.",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y)\nprint (\"On the training set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"Cost after iteration 0: 0.6557412523481002\nCost after iteration 10000: 0.16329987525724216\nCost after iteration 20000: 0.13851642423255986\n"
]
],
[
[
"The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model without regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting.",
"_____no_output_____"
],
[
"## 2 - L2 Regularization\n\nThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:\n$$J = -\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} \\tag{1}$$\nTo:\n$$J_{regularized} = \\small \\underbrace{-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} \\large{(}\\small y^{(i)}\\log\\left(a^{[L](i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right) \\large{)} }_\\text{cross-entropy cost} + \\underbrace{\\frac{1}{m} \\frac{\\lambda}{2} \\sum\\limits_l\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2} }_\\text{L2 regularization cost} \\tag{2}$$\n\nLet's modify your cost and observe the consequences.\n\n**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\\sum\\limits_k\\sum\\limits_j W_{k,j}^{[l]2}$ , use :\n```python\nnp.sum(np.square(Wl))\n```\nNote that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \\frac{1}{m} \\frac{\\lambda}{2} $.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: compute_cost_with_regularization\n\ndef compute_cost_with_regularization(A3, Y, parameters, lambd):\n \"\"\"\n Implement the cost function with L2 regularization. See formula (2) above.\n \n Arguments:\n A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n parameters -- python dictionary containing parameters of the model\n \n Returns:\n cost - value of the regularized loss function (formula (2))\n \"\"\"\n m = Y.shape[1]\n W1 = parameters[\"W1\"]\n W2 = parameters[\"W2\"]\n W3 = parameters[\"W3\"]\n \n cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost\n \n ### START CODE HERE ### (approx. 1 line)\n L2_regularization_cost = 1/m*lambd/2*(np.sum(np.square(W1))+np.sum(np.square(W2)) + np.sum(np.square(W3)))\n ### END CODER HERE ###\n \n cost = cross_entropy_cost + L2_regularization_cost\n \n return cost",
"_____no_output_____"
],
[
"A3, Y_assess, parameters = compute_cost_with_regularization_test_case()\n\nprint(\"cost = \" + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))",
"cost = 1.78648594516\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **cost**\n </td>\n <td>\n 1.78648594516\n </td>\n \n </tr>\n\n</table> ",
"_____no_output_____"
],
[
"Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. \n\n**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\\frac{d}{dW} ( \\frac{1}{2}\\frac{\\lambda}{m} W^2) = \\frac{\\lambda}{m} W$).",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: backward_propagation_with_regularization\n\ndef backward_propagation_with_regularization(X, Y, cache, lambd):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added an L2 regularization.\n \n Arguments:\n X -- input dataset, of shape (input size, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation()\n lambd -- regularization hyperparameter, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n \n ### START CODE HERE ### (approx. 1 line)\n dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m*W3\n ### END CODE HERE ###\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n \n dA2 = np.dot(W3.T, dZ3)\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m*W2\n ### END CODE HERE ###\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n ### START CODE HERE ### (approx. 1 line)\n dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m*W1\n ### END CODE HERE ###\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients",
"_____no_output_____"
],
[
"X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()\n\ngrads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)\nprint (\"dW1 = \"+ str(grads[\"dW1\"]))\nprint (\"dW2 = \"+ str(grads[\"dW2\"]))\nprint (\"dW3 = \"+ str(grads[\"dW3\"]))",
"dW1 = [[-0.25604646 0.12298827 -0.28297129]\n [-0.17706303 0.34536094 -0.4410571 ]]\ndW2 = [[ 0.79276486 0.85133918]\n [-0.0957219 -0.01720463]\n [-0.13100772 -0.03750433]]\ndW3 = [[-1.77691347 -0.11832879 -0.09397446]]\n"
]
],
[
[
"**Expected Output**:\n\n<table> \n <tr>\n <td>\n **dW1**\n </td>\n <td>\n [[-0.25604646 0.12298827 -0.28297129]\n [-0.17706303 0.34536094 -0.4410571 ]]\n </td>\n </tr>\n <tr>\n <td>\n **dW2**\n </td>\n <td>\n [[ 0.79276486 0.85133918]\n [-0.0957219 -0.01720463]\n [-0.13100772 -0.03750433]]\n </td>\n </tr>\n <tr>\n <td>\n **dW3**\n </td>\n <td>\n [[-1.77691347 -0.11832879 -0.09397446]]\n </td>\n </tr>\n</table> ",
"_____no_output_____"
],
[
"Let's now run the model with L2 regularization $(\\lambda = 0.7)$. The `model()` function will call: \n- `compute_cost_with_regularization` instead of `compute_cost`\n- `backward_propagation_with_regularization` instead of `backward_propagation`",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y, lambd = 0.7)\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"Cost after iteration 0: 0.6974484493131264\nCost after iteration 10000: 0.2684918873282239\nCost after iteration 20000: 0.2680916337127301\n"
]
],
[
[
"Congrats, the test set accuracy increased to 93%. You have saved the French football team!\n\nYou are not overfitting the training data anymore. Let's plot the decision boundary.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model with L2-regularization\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"**Observations**:\n- The value of $\\lambda$ is a hyperparameter that you can tune using a dev set.\n- L2 regularization makes your decision boundary smoother. If $\\lambda$ is too large, it is also possible to \"oversmooth\", resulting in a model with high bias.\n\n**What is L2-regularization actually doing?**:\n\nL2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. \n\n<font color='blue'>\n**What you should remember** -- the implications of L2-regularization on:\n- The cost computation:\n - A regularization term is added to the cost\n- The backpropagation function:\n - There are extra terms in the gradients with respect to weight matrices\n- Weights end up smaller (\"weight decay\"): \n - Weights are pushed to smaller values.",
"_____no_output_____"
],
[
"## 3 - Dropout\n\nFinally, **dropout** is a widely used regularization technique that is specific to deep learning. \n**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!\n\n<!--\nTo understand drop-out, consider this conversation with a friend:\n- Friend: \"Why do you need all these neurons to train your network and classify images?\". \n- You: \"Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!\"\n- Friend: \"I see, but are you sure that your neurons are learning different features and not all the same features?\"\n- You: \"Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution.\"\n!--> \n\n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout1_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n<br>\n<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\\_prob$ or keep it with probability $keep\\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>\n\n<center>\n<video width=\"620\" height=\"440\" src=\"images/dropout2_kiank.mp4\" type=\"video/mp4\" controls>\n</video>\n</center>\n\n<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>\n\n\nWhen you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. \n\n### 3.1 - Forward propagation with dropout\n\n**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. \n\n**Instructions**:\nYou would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:\n1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.\n2. Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.\n3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.\n4. Divide $A^{[1]}$ by `keep_prob`. 
By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: forward_propagation_with_dropout\n\ndef forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):\n \"\"\"\n Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\":\n W1 -- weight matrix of shape (20, 2)\n b1 -- bias vector of shape (20, 1)\n W2 -- weight matrix of shape (3, 20)\n b2 -- bias vector of shape (3, 1)\n W3 -- weight matrix of shape (1, 3)\n b3 -- bias vector of shape (1, 1)\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n A3 -- last activation value, output of the forward propagation, of shape (1,1)\n cache -- tuple, information stored for computing the backward propagation\n \"\"\"\n \n np.random.seed(1)\n \n # retrieve parameters\n W1 = parameters[\"W1\"]\n b1 = parameters[\"b1\"]\n W2 = parameters[\"W2\"]\n b2 = parameters[\"b2\"]\n W3 = parameters[\"W3\"]\n b3 = parameters[\"b3\"]\n \n # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID\n Z1 = np.dot(W1, X) + b1\n A1 = relu(Z1)\n ### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above. \n D1 = np.random.rand(*A1.shape) # Step 1: initialize matrix D1 = np.random.rand(..., ...)\n D1 = (D1 > keep_prob) # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)\n A1 = A1*D1 # Step 3: shut down some neurons of A1\n A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z2 = np.dot(W2, A1) + b2\n A2 = relu(Z2)\n ### START CODE HERE ### (approx. 4 lines)\n D2 = np.random.rand(*A2.shape) # Step 1: initialize matrix D2 = np.random.rand(..., ...)\n D2 = (D2 > keep_prob) # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)\n A2 = A2*D2 # Step 3: shut down some neurons of A2\n A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n Z3 = np.dot(W3, A2) + b3\n A3 = sigmoid(Z3)\n \n cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)\n \n return A3, cache",
"_____no_output_____"
],
[
"X_assess, parameters = forward_propagation_with_dropout_test_case()\n\nA3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)\nprint (\"A3 = \" + str(A3))",
"A3 = [[ 0.49683389 0.49683389 0.49683389 0.36974721 0.49683389]]\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **A3**\n </td>\n <td>\n [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]\n </td>\n \n </tr>\n\n</table> ",
"_____no_output_____"
],
[
"### 3.2 - Backward propagation with dropout\n\n**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. \n\n**Instruction**:\nBackpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:\n1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. \n2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: backward_propagation_with_dropout\n\ndef backward_propagation_with_dropout(X, Y, cache, keep_prob):\n \"\"\"\n Implements the backward propagation of our baseline model to which we added dropout.\n \n Arguments:\n X -- input dataset, of shape (2, number of examples)\n Y -- \"true\" labels vector, of shape (output size, number of examples)\n cache -- cache output from forward_propagation_with_dropout()\n keep_prob - probability of keeping a neuron active during drop-out, scalar\n \n Returns:\n gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables\n \"\"\"\n \n m = X.shape[1]\n (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache\n \n dZ3 = A3 - Y\n dW3 = 1./m * np.dot(dZ3, A2.T)\n db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)\n dA2 = np.dot(W3.T, dZ3)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation\n dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n dZ2 = np.multiply(dA2, np.int64(A2 > 0))\n dW2 = 1./m * np.dot(dZ2, A1.T)\n db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)\n \n dA1 = np.dot(W2.T, dZ2)\n ### START CODE HERE ### (≈ 2 lines of code)\n dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation\n dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down\n ### END CODE HERE ###\n dZ1 = np.multiply(dA1, np.int64(A1 > 0))\n dW1 = 1./m * np.dot(dZ1, X.T)\n db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)\n \n gradients = {\"dZ3\": dZ3, \"dW3\": dW3, \"db3\": db3,\"dA2\": dA2,\n \"dZ2\": dZ2, \"dW2\": dW2, \"db2\": db2, \"dA1\": dA1, \n \"dZ1\": dZ1, \"dW1\": dW1, \"db1\": db1}\n \n return gradients",
"_____no_output_____"
],
[
"X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()\n\ngradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)\n\nprint (\"dA1 = \" + str(gradients[\"dA1\"]))\nprint (\"dA2 = \" + str(gradients[\"dA2\"]))",
"dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748]\n [ 0.65515713 0. -0.00337459 0. -0. ]]\ndA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731]\n [ 0. 0.53159854 -0. 0.53159854 -0.34089673]\n [ 0. 0. -0.00292733 0. -0. ]]\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr>\n <td>\n **dA1**\n </td>\n <td>\n [[ 0.36544439 0. -0.00188233 0. -0.17408748]\n [ 0.65515713 0. -0.00337459 0. -0. ]]\n </td>\n \n </tr>\n <tr>\n <td>\n **dA2**\n </td>\n <td>\n [[ 0.58180856 0. -0.00299679 0. -0.27715731]\n [ 0. 0.53159854 -0. 0.53159854 -0.34089673]\n [ 0. 0. -0.00292733 0. -0. ]]\n </td>\n \n </tr>\n</table> ",
"_____no_output_____"
],
[
"Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function `model()` will now call:\n- `forward_propagation_with_dropout` instead of `forward_propagation`.\n- `backward_propagation_with_dropout` instead of `backward_propagation`.",
"_____no_output_____"
]
],
[
[
"parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)\n\nprint (\"On the train set:\")\npredictions_train = predict(train_X, train_Y, parameters)\nprint (\"On the test set:\")\npredictions_test = predict(test_X, test_Y, parameters)",
"_____no_output_____"
]
],
[
[
"Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! \n\nRun the code below to plot the decision boundary.",
"_____no_output_____"
]
],
[
[
"plt.title(\"Model with dropout\")\naxes = plt.gca()\naxes.set_xlim([-0.75,0.40])\naxes.set_ylim([-0.75,0.65])\nplot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)",
"_____no_output_____"
]
],
[
[
"**Note**:\n- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training. \n- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.\n\n<font color='blue'>\n**What you should remember about dropout:**\n- Dropout is a regularization technique.\n- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.\n- Apply dropout both during forward and backward propagation.\n- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5. ",
"_____no_output_____"
],
[
"## 4 - Conclusions",
"_____no_output_____"
],
[
"**Here are the results of our three models**: \n\n<table> \n <tr>\n <td>\n **model**\n </td>\n <td>\n **train accuracy**\n </td>\n <td>\n **test accuracy**\n </td>\n\n </tr>\n <td>\n 3-layer NN without regularization\n </td>\n <td>\n 95%\n </td>\n <td>\n 91.5%\n </td>\n <tr>\n <td>\n 3-layer NN with L2-regularization\n </td>\n <td>\n 94%\n </td>\n <td>\n 93%\n </td>\n </tr>\n <tr>\n <td>\n 3-layer NN with dropout\n </td>\n <td>\n 93%\n </td>\n <td>\n 95%\n </td>\n </tr>\n</table> ",
"_____no_output_____"
],
[
"Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system. ",
"_____no_output_____"
],
[
"Congratulations for finishing this assignment! And also for revolutionizing French football. :-) ",
"_____no_output_____"
],
[
"<font color='blue'>\n**What we want you to remember from this notebook**:\n- Regularization will help you reduce overfitting.\n- Regularization will drive your weights to lower values.\n- L2 regularization and Dropout are two very effective regularization techniques.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c6a33406730196d9c7b877273a336d0b30bd9e | 777,219 | ipynb | Jupyter Notebook | Simple scattering .ipynb | ivanzmilic/collage | bd1051c7fc5ad8a0d6929d50c06c66f51bb6b167 | [
"MIT"
] | null | null | null | Simple scattering .ipynb | ivanzmilic/collage | bd1051c7fc5ad8a0d6929d50c06c66f51bb6b167 | [
"MIT"
] | null | null | null | Simple scattering .ipynb | ivanzmilic/collage | bd1051c7fc5ad8a0d6929d50c06c66f51bb6b167 | [
"MIT"
] | 2 | 2021-02-02T22:53:46.000Z | 2021-03-15T21:50:17.000Z | 704.640979 | 331,192 | 0.948191 | [
[
[
"### Calculating intensity in an inclined direction $\\cos\\theta \\neq 1$ \n\nFor this first part we are going to use our good old FALC model and calculate intensity in direction other than $\\mu = 1$. This is also an essential part of scattering problems! \n\n### We will assume that we are dealing with continuum everywhere!",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt \nimport matplotlib\nmatplotlib.rcParams.update({\n \"text.usetex\": False,\n \"font.size\" : 16,\n \"font.family\": \"sans-serif\",\n \"font.sans-serif\": [\"Helvetica\"]})\n",
"_____no_output_____"
],
[
"# Let's load the data:\natmos = np.loadtxt(\"falc_71.dat\",unpack=True,skiprows=1)\natmos.shape",
"_____no_output_____"
],
[
"# 0th parameter - log optical depth in continuum (lambda = 500 nm)\n# 2nd parameter - Temperature\n\nplt.figure(figsize=[9,5])\nplt.plot(atmos[0],atmos[2])",
"_____no_output_____"
],
[
"llambda = 500E-9 # in nm\nk = 1.38E-23 # Boltzmann constant \nc = 3E8 # speed of light\nh_p = 6.626E-34\n\nT = np.copy(atmos[2])\nlogtau = np.copy(atmos[0])\ntau = 10.** logtau\n\n# We will separate now Planck function and the source function:\nB = 2*h_p*c**2.0 / llambda**5.0 / (np.exp(h_p*c/(llambda*k*T))-1)\nS = np.copy(B)\n\n",
"_____no_output_____"
],
[
"plt.figure(figsize=[9,5])\nplt.plot(logtau,S)",
"_____no_output_____"
]
],
[
[
"#### We want to use a very simple formal solution:\n\n### $$I = I_{inc} e^{-\\Delta \\tau} + S(1-e^{-\\Delta \\tau}) $$\n\n",
"_____no_output_____"
]
],
[
[
"# Let's define a function that is doing that:\n\ndef synthesis_whole(S,tau):\n \n ND = len(S)\n intensity = np.zeros(ND)\n \n # At the bottom, intensity is equal to the source function (similar to Blackbody)\n intensity[ND-1] = S[ND-1]\n \n for d in range(ND-2,-1,-1):\n # optical thickness of the layer:\n delta_tau = tau[d+1] - tau[d]\n # Mean source function:\n S_mean = (S[d+1] + S[d])*0.5\n \n intensity[d] = intensity[d+1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))\n \n return intensity",
"_____no_output_____"
],
[
"intensity = synthesis_whole(S,tau)",
"_____no_output_____"
],
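[
"# Optional sanity check (a sketch, not part of the original notebook): the Eddington-Barbier relation\n# suggests the emergent intensity should be roughly the source function around tau ~ 2/3,\n# so the ratio printed below should come out close to one\nidx = np.argmin(np.abs(tau - 2./3.))\nprint(\"I(emergent) / S(tau~2/3) =\", intensity[0]/S[idx])",
"_____no_output_____"
],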
[
"plt.figure(figsize=[9,5])\nplt.plot(logtau,S,label='Source Function')\nplt.plot(logtau,intensity,label='Intensity in the direction theta =0 ')\nplt.legend()\nplt.xlabel(\"$\\\\log\\\\tau$\")",
"_____no_output_____"
]
],
[
[
"### Discuss this 3-4 mins. \nWrite the conclusions here:\n\nIntensity is not the same as the source function. It is \"nonlocal\" (ok Ivan, we get it, you said million times!)\n\n\n### Now how about using a grid of $mu$ angles?\n\n### We are solving \n### $$\\mu \\frac{dI}{d\\tau} = I-S $$\n\n### $$\\frac{dI}{d\\tau / \\mu} = I-S $$\n\n### $\\mu$ goes from 0 to 1 \n\n",
"_____no_output_____"
]
],
[
[
"NM = 10\nND = len(S)\n\n\n# create mu grid\nmu = np.linspace(0.1,1.0,NM)\n\nintensity_grid = np.zeros([NM,ND])\n\n# Calculate the intensity in each direction:\nfor m in range(0,10):\n intensity_grid[m] = synthesis_whole(S,tau/mu[m])",
"_____no_output_____"
],
[
"plt.figure(figsize=[9,5])\nplt.plot(mu,intensity_grid[:,0])\nplt.ylabel(\"$I^+(\\mu)$\")\nplt.xlabel(\"$\\mu = \\cos\\\\theta$\")",
"_____no_output_____"
]
],
[
[
"### Limb darkening!\n\n",
"_____no_output_____"
],
[
"### Limb darkening explanation:\n\n",
"_____no_output_____"
],
[
"# Now the scattering problem! \n\n### Solve, iteratively:\n\n### $$S = \\epsilon B + (1-\\epsilon) J $$\n### $$ J = \\frac{1}{4\\pi} \\int_0^{\\pi} \\int_0^{2\\pi} I(\\theta,\\phi) \\sin \\theta d\\theta d\\phi = \\frac{1}{2} \\int_{-1}^{1} I(\\mu) d\\mu $$\nand for this second equation we need to formally solve: \n### $$ \\frac{dI}{d\\tau/\\mu} = I-S $$",
"_____no_output_____"
],
[
"### How to calculate J at the top of the atmosphere from the intensity we just calculated?",
"_____no_output_____"
],
[
"Would you all agree that, at the top:\n\n$$J = \\frac{1}{2} \\int_{-1}^{1} I(\\mu) d\\mu = \\frac{1}{2} \\int_{0}^{1} I(\\mu) d\\mu = \\frac{1}{2}\\sum_m (I_m + I_{m+1}) \\frac{\\mu_{m+1}-\\mu_{m}}{2}$$",
"_____no_output_____"
]
],
[
[
"# simplified trapezoidal rule:\n# Starting value:\nJ = 0.\nfor m in range(0,NM-1):\n J += (intensity_grid[m,0] + intensity_grid[m+1,0]) * (mu[m+1] - mu[m])/2.0\nJ *= 0.5",
"_____no_output_____"
],
[
"print (\"ratio of the mean intensity at the surface to the Planck function at the surface is: \", J/B[0])",
"ratio of the mean intensity at the surface to the Planck function at the surface is: 0.11910542541966643\n"
]
],
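  [
   [
    "# A quick cross-check (added note, not in the original lesson): the same trapezoidal\n# integral can be done with numpy's helper. It should agree with the manual loop above.\nJ_check = 0.5 * np.trapz(intensity_grid[:, 0], mu)\nprint(\"np.trapz agrees with the manual loop:\", np.isclose(J, J_check))",
    "_____no_output_____"
   ]
  ],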
[
[
"### Now the next step would be, to say: \n\n### $$S = \\epsilon B + \\epsilon J $$\n\nThere are, however, few hurdles before that: \n\n- We don't know $\\epsilon$. Let's assume there is a lot of scattering and set $\\epsilon=10^{-2}$\n- We don't know J everywhere, we only calculate at the top. This one is harder to fix. \n\nTo really calculate J everywhere properly we need to solve radiative transfer equation going inward. \n\n### Convince yourself that that is really what we have to do. \n\nHow do we solve the RTE inward? Identically to upward except we start from the top and go down. Let's sketch a scheme for that. ",
"_____no_output_____"
]
],
[
[
"# Old RTE solver looks like this. Renamed it to out so that we know it is the outgoing intensity\n\ndef synthesis_out(S,tau):\n \n ND = len(S)\n intensity = np.zeros(ND)\n \n # At the bottom, intensity is equal to the source function (similar to Blackbody)\n intensity[ND-1] = S[ND-1]\n \n for d in range(ND-2,-1,-1):\n # optical thickness of the layer:\n delta_tau = tau[d+1] - tau[d]\n # Mean source function:\n S_mean = (S[d+1] + S[d])*0.5\n \n intensity[d] = intensity[d+1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))\n \n return intensity\n\ndef synthesis_in(S,tau):\n \n ND = len(S)\n intensity = np.zeros(ND)\n \n # At the top, intensity is equal to ZERO\n intensity[0] = 0.0\n \n # The main difference now is that this \"previous\" or \"upwind\" point is the point before (d-1)\n \n for d in range(1,ND,1):\n # optical thickness of the layer:\n delta_tau = tau[d] - tau[d-1] # Note that I am using previous point now\n # Mean source function:\n S_mean = (S[d] + S[d-1])*0.5\n \n intensity[d] = intensity[d-1] * np.exp(-delta_tau) + S_mean *(1.-np.exp(-delta_tau))\n \n return intensity\n",
"_____no_output_____"
],
[
"# Now we can solve and we will have two different intensities, in and out one \n\nint_out = np.zeros([NM,ND])\nint_in = np.zeros([NM,ND])\n\n# We solve in multiple directions:\nfor m in range(0,10):\n int_out[m] = synthesis_out(S,tau/mu[m])\n int_in[m] = synthesis_in (S,tau/mu[m])",
"_____no_output_____"
],
[
"# It is interesting now to visualize the outgoing intensity, and, \n# for example, the intensity at the bottom. How about that:\n\nplt.figure(figsize=[9,5])\nplt.plot(mu,int_out[:,0],label='Outgoing intensity')\nplt.plot(mu,int_in[:,0], label='Ingoing intensity')\nplt.legend()\nplt.xlabel(\"$\\\\mu$\")\nplt.ylabel(\"Intensity\")\nplt.title(\"Intensity distribution at the top of the atmosphere\")",
"_____no_output_____"
],
[
"# Here is the bottom intensity\n# Don't be confused by the expression outgoing / ingoing. We mean \"at that point\"\n\nplt.figure(figsize=[9,5])\nplt.plot(mu,int_out[:,ND-1],label='Outgoing intensity')\nplt.plot(mu,int_in[:,ND-1], label='Ingoing intensity')\nplt.legend()\nplt.xlabel(\"$\\\\mu$\")\nplt.ylabel(\"Intensity\")\nplt.ylim([0,1.5E14])\nplt.title(\"Intensity distribution at the bottom of the atmosphere\")",
"_____no_output_____"
]
],
[
[
"### Take a moment here and try to think about these two things: \n\n- The intensity at the bottom is much more close to each other and much less strongly depends on $\\mu$. Would you call that situation \"isotropic\"?\n- Why is this so?",
"_____no_output_____"
]
],
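  [
   [
    "# A quick, hedged numerical check of the discussion above (added example, not in the\n# original notebook): compare how strongly I(mu) varies at the top and at the bottom.\naniso_top = (int_out[:, 0].max() - int_out[:, 0].min()) / int_out[:, 0].mean()\naniso_bottom = (int_out[:, ND-1].max() - int_out[:, ND-1].min()) / int_out[:, ND-1].mean()\nprint(\"relative spread of I(mu) at the top: \", aniso_top)\nprint(\"relative spread of I(mu) at the bottom:\", aniso_bottom)\n# The much smaller spread at the bottom is what 'nearly isotropic' means: many optical\n# depths below the surface the radiation field barely remembers the open boundary.",
    "_____no_output_____"
   ]
  ],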
[
[
"# Now we can calculate mean intensity similarly to above. We don't even have to use super strict \n# trapezoidal integration. Just add all the intensities and divide by 20\n\nJ = np.sum(int_in+int_out,axis=0)/20.",
"_____no_output_____"
],
[
"# Then define the epsilon\n\nepsilon = 1E-2\n\n# And then we can update the source function:\nS = epsilon * B + (1-epsilon) * J\n\n# And we can now plot B (Planck function, our \"old value for the source function\") vs our \"new\" \n# source function\n\nplt.figure(figsize=[9,5])\nplt.plot(logtau,B,label=\"Planck Function\")\nplt.plot(logtau,S,label=\"Source Function\")\nplt.xlabel(\"$\\\\log \\\\tau$\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"Now: This is new value of the source function! We should now recalculate the new intensity. \n\nBut wait, this new intensity now results in a new Source function. \n\nWhich then results in the new intensity....\n\nThis leads to an iterative process, known as \"Lambda\" iteration. We repeat the process until (a very slow) convergence. Let's try!",
"_____no_output_____"
]
],
[
[
"# I will define an additional function to make things clearer:\n\ndef update_S(int_in, int_out, epsilon, B):\n J = np.sum(int_in+int_out,axis=0)/20.\n S = epsilon * B + (1-epsilon) * J\n return S",
"_____no_output_____"
],
[
"# And then we devise an iterative scheme:\n\nmax_iter = 100\n\n# Start from the source function equal to the Planck function:\nS = np.copy(B)\n\nfor i in range(0,max_iter):\n \n # solve RTE:\n for m in range(0,10):\n int_out[m] = synthesis_out(S,tau/mu[m])\n int_in[m] = synthesis_in (S,tau/mu[m])\n \n # update the source function:\n \n S_new = update_S(int_in,int_out,epsilon,B)\n \n # find where did the source function change the most \n \n relative_change = np.amax(np.abs((S_new-S)/S))\n print(\"Relative change in this iteration is: \",relative_change)\n \n # And assign new source function to the old one:\n S = np.copy(S_new)",
"Relative change in this iteration is: 1.3314924320494517\nRelative change in this iteration is: 0.04805746551383416\nRelative change in this iteration is: 0.022455016014927415\nRelative change in this iteration is: 0.019691776769293193\nRelative change in this iteration is: 0.01776157724309405\nRelative change in this iteration is: 0.01652685527990853\nRelative change in this iteration is: 0.015619266117131922\nRelative change in this iteration is: 0.01503514328248859\nRelative change in this iteration is: 0.014481956406480876\nRelative change in this iteration is: 0.013891749317768114\nRelative change in this iteration is: 0.01331940145864249\nRelative change in this iteration is: 0.012787999559594275\nRelative change in this iteration is: 0.01230499264377662\nRelative change in this iteration is: 0.011870217784521643\nRelative change in this iteration is: 0.01148002673728799\nRelative change in this iteration is: 0.011129430889475208\nRelative change in this iteration is: 0.010813199708225122\nRelative change in this iteration is: 0.01052639525000082\nRelative change in this iteration is: 0.010264601941141491\nRelative change in this iteration is: 0.010023994398619377\nRelative change in this iteration is: 0.00980132281604571\nRelative change in this iteration is: 0.009593860119202186\nRelative change in this iteration is: 0.009399335051463401\nRelative change in this iteration is: 0.00921586391391034\nRelative change in this iteration is: 0.009041887192909583\nRelative change in this iteration is: 0.00887611368209911\nRelative change in this iteration is: 0.008717472744430008\nRelative change in this iteration is: 0.008565074358955678\nRelative change in this iteration is: 0.008418176141613611\nRelative change in this iteration is: 0.008276156375340717\nRelative change in this iteration is: 0.008138492089803452\nRelative change in this iteration is: 0.008004741312654376\nRelative change in this iteration is: 0.00787452872644683\nRelative change in this iteration is: 0.007747534083037639\nRelative change in this iteration is: 0.007623482838041036\nRelative change in this iteration is: 0.007502138565932556\nRelative change in this iteration is: 0.007383296800261419\nRelative change in this iteration is: 0.007266780013345936\nRelative change in this iteration is: 0.007152433507201133\nRelative change in this iteration is: 0.007040122033960388\nRelative change in this iteration is: 0.00692972700144494\nRelative change in this iteration is: 0.0068211441493597315\nRelative change in this iteration is: 0.006714281605311472\nRelative change in this iteration is: 0.00660905824862608\nRelative change in this iteration is: 0.006505402324770845\nRelative change in this iteration is: 0.006403250264879413\nRelative change in this iteration is: 0.006302545674097195\nRelative change in this iteration is: 0.0062032384597361856\nRelative change in this iteration is: 0.006105284075931407\nRelative change in this iteration is: 0.00600864286603329\nRelative change in this iteration is: 0.005913279487516914\nRelative change in this iteration is: 0.005819162407019697\nRelative change in this iteration is: 0.005726263455373415\nRelative change in this iteration is: 0.005634557434262397\nRelative change in this iteration is: 0.005544021767576933\nRelative change in this iteration is: 0.005454636191684033\nRelative change in this iteration is: 0.005366382479733791\nRelative change in this iteration is: 0.005279244195900238\nRelative change in this iteration is: 0.00519320647604306\nRelative change in this 
iteration is: 0.005108255831772064\nRelative change in this iteration is: 0.005024492860981598\nRelative change in this iteration is: 0.004942873327425288\nRelative change in this iteration is: 0.004862175341229857\nRelative change in this iteration is: 0.004782402081509607\nRelative change in this iteration is: 0.004703556179800586\nRelative change in this iteration is: 0.004625639736273327\nRelative change in this iteration is: 0.0045486543355527185\nRelative change in this iteration is: 0.004472601062211104\nRelative change in this iteration is: 0.004397480515958641\nRelative change in this iteration is: 0.004323292826578827\nRelative change in this iteration is: 0.00425003766862223\nRelative change in this iteration is: 0.004177714275888769\nRelative change in this iteration is: 0.004106321455710899\nRelative change in this iteration is: 0.004035857603046488\nRelative change in this iteration is: 0.003966320714399837\nRelative change in this iteration is: 0.0038977084015627103\nRelative change in this iteration is: 0.0038300179051948863\nRelative change in this iteration is: 0.0037632461082352433\nRelative change in this iteration is: 0.0036973895491447376\nRelative change in this iteration is: 0.003632444434989971\nRelative change in this iteration is: 0.0035684066543515917\nRelative change in this iteration is: 0.00350527179006848\nRelative change in this iteration is: 0.003443035131802984\nRelative change in this iteration is: 0.0033816916884323324\nRelative change in this iteration is: 0.0033212362002559273\nRelative change in this iteration is: 0.0032616631510193872\nRelative change in this iteration is: 0.003202966779748377\nRelative change in this iteration is: 0.0031451410923833438\nRelative change in this iteration is: 0.0030881798732229665\nRelative change in this iteration is: 0.003032076696157362\nRelative change in this iteration is: 0.0029768249356989457\nRelative change in this iteration is: 0.0029224177777978856\nRelative change in this iteration is: 0.0028688482304447885\nRelative change in this iteration is: 0.0028161091340556743\nRelative change in this iteration is: 0.002764193171632812\nRelative change in this iteration is: 0.0027130928787004144\nRelative change in this iteration is: 0.0026628006530184352\nRelative change in this iteration is: 0.0026133087640589857\nRelative change in this iteration is: 0.002564609362258417\nRelative change in this iteration is: 0.0025166944880325817\n"
]
],
[
[
"### Spend some time figuring this out. This is a common technique for solving various kinds of coupled equations in astrophysics! ",
"_____no_output_____"
],
[
"To finish the story let's visualize the final solution of this, and then do one more example - constant temperature!",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=[9,5])\nplt.plot(logtau,B,label=\"Planck Function\")\nplt.plot(logtau,S,label=\"Source Function\")\nplt.xlabel(\"$\\\\log \\\\tau$\")\nplt.title(\"Scattering Source function\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### It drops very very very low, even in great depths! This is exactly not the greatest example as in the great depths the epsilon will never be so small. Epsilon is generally depth dependent.",
"_____no_output_____"
],
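   [
    "# A minimal sketch (my addition, with purely illustrative numbers) of a depth-dependent\n# epsilon: scattering dominated near the surface, approaching 1 (LTE) at great depth.\nepsilon_depth = 1E-2 + (1.0 - 1E-2) * tau / (tau + 1E2)\nplt.figure(figsize=[9,5])\nplt.semilogy(logtau, epsilon_depth)\nplt.xlabel(\"log tau\")\nplt.ylabel(\"epsilon\")",
    "_____no_output_____"
   ],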
[
"## Example - Scattering in an isothermal atmosphere",
"_____no_output_____"
]
],
[
[
"## We will do all the same as before except we will now use a different tau grid, to make atmosphere\n# much deeper. And we will use B = const, to reproduce some plots from the literature:\n\nlogtau = np.linspace(-4,4,81)\ntau = 10.**logtau\nND = len(logtau)\nB = np.zeros(ND)\nB[:] = 1.0 # Units do not matter, we can say everything is in units of B\nS = np.copy(B)\n\nmax_iter = 100\n\n# Start from the source function equal to the Planck function:\nint_out = np.zeros([NM,ND])\nint_in = np.zeros([NM,ND])\n\nfor i in range(0,max_iter):\n \n # solve RTE:\n for m in range(0,10):\n int_out[m] = synthesis_out(S,tau/mu[m])\n int_in[m] = synthesis_in (S,tau/mu[m])\n \n # update the source function:\n \n S_new = update_S(int_in,int_out,epsilon,B)\n \n # find where did the source function change the most \n \n relative_change = np.amax(np.abs((S_new-S)/S))\n print(\"Relative change in this iteration is: \",relative_change)\n \n # And assign new source function to the old one:\n S = np.copy(S_new)",
"Relative change in this iteration is: 0.495\nRelative change in this iteration is: 0.24554435193688087\nRelative change in this iteration is: 0.16165180317576575\nRelative change in this iteration is: 0.11907785184516301\nRelative change in this iteration is: 0.0934605504492721\nRelative change in this iteration is: 0.07639398818170434\nRelative change in this iteration is: 0.06426231766991904\nRelative change in this iteration is: 0.055184567422810145\nRelative change in this iteration is: 0.04816100504989594\nRelative change in this iteration is: 0.042561204227476675\nRelative change in this iteration is: 0.03800930941085114\nRelative change in this iteration is: 0.03422423011216586\nRelative change in this iteration is: 0.03104905444678713\nRelative change in this iteration is: 0.02833596506538578\nRelative change in this iteration is: 0.02599331859923862\nRelative change in this iteration is: 0.023963907384147783\nRelative change in this iteration is: 0.022177874061694313\nRelative change in this iteration is: 0.020595244174362176\nRelative change in this iteration is: 0.019196061582771876\nRelative change in this iteration is: 0.017941215852293142\nRelative change in this iteration is: 0.016809536981195885\nRelative change in this iteration is: 0.015786824465279008\nRelative change in this iteration is: 0.01486429969338762\nRelative change in this iteration is: 0.014021310076619498\nRelative change in this iteration is: 0.013248506341107262\nRelative change in this iteration is: 0.012537929485077428\nRelative change in this iteration is: 0.011889703114935922\nRelative change in this iteration is: 0.01129098840247929\nRelative change in this iteration is: 0.010735588499547501\nRelative change in this iteration is: 0.01021926561681866\nRelative change in this iteration is: 0.009738307223026725\nRelative change in this iteration is: 0.009293802209125204\nRelative change in this iteration is: 0.008879896632323993\nRelative change in this iteration is: 0.00849182762994324\nRelative change in this iteration is: 0.008127436753987761\nRelative change in this iteration is: 0.007784793644885518\nRelative change in this iteration is: 0.007462166986852504\nRelative change in this iteration is: 0.007160330312042671\nRelative change in this iteration is: 0.0068778630567019215\nRelative change in this iteration is: 0.006610602980846975\nRelative change in this iteration is: 0.0063574691294311335\nRelative change in this iteration is: 0.006117477330138822\nRelative change in this iteration is: 0.005889729748688644\nRelative change in this iteration is: 0.005673405755313838\nRelative change in this iteration is: 0.005467753916149217\nRelative change in this iteration is: 0.005275659026641683\nRelative change in this iteration is: 0.0050926849261230785\nRelative change in this iteration is: 0.004918112588820311\nRelative change in this iteration is: 0.004751441738192838\nRelative change in this iteration is: 0.004592209332153507\nRelative change in this iteration is: 0.00443998622898464\nRelative change in this iteration is: 0.004294374200710125\nRelative change in this iteration is: 0.004155003252898637\nRelative change in this iteration is: 0.00402152921526048\nRelative change in this iteration is: 0.0038959803506103366\nRelative change in this iteration is: 0.003775872549660567\nRelative change in this iteration is: 0.00366052438072716\nRelative change in this iteration is: 0.0035496961347309857\nRelative change in this iteration is: 0.0034431630263746407\nRelative change in this iteration is: 
0.003340714082175376\nRelative change in this iteration is: 0.0032421511246688273\nRelative change in this iteration is: 0.003147287843391393\nRelative change in this iteration is: 0.0030559489442496994\nRelative change in this iteration is: 0.0029679693697882105\nRelative change in this iteration is: 0.0028831935836545496\nRelative change in this iteration is: 0.0028030048234133666\nRelative change in this iteration is: 0.002726003900987511\nRelative change in this iteration is: 0.002651641287642232\nRelative change in this iteration is: 0.0025798047075349193\nRelative change in this iteration is: 0.002510387703436588\nRelative change in this iteration is: 0.002443289279160177\nRelative change in this iteration is: 0.0023784135673349653\nRelative change in this iteration is: 0.0023156695205066883\nRelative change in this iteration is: 0.0022549706237164936\nRelative change in this iteration is: 0.002196234626894639\nRelative change in this iteration is: 0.002139383295506079\nRelative change in this iteration is: 0.0020843421780647802\nRelative change in this iteration is: 0.002031040389218473\nRelative change in this iteration is: 0.0019798898999426476\nRelative change in this iteration is: 0.0019311271517398524\nRelative change in this iteration is: 0.0018838188385035053\nRelative change in this iteration is: 0.0018379123965208055\nRelative change in this iteration is: 0.0017933575326783595\nRelative change in this iteration is: 0.00175010610978349\nRelative change in this iteration is: 0.0017081120384859977\nRelative change in this iteration is: 0.001667331175375418\nRelative change in this iteration is: 0.0016277212268620467\nRelative change in this iteration is: 0.0015892416584840187\nRelative change in this iteration is: 0.0015518536092905034\nRelative change in this iteration is: 0.0015155198109928576\nRelative change in this iteration is: 0.0014802045115813564\nRelative change in this iteration is: 0.0014458734031368478\nRelative change in this iteration is: 0.0014124935535775338\nRelative change in this iteration is: 0.0013800333421006821\nRelative change in this iteration is: 0.0013486081163542522\nRelative change in this iteration is: 0.0013187139817326114\nRelative change in this iteration is: 0.0012895995643111046\nRelative change in this iteration is: 0.0012612408560099988\nRelative change in this iteration is: 0.0012336147131749675\nRelative change in this iteration is: 0.001206698820871928\n"
]
],
[
[
"### We have reached some sort of convergence. (Relative change of roughly 1E-3).\n\nLet's have a look at our source function:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=[9,5])\nplt.plot(logtau,B,label=\"Planck Function\")\nplt.plot(logtau,S,label=\"Source Function\")\nplt.xlabel(\"$\\\\log \\\\tau$\")\nplt.title(\"Scattering (NLTE) Source function\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### This is a famous plot! What this tells us that even in the isothermal atmosphere the source function, in presence of scattering, drops below the Planck function. Extremely important effect for formation of strong chromospheric spectral lines. \n\n#### Note that this was a continuum example. The line situation is a tad more complicated and will involve an additional integration over the wavelengths. Numerically, results are different, but the spirit is the same! ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6ab26e2069296c77ab64deaeb98b29071081e | 8,934 | ipynb | Jupyter Notebook | exercises/Jupyter/working-with-code-cells.ipynb | rogalskim/udacity-ai-with-python | 7e50c4ff496b3cfa34df505d9d2dc87a8481dccb | [
"MIT"
] | null | null | null | exercises/Jupyter/working-with-code-cells.ipynb | rogalskim/udacity-ai-with-python | 7e50c4ff496b3cfa34df505d9d2dc87a8481dccb | [
"MIT"
] | null | null | null | exercises/Jupyter/working-with-code-cells.ipynb | rogalskim/udacity-ai-with-python | 7e50c4ff496b3cfa34df505d9d2dc87a8481dccb | [
"MIT"
] | null | null | null | 26.353982 | 423 | 0.573315 | [
[
[
"# Working with code cells\n\nIn this notebook you'll get some experience working with code cells.\n\nFirst, run the cell below. As I mentioned before, you can run the cell by selecting it the click the \"run cell\" button above. However, it's easier to run it by pressing **Shift + Enter** so you don't have to take your hands away from the keyboard.",
"_____no_output_____"
]
],
[
[
"# Select the cell, then press Shift + Enter\n3**2",
"_____no_output_____"
]
],
[
[
"Shift + Enter runs the cell then selects the next cell or creates a new one if necessary. You can run a cell without changing the selected cell by pressing **Control + Enter**.\n\nThe output shows up below the cell. It's printing out the result just like in a normal Python shell. Only the very last result in a cell will be printed though. Otherwise, you'll need to use `print()` print out any variables. \n\n> **Exercise:** Run the next two cells to test this out. Think about what you expect to happen, then try it.",
"_____no_output_____"
]
],
[
[
"3**2\n4**2",
"_____no_output_____"
],
[
"print(3**2)\n4**2",
"9\n"
]
],
[
[
"Now try assigning a value to a variable.",
"_____no_output_____"
]
],
[
[
"import string\n\nmindset = 'growth'\ncodes = [string.ascii_letters.index(char) for char in mindset]",
"_____no_output_____"
]
],
[
[
"There is no output, `'growth'` has been assigned to the variable `mindset`. All variables, functions, and classes created in a cell are available in every other cell in the notebook.\n\nWhat do you think the output will be when you run the next cell? Feel free to play around with this a bit to get used to how it works.",
"_____no_output_____"
]
],
[
[
"print(mindset[:4])\nprint(codes)",
"grow\n[6, 17, 14, 22, 19, 7]\n"
]
],
[
[
"## Code completion\n\nWhen you're writing code, you'll often be using a variable or function often and can save time by using code completion. That is, you only need to type part of the name, then press **tab**.\n\n> **Exercise:** Place the cursor at the end of `mind` in the next cell and press **tab**",
"_____no_output_____"
]
],
[
[
"mindset",
"_____no_output_____"
]
],
[
[
"Here, completing `mind` writes out the full variable name `mindset`. If there are multiple names that start the same, you'll get a menu, see below.",
"_____no_output_____"
]
],
[
[
"# Run this cell\nmindful = True",
"_____no_output_____"
],
[
"# Complete the name here again, choose one from the menu\nmindful\n",
"_____no_output_____"
]
],
[
[
"Remember that variables assigned in one cell are available in all cells. This includes cells that you've previously run and cells that are above where the variable was assigned. Try doing the code completion on the cell third up from here.\n\nCode completion also comes in handy if you're using a module but don't quite remember which function you're looking for or what the available functions are. I'll show you how this works with the [random](https://docs.python.org/3/library/random.html) module. This module provides functions for generating random numbers, often useful for making fake data or picking random items from lists.",
"_____no_output_____"
]
],
[
[
"# Run this\nimport random",
"_____no_output_____"
]
],
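  [
   [
    "# A short illustration (added here, outside the original exercises) of the uses\n# mentioned above: picking random items from lists and making simple fake data.\ncolors = ['red', 'green', 'blue']\nprint(random.choice(colors)) # one random item\nprint(random.sample(colors, 2)) # two distinct random items\nprint(random.randint(1, 6)) # a fake die roll",
    "_____no_output_____"
   ]
  ],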
[
[
"> **Exercise:** In the cell below, place the cursor after `random.` then press **tab** to bring up the code completion menu for the module. Choose `random.randint` from the list, you can move through the menu with the up and down arrow keys.",
"_____no_output_____"
]
],
[
[
"random.randint",
"_____no_output_____"
]
],
[
[
"Above you should have seen all the functions available from the random module. Maybe you're looking to draw random numbers from a [Gaussian distribution](https://en.wikipedia.org/wiki/Normal_distribution), also known as the normal distribution or the \"bell curve\". \n\n## Tooltips\n\nYou see there is the function `random.gauss` but how do you use it? You could check out the [documentation](https://docs.python.org/3/library/random.html), or just look up the documentation in the notebook itself.\n\n> **Exercise:** In the cell below, place the cursor after `random.gauss` the press **shift + tab** to bring up the tooltip.",
"_____no_output_____"
]
],
[
[
"random.gauss",
"_____no_output_____"
]
],
[
[
"You should have seen some simple documentation like this:\n\n Signature: random.gauss(mu, sigma)\n Docstring:\n Gaussian distribution.\n \nThe function takes two arguments, `mu` and `sigma`. These are the standard symbols for the mean and the standard deviation, respectively, of the Gaussian distribution. Maybe you're not familiar with this though, and you need to know what the parameters actually mean. This will happen often, you'll find some function, but you need more information. You can show more information by pressing **shift + tab** twice.\n\n> **Exercise:** In the cell below, show the full help documentation by pressing **shift + tab** twice.",
"_____no_output_____"
]
],
[
[
"random.gauss",
"_____no_output_____"
]
],
[
[
"You should see more help text like this:\n\n mu is the mean, and sigma is the standard deviation. This is\n slightly faster than the normalvariate() function.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6ace60190d333847d8d47fde6f506b7caeceb | 14,629 | ipynb | Jupyter Notebook | notebooks/Softmax Regression.ipynb | yhswjtuILMARE/Machine-Learning-Study-Notes | fab6178303f3f1d3475df5736cfc70f3062e7514 | [
"Apache-2.0"
] | 45 | 2018-03-28T04:00:58.000Z | 2021-11-10T13:25:21.000Z | python/noteBooks/Softmax Regression.ipynb | alenwuyx/Machine-Learning-Study-Notes | f1ad9194f8c9efeb11885174bdb9daef9ed8353a | [
"Apache-2.0"
] | 6 | 2020-01-28T22:44:16.000Z | 2022-02-10T00:15:17.000Z | python/noteBooks/Softmax Regression.ipynb | alenwuyx/Machine-Learning-Study-Notes | f1ad9194f8c9efeb11885174bdb9daef9ed8353a | [
"Apache-2.0"
] | 18 | 2018-09-04T14:42:16.000Z | 2021-09-09T02:02:37.000Z | 59.226721 | 6,236 | 0.76779 | [
[
[
"# 基于Tensorflow的softmax回归",
"_____no_output_____"
],
[
"Tensorflow是近年来非常非常流行的一个分布式的机器学习框架,之前一直想学习但是一直被各种各样的事情耽搁着。这学期恰好选了“人工神经网络”这门课,不得不接触这个框架了。最开始依照书上的教程通过Anaconda来配置环境,安装tensorflow。结果tensorflow是安装好了但是用起来是真麻烦。最后卸载了Anaconda在裸机上用`pip install tensorflow`来安装,可是裸机上的python是3.6.3版本的,似乎不支持tensorflow,于是在电脑上安装了另一个版本的python才算解决了这个问题,哎!说多了都是泪。言归正传,现在通过一个softmax实现手写字母识别的例子来正式进入tensorflow学习之旅。\n\nsoftmax是一个非常常见的函数处理方式,它允许我们将模型的输出归一化并且以概率的形式输出,是非常有用的一种处理方式。具体的内容可以参见这个知乎问题[Softmax 函数的特点和作用是什么?](https://www.zhihu.com/question/23765351)\n\n## 数据集\n\n本例采用的是`mnist`数据集,它在机器学习领域非常有名。首先我们来认识一下这个数据集,tensorflow能够自动下载并使用这个数据集。获取到数据集之后首先查看一下训练集的大小,由于这次softmax回归使用的是mnist中的手写图片作为训练集,因此为了直观地了解一下数据集还需要查看其中的一些手写图片,在这里就用到了matplotlib这个框架来绘图。",
"_____no_output_____"
]
],
[
[
"from tensorflow.examples.tutorials.mnist import input_data\nimport os\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"]='3'#禁止输出警告信息\n#加载mnist数据集,one_hot设定为True是使用向量的形式编码数据类别,这主要是考虑到使用softmax\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True)\nprint(\"训练集数据的大小为:{0}\".format(mnist.train.images.shape))\nfig = plt.figure(\"数据展示\")\nfor k in range(3):\n result = []\n temp = []\n img = mnist.train.images[k]#获得第一幅图片,是一个28*28的图片展开成的784维的向量,\n for i in range(img.shape[0]):\n temp.append(img[i])\n if (i + 1) % 28 == 0:\n result.append(temp)\n temp = []\n img = np.matrix(result, dtype=np.float)#获取第一幅图片的矩阵形式\n ax = fig.add_subplot(130 + k + 1)\n ax.imshow(img)\nplt.show()",
"Extracting MNIST_data/train-images-idx3-ubyte.gz\nExtracting MNIST_data/train-labels-idx1-ubyte.gz\nExtracting MNIST_data/t10k-images-idx3-ubyte.gz\nExtracting MNIST_data/t10k-labels-idx1-ubyte.gz\n训练集数据的大小为:(55000, 784)\n"
]
],
[
[
"从上面代码的输出可以看到:该数据集的训练集的规模是55000x784。数据集中每一个行向量代表着一个28x28图片的一维展开,虽然说在图片识别中像素点的位置页蕴含着非常大的信息,但是在这里就不在意那么多了,仅仅将其一维展开就可以。笔者用mat将数据集中的前三个图像画了出来展示在上面,接下来就要用到softmax回归的方法实现一个基本的首写字母识别。\n\n## softmax回归\n\n在这里简要介绍一下softmax回归的相关知识。softmax回归也是线性回归的一种,只不过是在输出层使用了softmax函数进行处理。具体的训练方法上也是使用了经典的随机梯度下降法训练。其判别函数有如下形式:\n\n$$f(x)=wx+b$$\n\n**注意:softmax回归可以用于处理多分类问题,因此上式中的所有变量都是矩阵或向量形式。**\n\n模型的输出f(x)还并不是最终的输出,还需要用softmax函数进行处理,softmax函数的形式如下所示:\n\n$$softmax(x)=\\frac{exp(x_{i})}{\\sum_{j}^{n}exp(x_{j})}$$\n\n这样处理后的模型输出可以表达输入数据在各个标签分类上的概率表达,此外使用softmax函数还有着其他很多的好处,最主要的还是在损失函数上的便利。此处不一一列举。\n\nsoftmax回归的损失函数采用信息熵的形式给出:\n\n$$H_{y^{'}}(y)=-\\sum y_{i}^{'}ln(y_{i})$$\n\n最后,笔者想在softmax函数下推导上述损失函数的随机梯度下降法的迭代公式,虽然tensorflow为我们做了这件事,但是作为算法编写者的我们依然有必要了解这其中的细节。首先,我们需要得到损失函数关于`w`的梯度:\n\n$$\\frac{\\partial H_{y^{'}}(y)}{\\partial w}=\\frac{\\partial -\\sum y_{i}^{'}ln(y_{i})}{\\partial w}=\\frac{\\partial -yln(softmax(f(xw + b))}{\\partial w}$$\n\n该求导比较复杂,采用链式求导法:\n\n$$\\frac{\\partial Loss}{\\partial w}=\\frac{\\partial Loss}{\\partial softmax(xw + b)}\\frac{\\partial softmax(xw + b)}{\\partial xw + b}\\frac{\\partial xw + b}{\\partial w}$$\n\n上述链式求导就比较简单,第一项和最后一项的求导都很容易得到,关键是第二项的求导。在这里我们直接给出softmax函数的求导公式。\n\n$$\\frac{\\partial softmax(xw + b)}{\\partial xw + b}=softmax(xw + b)(1 - softmax(xw + b))$$\n\n又由于上述第一项和第二项的求导为:\n\n$$\\frac{\\partial Loss}{\\partial softmax(xw + b)}=\\frac{\\partial -yln(softmax(f(xw + b))}{\\partial softmax(xw + b)}=\\frac{-y}{softmax(xw + b)}$$\n\n$$\\frac{\\partial xw + b}{\\partial w}=x$$\n\n因此:\n\n$$\\frac{\\partial -yln(softmax(f(xw + b))}{\\partial w}=y(softmax(xw + b) - 1)x$$\n\n接下来就可以使用随机梯度下降法的迭代公式来迭代求解`w`:\n\n$$w=w+\\alpha \\frac{\\partial -yln(softmax(f(xw + b))}{\\partial w}$$\n\n## tensroflow实现softmax回归\n\n首先我们定义模型中的各个参数,其中x和真实的标签值y_不设定死,使用`placeholder`占据计算图的一个节点。代码如下:",
"_____no_output_____"
]
],
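  [
   [
    "# A small NumPy sanity check of the gradient derived above (my addition; the original\n# notebook goes straight to TensorFlow). For a one-hot label y, the derivation says\n# d(-y * log softmax(z)) / dz = softmax(z) - y. We verify this numerically.\nimport numpy as np\n\nz_demo = np.array([0.5, -1.2, 2.0]) # some arbitrary scores\ny_onehot = np.array([0.0, 0.0, 1.0]) # one-hot label, true class index 2\n\ndef softmax_np(v):\n e = np.exp(v - v.max()) # shift for numerical stability\n return e / e.sum()\n\nanalytic = softmax_np(z_demo) - y_onehot # gradient from the derivation\n\n# numerical gradient by central differences\neps = 1e-6\nnumeric = np.zeros_like(z_demo)\nfor i in range(len(z_demo)):\n zp, zm = z_demo.copy(), z_demo.copy()\n zp[i] += eps\n zm[i] -= eps\n numeric[i] = (-np.log(softmax_np(zp)[2]) + np.log(softmax_np(zm)[2])) / (2 * eps)\n\nprint(analytic)\nprint(numeric) # the two should agree to ~1e-6",
    "_____no_output_____"
   ]
  ],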
[
[
"import tensorflow as tf\n\nsession = tf.InteractiveSession()#定义一个交互式会话\nx = tf.placeholder(tf.float32, [None, 784])\nw = tf.Variable(tf.zeros([784, 10]))#权重w初始化为0\nb = tf.Variable(tf.zeros([1, 10]))#bias初始化为0\ny = tf.nn.softmax(tf.matmul(x, w) + b)\ny_ = tf.placeholder(tf.float32, [None, 10])",
"_____no_output_____"
]
],
[
[
"设定其损失函数`cross_entry`,规定优化目标,初始化全局参数:",
"_____no_output_____"
]
],
[
[
"cross_entry = -tf.reduce_mean(tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))\ntrain_set = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entry)\ntf.global_variables_initializer().run()",
"_____no_output_____"
]
],
[
[
"准备工作都差不多做完了,接下来应该进行模型的训练。在本例中迭代一千次,每次从训练集中随机选取100组数据训练模型。",
"_____no_output_____"
]
],
[
[
"for i in range(1000):\n trainSet, trainLabel = mnist.train.next_batch(100)\n train_set.run(feed_dict={x: trainSet, y_: trainLabel})",
"_____no_output_____"
]
],
[
[
"以上,我们已经完成了模型的训练,现在应该检测一下模型的效果。我们使用mnist的测试数据来测试模型的分类效果:",
"_____no_output_____"
]
],
[
[
"accuracy = tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)), dtype=tf.float32)\naccuracy = tf.reduce_mean(accuracy)\nprint(\"模型的分类正确率为{0:.3f}\".format(session.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))",
"模型的分类正确率为0.917\n"
]
],
[
[
"由上面代码的输出可以看到,该例程的分类准确率还是非常高的,达到了接近92%。这篇入门介绍中用到的softmax回归其实本质上可以看作是一个无隐层的神经网络模型,只拥有一个输入层和一个输出层,二者中间使用了简单的线性方法连接。在实际的手写图片识别中可能并不会用到这样的线性模型,更多的是使用卷积神经网络(CNN)。\n\n## 后记\n\n这篇文章算是笔者tensorflow入门的一个小应用。说一下我对神经网络,机器学习和tensorflow的看法吧。最近这几年最热的名词可能就是人工智能,深度学习了。笔者也未能免俗,随着这股洪流加入了浩浩荡荡的机器学习大军。从最开始最简单的随机梯度下降法,线性回归到后来有些难的序列最小优化算法,支持向量机。一步步走来发现这个计算机学科的分枝还是非常有意思的,看似非常严谨,枯燥却又十分优美的数学模型竟然能够表现出一丝丝“智能”,有时候真的会惊艳到我。\n\n从2006年起,随着计算机计算能力的快速提升,曾经被冷落的神经网络又开始热了起来。对于神经网络,个人对它的未来不是很确定,一则是因为神经网络在历史上的命运可谓是大起大落,曾经的感知机,BP都是炙手可热,却又都草草收场,谁知道这一次的AI火热是不是能持续下去,也许碰到某一个天花板就又归于沉积了呢?说到底目前的AI行业所研究的“弱人工智能”距离真正的AI还是相差甚远。都已经说不清这到底是人类技术的问题,还是哲学的问题。二则是,目前神经网络模型的可解释性太差,相较于SVM这样的模型,人们似乎说不出为什么神经网络能够表现出如此强大的分类能力。三则是,人类在AI行业的探索上总是在刻意模仿自己的大脑,在神经网络的设计中融入了很多人脑中的机制,但这真的是一条正确的路吗?人类飞上蓝天不是靠着挥舞的翅膀而是飞机的机翼。\n\nTensorflow是Google公司推出的一款非常流行的机器学习框架,目前看来已经占据了机器学习框架的绝对霸主地位。对于这些机器学习框架我个人的感觉是不能脱离它们自己闭门造车,但也不能过度依赖。之前笔者是排斥一切框架的,很多经典的机器学习算法都是自己编写。然而到了神经网络这一块,自己再手动编程的代价太大了,于是就不得不入了框架的坑。我的观点是:对于一个算法一定要自己将其弄懂,其中的数学推导搞清楚再看代码,用框架。切忌将模型作为一个黑箱工具来使用,虽然这在短期来看确实效率很高,但长期来看绝对是百害无一利的。学习的过程中还可以自己动手做一些小demo,提升一下编程的乐趣,还是一件非常有益的事情。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6c0d995ffa4dad4cb4f965e2c5c8ab213bb37 | 12,308 | ipynb | Jupyter Notebook | lectures/09_Functional_python.ipynb | juditacs/bprof-python | dc1e00f177ac617f6802fce09c63acad8670e366 | [
"MIT"
] | null | null | null | lectures/09_Functional_python.ipynb | juditacs/bprof-python | dc1e00f177ac617f6802fce09c63acad8670e366 | [
"MIT"
] | null | null | null | lectures/09_Functional_python.ipynb | juditacs/bprof-python | dc1e00f177ac617f6802fce09c63acad8670e366 | [
"MIT"
] | null | null | null | 18.648485 | 138 | 0.443858 | [
[
[
"# Functional Python\n\n## BProf Python course\n\n### June 25-29, 2018\n\n#### Judit Ács",
"_____no_output_____"
],
[
"Python has 3 built-in functions that originate from functional programming.\n\n## Map\n\n- `map` applies a function on each element of a sequence",
"_____no_output_____"
]
],
[
[
"def double(e):\n return e * 2\n\nl = [2, 3, \"abc\"]\n\nlist(map(double, l))",
"_____no_output_____"
],
[
"map(double, l)",
"_____no_output_____"
],
[
"%%python2\n\ndef double(e):\n return e * 2\n\nl = [2, 3, \"abc\"]\n\nprint(map(double, l))",
"[4, 6, 'abcabc']\n"
],
[
"list(map(lambda x: x * 2, [2, 3, \"abc\"]))",
"_____no_output_____"
],
[
"class Doubler:\n def __call__(self, arg):\n return arg * 2\n \nlist(map(Doubler(), l))",
"_____no_output_____"
],
[
"[x * 2 for x in l]",
"_____no_output_____"
]
],
[
[
"## Filter\n\n- filter creates a list of elements for which a function returns true",
"_____no_output_____"
]
],
[
[
"def is_even(n):\n return n % 2 == 0\n\nl = [2, 3, -1, 0, 2]\n\nlist(filter(is_even, l))",
"_____no_output_____"
],
[
"list(filter(lambda x: x % 2 == 0, range(8)))",
"_____no_output_____"
],
[
"[e for e in l if e % 2 == 0]",
"_____no_output_____"
]
],
[
[
"### Most comprehensions can be rewritten using map and filter",
"_____no_output_____"
]
],
[
[
"l = [2, 3, 0, -1, 2, 0, 1]\n\nsignum = [x / abs(x) if x != 0 else x for x in l]\nprint(signum)",
"[1.0, 1.0, 0, -1.0, 1.0, 0, 1.0]\n"
],
[
"list(map(lambda x: x / abs(x) if x != 0 else 0, l))",
"_____no_output_____"
],
[
"even = [x for x in l if x % 2 == 0]\nprint(even)",
"[2, 0, 2, 0]\n"
],
[
"print(list(filter(lambda x: x % 2 == 0, l)))",
"[2, 0, 2, 0]\n"
]
],
[
[
"## Reduce\n\n- reduce applies a rolling computation on a sequence\n- the first argument of `reduce` is two-argument function\n- the second argument is the sequence\n- the result is accumulated in an accumulator",
"_____no_output_____"
]
],
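  [
   [
    "# What \"rolling computation\" means, written out as a plain loop\n# (an added illustration; functools.reduce below does exactly this):\ndef my_reduce(function, sequence, initial):\n accumulator = initial\n for element in sequence:\n accumulator = function(accumulator, element)\n return accumulator\n\nmy_reduce(lambda x, y: x * y, [1, 2, -1, 4], 1)",
    "_____no_output_____"
   ]
  ],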
[
[
"from functools import reduce\n\nl = [1, 2, -1, 4]\nreduce(lambda x, y: x*y, l)",
"_____no_output_____"
]
],
[
[
"an initial value for the accumulator may be supplied",
"_____no_output_____"
]
],
[
[
"reduce(lambda x, y: x*y, l, 10)",
"_____no_output_____"
],
[
"reduce(lambda x, y: max(x, y), l)\nreduce(max, l)",
"_____no_output_____"
],
[
"reduce(max, map(lambda n: n*n, l))",
"_____no_output_____"
],
[
"reduce(lambda x, y: x + int(y % 2 == 0), l, 0)",
"_____no_output_____"
]
],
[
[
"# `any` and `all`\n\nChecks if any or every element of an iterable evaluates to `False` in a boolean context.",
"_____no_output_____"
]
],
[
[
"def is_even(num):\n if num % 2 == 0:\n print(\"{} is even\".format(num))\n return True\n print(\"{} is odd\".format(num))\n return False\n\nl = [2, 4, 0, -1, 6, 8, 1]\n\n# all(map(is_even, l))\nall(is_even(i) for i in l)",
"2 is even\n4 is even\n0 is even\n-1 is odd\n"
],
[
"l = [3, 1, 5, 0, 7, 0, 0]\n# any(map(is_even, l))\nany(is_even(i) for i in l)",
"3 is odd\n1 is odd\n5 is odd\n0 is even\n"
]
],
[
[
"## `zip`",
"_____no_output_____"
]
],
[
[
"x = [1, 2, 0]\ny = [-2, 6, 0, 2]\n\nfor pair in zip(x, y):\n print(type(pair), pair)",
"<class 'tuple'> (1, -2)\n<class 'tuple'> (2, 6)\n<class 'tuple'> (0, 0)\n"
],
[
"for pair in zip(x, y, x, y):\n print(type(pair), pair)",
"<class 'tuple'> (1, -2, 1, -2)\n<class 'tuple'> (2, 6, 2, 6)\n<class 'tuple'> (0, 0, 0, 0)\n"
]
],
[
[
"## for and while loops do not create a new scope but functions do",
"_____no_output_____"
]
],
[
[
"y = \"outside foo\"\n\ndef foo():\n i = 2\n for _ in range(4):\n y = 3\n print(y)\n \nprint(\"Calling foo\")\nfoo()\nprint(\"Global y unchanged: {}\".format(y))",
"Calling foo\n3\nGlobal y unchanged: outside foo\n"
]
],
[
[
"# Global Interpreter Lock (GIL)\n\n- CPython, the reference implementation has a reference counting garbage collector\n- reference counting GC is **not** thread-safe :(\n- \"GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once\"\n- IO, image processing and Numpy (numerical computation and matrix library) heavy lifting happens outside the GIL\n- other computations cannot fully take advantage of multithreading :(\n- Jython and IronPython do not have a GIL\n\n## See also\n\n[Python wiki page on the GIL](https://wiki.python.org/moin/GlobalInterpreterLock)\n\n[Live GIL removal (advanced)](https://www.youtube.com/watch?v=pLqv11ScGsQ)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c6d11dba2de03679efc78ae7b0da9b02ec8b59 | 8,124 | ipynb | Jupyter Notebook | samples/notebooks/csharp/Samples/HousingML.ipynb | jmarolf/interactive | 845d965b41d3001fbe23984520a99390935e73bb | [
"MIT"
] | null | null | null | samples/notebooks/csharp/Samples/HousingML.ipynb | jmarolf/interactive | 845d965b41d3001fbe23984520a99390935e73bb | [
"MIT"
] | null | null | null | samples/notebooks/csharp/Samples/HousingML.ipynb | jmarolf/interactive | 845d965b41d3001fbe23984520a99390935e73bb | [
"MIT"
] | null | null | null | 24.543807 | 134 | 0.487198 | [
[
[
"#r \"nuget:Microsoft.ML,1.4.0\"\n#r \"nuget:Microsoft.ML.AutoML,0.16.0\"\n#r \"nuget:Microsoft.Data.Analysis,0.1.0\"",
"_____no_output_____"
],
[
"using Microsoft.Data.Analysis;\nusing XPlot.Plotly;",
"_____no_output_____"
],
[
"using Microsoft.AspNetCore.Html;\nFormatter<DataFrame>.Register((df, writer) =>\n{\n var headers = new List<IHtmlContent>();\n headers.Add(th(i(\"index\")));\n headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c.Name)));\n var rows = new List<List<IHtmlContent>>();\n var take = 20;\n for (var i = 0; i < Math.Min(take, df.RowCount); i++)\n {\n var cells = new List<IHtmlContent>();\n cells.Add(td(i));\n foreach (var obj in df[i])\n {\n cells.Add(td(obj));\n }\n rows.Add(cells);\n }\n \n var t = table(\n thead(\n headers),\n tbody(\n rows.Select(\n r => tr(r))));\n \n writer.Write(t);\n}, \"text/html\");",
"_____no_output_____"
],
[
"using System.IO;\nusing System.Net.Http;\nstring housingPath = \"housing.csv\";\nif (!File.Exists(housingPath))\n{\n var contents = new HttpClient()\n .GetStringAsync(\"https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv\").Result;\n \n File.WriteAllText(\"housing.csv\", contents);\n}",
"_____no_output_____"
],
[
"var housingData = DataFrame.LoadCsv(housingPath);\nhousingData",
"_____no_output_____"
],
[
"housingData.Description()",
"_____no_output_____"
],
[
"Chart.Plot(\n new Graph.Histogram()\n {\n x = housingData[\"median_house_value\"],\n nbinsx = 20\n }\n)",
"_____no_output_____"
],
[
"var chart = Chart.Plot(\n new Graph.Scattergl()\n {\n x = housingData[\"longitude\"],\n y = housingData[\"latitude\"],\n mode = \"markers\",\n marker = new Graph.Marker()\n {\n color = housingData[\"median_house_value\"],\n colorscale = \"Jet\"\n }\n }\n);\n\nchart.Width = 600;\nchart.Height = 600;\ndisplay(chart);",
"_____no_output_____"
],
[
"static T[] Shuffle<T>(T[] array)\n{\n Random rand = new Random();\n for (int i = 0; i < array.Length; i++)\n {\n int r = i + rand.Next(array.Length - i);\n T temp = array[r];\n array[r] = array[i];\n array[i] = temp;\n }\n return array;\n}\n\nint[] randomIndices = Shuffle(Enumerable.Range(0, (int)housingData.RowCount).ToArray());\nint testSize = (int)(housingData.RowCount * .1);\nint[] trainRows = randomIndices[testSize..];\nint[] testRows = randomIndices[..testSize];\n\nDataFrame housing_train = housingData[trainRows];\nDataFrame housing_test = housingData[testRows];\n\ndisplay(housing_train.RowCount);\ndisplay(housing_test.RowCount);",
"_____no_output_____"
],
[
"using Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.AutoML;",
"_____no_output_____"
],
[
"#!time\n\nvar mlContext = new MLContext();\n\nvar experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds: 15);\nvar result = experiment.Execute(housing_train, labelColumnName:\"median_house_value\");",
"_____no_output_____"
],
[
"var scatters = result.RunDetails.Where(d => d.ValidationMetrics != null).GroupBy( \n r => r.TrainerName,\n (name, details) => new Graph.Scattergl()\n {\n name = name,\n x = details.Select(r => r.RuntimeInSeconds),\n y = details.Select(r => r.ValidationMetrics.MeanAbsoluteError),\n mode = \"markers\",\n marker = new Graph.Marker() { size = 12 }\n });\n\nvar chart = Chart.Plot(scatters);\nchart.WithXTitle(\"Training Time\");\nchart.WithYTitle(\"Error\");\ndisplay(chart);\n\nConsole.WriteLine($\"Best Trainer:{result.BestRun.TrainerName}\");",
"_____no_output_____"
],
[
"var testResults = result.BestRun.Model.Transform(housing_test);\n\nvar trueValues = testResults.GetColumn<float>(\"median_house_value\");\nvar predictedValues = testResults.GetColumn<float>(\"Score\");\n\nvar predictedVsTrue = new Graph.Scattergl()\n{\n x = trueValues,\n y = predictedValues,\n mode = \"markers\",\n};\n\nvar maximumValue = Math.Max(trueValues.Max(), predictedValues.Max());\n\nvar perfectLine = new Graph.Scattergl()\n{\n x = new[] {0, maximumValue},\n y = new[] {0, maximumValue},\n mode = \"lines\",\n};\n\nvar chart = Chart.Plot(new[] {predictedVsTrue, perfectLine });\nchart.WithXTitle(\"True Values\");\nchart.WithYTitle(\"Predicted Values\");\nchart.WithLegend(false);\nchart.Width = 600;\nchart.Height = 600;\ndisplay(chart);",
"_____no_output_____"
],
[
"#!lsmagic",
"_____no_output_____"
],
[
"new [] { 1,2,3 } ",
"_____no_output_____"
],
[
"new { foo =\"123\" }",
"_____no_output_____"
],
[
"#!fsharp\n[1;2;3]",
"_____no_output_____"
],
[
"b(\"hello\").ToString()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c6d46145437ddba9b09db0b4aef43130c58545 | 175,880 | ipynb | Jupyter Notebook | mwdsbe/.ipynb_checkpoints/PA_state_business_license-checkpoint.ipynb | BinnyDaBin/MWDSBE | aa0de50f2289e47f7c2e9134334b23c3b5594f0c | [
"MIT"
] | null | null | null | mwdsbe/.ipynb_checkpoints/PA_state_business_license-checkpoint.ipynb | BinnyDaBin/MWDSBE | aa0de50f2289e47f7c2e9134334b23c3b5594f0c | [
"MIT"
] | 10 | 2021-03-10T01:06:45.000Z | 2022-02-26T21:02:40.000Z | mwdsbe/.ipynb_checkpoints/PA_state_business_license-checkpoint.ipynb | BinnyDaBin/MWDSBE | aa0de50f2289e47f7c2e9134334b23c3b5594f0c | [
"MIT"
] | null | null | null | 37.325976 | 182 | 0.374756 | [
[
[
"# Matching Registry and PA State Business License Data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport mwdsbe\nimport mwdsbe.datasets.licenses as licenses\nimport schuylkill as skool\nimport time",
"_____no_output_____"
],
[
"def drop_duplicates_by_date(df, date_column):\n df.sort_values(by=date_column, ascending=False, inplace=True)\n df = df.loc[~df.index.duplicated(keep=\"first\")]\n df.sort_index(inplace=True)\n return df",
"_____no_output_____"
]
],
[
[
"## Data",
"_____no_output_____"
]
],
[
[
"registry = mwdsbe.load_registry() # geopandas df\nlicense = licenses.CommercialActivityLicenses().get()",
"_____no_output_____"
],
[
"registry.head()",
"_____no_output_____"
],
[
"state_license = pd.read_csv('./data/PAStateBusinessLicense/Sales_Tax_Licenses_and_Certificates_Current_Monthly_County_Revenue.csv')",
"_____no_output_____"
],
[
"print('Size of state_license data:', len(state_license))",
"Size of state_license data: 347532\n"
],
[
"# convert state_license column names from titlecase to snakecase\ndef to_snake_case(aList):\n res = []\n for item in aList:\n words = item.strip().lower().split(' ')\n item = '_'.join(words)\n res.append(item)\n return res",
"_____no_output_____"
],
[
"state_license.columns = to_snake_case(state_license.columns.tolist())",
"_____no_output_____"
],
[
"# clean data\nignore_words = ['inc', 'group', 'llc', 'corp', 'pc', 'incorporated', 'ltd', 'co', 'associates', 'services', 'company', 'enterprises', 'enterprise', 'service', 'corporation']\ncleaned_registry = skool.clean_strings(registry, ['company_name', 'dba_name'], True, ignore_words)\ncleaned_license = skool.clean_strings(license, ['company_name'], True, ignore_words)\ncleaned_state_license = skool.clean_strings(state_license, ['legal_name', 'trade_name'], True, ignore_words)\n\ncleaned_registry = cleaned_registry.dropna(subset=['company_name'])\ncleaned_license = cleaned_license.dropna(subset=['company_name'])\ncleaned_state_license = cleaned_state_license.dropna(subset=['legal_name'])",
"_____no_output_____"
],
[
"len(cleaned_license)",
"_____no_output_____"
],
[
"cleaned_state_license.head()",
"_____no_output_____"
],
[
"# just getting PA state in registry\npa_registry = cleaned_registry[cleaned_registry.location_state == 'PA']",
"_____no_output_____"
],
[
"len(pa_registry)",
"_____no_output_____"
]
],
[
[
"## Merge registry and state_license by company_name and legal_name / trade name",
"_____no_output_____"
]
],
[
[
"# t1 = time.time()\n# merged = (\n# skool.tf_idf_merge(pa_registry, cleaned_state_license, left_on=\"company_name\", right_on=\"legal_name\", score_cutoff=85)\n# .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on=\"company_name\", right_on=\"trade_name\", score_cutoff=85)\n# .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on=\"dba_name\", right_on=\"legal_name\", score_cutoff=85)\n# .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on=\"dba_name\", right_on=\"trade_name\", score_cutoff=85)\n# )\n# t = time.time() - t1",
"_____no_output_____"
],
[
"# print('Execution time:', t/60, 'min')",
"Execution time: 22.08829193909963 min\n"
],
[
"# matched = merged.dropna(subset=['legal_name'])",
"_____no_output_____"
],
[
"# matched.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\state_license\\pa-registry-full-state-license\\tf-idf-85.xlsx', header=True)",
"_____no_output_____"
],
[
"matched_state = pd.read_excel(r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\state_license\\pa-registry-full-state-license\\tf-idf-85.xlsx')",
"_____no_output_____"
],
[
"len(matched_state)",
"_____no_output_____"
],
[
"exact_matches = matched_state[matched_state.match_probability == 1]",
"_____no_output_____"
],
[
"len(exact_matches)",
"_____no_output_____"
]
],
[
[
"##### Eliminate companies with different zip code",
"_____no_output_____"
]
],
[
[
"matched_state['postal_code_clean'] = matched_state.postal_code.astype(str).apply(lambda x : x.split(\"-\")[0]).astype(float)",
"_____no_output_____"
],
[
"matched_state = matched_state.set_index('left_index')",
"_____no_output_____"
],
[
"matched_state_zip = matched_state[matched_state.zip_code == matched_state.postal_code_clean]",
"_____no_output_____"
],
[
"len(matched_state_zip)",
"_____no_output_____"
],
[
"matched_state_zip['expiration_date'] = pd.to_datetime(matched_state_zip['expiration_date'], errors='coerce')",
"_____no_output_____"
],
[
"matched_state_zip = drop_duplicates_by_date(matched_state_zip, 'expiration_date')",
"_____no_output_____"
],
[
"len(matched_state_zip) # state_license, same zip code, without duplicates",
"_____no_output_____"
],
[
"matched_state_zip",
"_____no_output_____"
]
],
[
[
"## Comparing between match between \"registry-opendata license\" and \"registry-state license\"\nHow many more new companies do we get from registry-opendata license matching?",
"_____no_output_____"
]
],
[
[
"# t1 = time.time()\n# merged = (\n# skool.tf_idf_merge(cleaned_registry, cleaned_license, on=\"company_name\", score_cutoff=85)\n# .pipe(skool.tf_idf_merge, cleaned_registry, cleaned_license, left_on=\"dba_name\", right_on=\"company_name\", score_cutoff=85)\n# )\n# t = time.time() - t1",
"_____no_output_____"
],
[
"# print('Execution time:', t/60, 'min')",
"Execution time: 3.3725738008817037 min\n"
],
[
"# matched_openphilly_license = merged.dropna(subset=['company_name_y'])",
"_____no_output_____"
],
[
"# len(matched_openphilly_license)",
"_____no_output_____"
],
[
"# matched_openphilly_license.issue_date = matched_openphilly_license.issue_date.astype(str)",
"C:\\Users\\dabinlee\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\pandas\\core\\generic.py:5208: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self[name] = value\n"
],
[
"# matched_openphilly_license.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\license-opendataphilly\\tf-idf\\tf-idf-85.xlsx', header=True)",
"_____no_output_____"
]
],
[
[
"##### Loading matched of registry and opendataphilly_license data",
"_____no_output_____"
]
],
[
[
"matched_opendataphilly_license = pd.read_excel(r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\license-opendataphilly\\tf-idf\\tf-idf-85.xlsx')",
"_____no_output_____"
],
[
"matched_opendataphilly_license = matched_opendataphilly_license.set_index('left_index')",
"_____no_output_____"
],
[
"len(matched_opendataphilly_license)",
"_____no_output_____"
],
[
"matched_opendataphilly_license = drop_duplicates_by_date(matched_opendataphilly_license, \"issue_date\") # without duplicates",
"C:\\Users\\dabinlee\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n after removing the cwd from sys.path.\n"
],
[
"len(matched_opendataphilly_license)",
"_____no_output_____"
],
[
"matched_opendataphilly_license.tail()",
"_____no_output_____"
],
[
"# unique company?\nlen(matched_opendataphilly_license.index.unique()) # yes",
"_____no_output_____"
],
[
"diff = matched_state_zip.index.difference(matched_opendataphilly_license.index).tolist()",
"_____no_output_____"
],
[
"len(diff)",
"_____no_output_____"
],
[
"matched_state_zip.loc[diff][['company_name', 'dba_name', 'legal_name', 'trade_name']]",
"_____no_output_____"
],
[
"# newly matched: matching with state_license data\ndifference = matched_state_zip.loc[diff]",
"_____no_output_____"
],
[
"difference",
"_____no_output_____"
],
[
"# difference.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\difference.xlsx', header=True)",
"_____no_output_____"
]
],
[
[
"## Investigate missing companies in opendataphilly license data\nWe found 94 newly matched companies from matching between registry and state_license, why these are not appeared in matching between registry and opendataphilly license data?",
"_____no_output_____"
]
],
[
[
"t1 = time.time()\nmerged = (\n skool.tf_idf_merge(cleaned_registry, cleaned_license, on=\"company_name\", score_cutoff=0)\n .pipe(skool.tf_idf_merge, cleaned_registry, cleaned_license, left_on=\"dba_name\", right_on=\"company_name\", score_cutoff=0)\n)\nt = time.time() - t1",
"_____no_output_____"
],
[
"print('Execution time:', t/60, 'min')",
"Execution time: 4.075732707977295 min\n"
],
[
"matched = merged.dropna(subset=['company_name_y'])",
"_____no_output_____"
],
[
"matched = drop_duplicates_by_date(matched, 'issue_date')",
"C:\\Users\\dabinlee\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \nC:\\Users\\dabinlee\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n after removing the cwd from sys.path.\n"
],
[
"matched",
"_____no_output_____"
],
[
"cleaned_registry.loc[cleaned_registry.index.difference(matched.index)]",
"_____no_output_____"
],
[
"matched = matched.loc[difference.index]",
"C:\\Users\\dabinlee\\AppData\\Local\\Continuum\\anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: \nPassing list-likes to .loc or [] with any missing label will raise\nKeyError in the future, you can use .reindex() as an alternative.\n\nSee the documentation here:\nhttps://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"matched.issue_date = matched.issue_date.astype(str)\nmatched.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\missing94.xlsx', header=True)",
"_____no_output_____"
],
[
"matched.match_probability.median()",
"_____no_output_____"
],
[
"difference.match_probability.median()",
"_____no_output_____"
]
],
[
[
"### Compare intersection between matched_opendataphilly_license and matched_state_zip\n* matched_opendataphilly_license: matched data between pa_registry and license data from opendataphilly\n* matched_state_zip: matched data between pa_registry and license data from state_registry and filter matches which do not match zipcodes",
"_____no_output_____"
]
],
[
[
"intersection = matched_state_zip.index.intersection(matched_opendataphilly_license.index).tolist()",
"_____no_output_____"
],
[
"len(intersection) # 246 - 94",
"_____no_output_____"
],
[
"intersection1 = matched_opendataphilly_license.loc[intersection]",
"_____no_output_____"
],
[
"len(intersection1)",
"_____no_output_____"
],
[
"intersection2 = matched_state_zip.loc[intersection]",
"_____no_output_____"
],
[
"len(intersection2)",
"_____no_output_____"
],
[
"intersection1 = intersection1[['company_name_x', 'dba_name', 'match_probability', 'company_name_y']]",
"_____no_output_____"
],
[
"intersection2 = intersection2[['match_probability', 'legal_name', 'trade_name']]",
"_____no_output_____"
],
[
"intersection = intersection1.merge(intersection2, left_index=True, right_index=True)",
"_____no_output_____"
],
[
"intersection",
"_____no_output_____"
],
[
"# intersection.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\intersection.xlsx', header=True)",
"_____no_output_____"
]
],
[
[
"## Merge by address - Not matching well",
"_____no_output_____"
]
],
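[
[
"# Hedged aside, not part of the original cleaning pipeline: street strings often disagree on suffix\n# abbreviations and unit numbers, which hurts fuzzy matching on addresses; a light normalization pass\n# like this sketch may help. The rules below are illustrative assumptions, not the project's actual ones.\nimport re\n\ndef normalize_street(address):\n    address = address.lower().strip()\n    address = re.sub(r'\\bst\\.?(?=\\s|$)', 'street', address)  # expand a common suffix abbreviation\n    address = re.sub(r'\\bave?\\.?(?=\\s|$)', 'avenue', address)\n    address = re.sub(r'\\b(ste|suite|unit|apt)\\.?\\s*\\S+$', '', address)  # drop a trailing unit number\n    return re.sub(r'\\s+', ' ', address).strip()\n\nprint(normalize_street('1234 Market St. Ste 500'))  # -> '1234 market street'",
"_____no_output_____"
]
],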
[
[
"cleaned_state_license.head()",
"_____no_output_____"
],
[
"# split street address from address_with_lat/long\ncleaned_state_license['street_address'] = cleaned_state_license['address_with_lat/long'].astype(str).apply(lambda x : x.split(\"\\n\")[0])",
"_____no_output_____"
],
[
"cleaned_state_license.head()",
"_____no_output_____"
],
[
"pa_registry.head()",
"_____no_output_____"
],
[
"# clean street information\ncleaned_pa_registry = skool.clean_strings(pa_registry, ['location'], True)\ncleaned_state_license = skool.clean_strings(cleaned_state_license, ['street_address'], True)\n\ncleaned_pa_registry = cleaned_pa_registry.dropna(subset=['location'])\ncleaned_state_license = cleaned_state_license.dropna(subset=['street_address'])",
"_____no_output_____"
],
[
"t1 = time.time()\nmerged_by_street = skool.tf_idf_merge(cleaned_pa_registry, cleaned_state_license, left_on='location', right_on='street_address', score_cutoff=95)\nt = time.time() - t1",
"_____no_output_____"
],
[
"print('Execution time:', t/60, 'min')",
"Execution time: 7.25786319176356 min\n"
],
[
"matched_by_street = merged_by_street.dropna(subset=['street_address'])",
"_____no_output_____"
],
[
"len(matched_by_street)",
"_____no_output_____"
],
[
"len(matched_by_street.index.unique()) # bug in tf-idf merge: not doing best match",
"_____no_output_____"
],
[
"matched_by_street[['company_name', 'location', 'match_probability', 'legal_name', 'trade_name', 'street_address']]",
"_____no_output_____"
],
[
"# matched_by_street.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\state_license\\by_street\\tf-idf-95.xlsx', header=True)",
"_____no_output_____"
],
[
"diff = matched_by_street.index.difference(matched_opendataphilly_license.index) # newly catched matches",
"_____no_output_____"
],
[
"len(diff)",
"_____no_output_____"
],
[
"newly_matched_by_street = matched_by_street.loc[diff][['company_name', 'dba_name', 'location', 'legal_name', 'trade_name', 'street_address']]",
"_____no_output_____"
],
[
"# newly_matched_by_street.to_excel (r'C:\\Users\\dabinlee\\Desktop\\mwdsbe\\data\\state_license\\by_street\\tf-idf-95-diff.xlsx', header=True)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c703c98d7a8a4fbb37bb2545cdfe5012dbc9b3 | 64,658 | ipynb | Jupyter Notebook | Next_Season_Forecasts.ipynb | sanzgiri/orange_mamba_nbadh18 | b00a140f1628d6d0383329a112a4afc72533aa47 | [
"MIT"
] | null | null | null | Next_Season_Forecasts.ipynb | sanzgiri/orange_mamba_nbadh18 | b00a140f1628d6d0383329a112a4afc72533aa47 | [
"MIT"
] | null | null | null | Next_Season_Forecasts.ipynb | sanzgiri/orange_mamba_nbadh18 | b00a140f1628d6d0383329a112a4afc72533aa47 | [
"MIT"
] | null | null | null | 31.741777 | 181 | 0.455999 | [
[
[
"import pandas as pd\nfrom datetime import datetime, timedelta\nimport pickle",
"_____no_output_____"
],
[
"start_date = datetime(2018, 10, 16, 0, 0, 0)\nend_date = datetime(2018, 12, 31, 0, 0, 0)\nnum_days = (end_date - start_date).days + 1\ndfs = pd.DataFrame(index=range(num_days))\nentries = []\n\n\nfor d in range(num_days):\n day = start_date + timedelta(days=d)\n dstr = day.strftime('%Y%m%d')\n url = 'http://www.espn.com/nba/schedule/_/date/{0}'.format(dstr)\n x = pd.read_html(url)\n df = x[0]\n if (len(df) > 1):\n for j in range(len(df)):\n t1 = df['matchup'].iloc[j]\n t2 = df['Unnamed: 1'].iloc[j]\n t1s = t1.split(' ')\n home = t1s[-1:][0]\n t2s = t2.split(' ')\n away = t2s[-1:][0]\n entries.append((day, home, away))\n print(dstr, home, away)\n\ndfs = pd.DataFrame(entries, columns=['day', 'home', 'away'])",
"20181016 PHI BOS\n20181016 OKC GS\n20181017 MIL CHA\n20181017 BKN DET\n20181017 MEM IND\n20181017 MIA ORL\n20181017 ATL NY\n20181017 CLE TOR\n20181017 NO HOU\n20181017 MIN SA\n20181017 UTAH SAC\n20181017 DAL PHX\n20181017 DEN LAC\n20181018 CHI PHI\n20181018 MIA WSH\n20181018 LAL POR\n20181019 CHA ORL\n20181019 NY BKN\n20181019 BOS TOR\n20181019 ATL MEM\n20181019 CLE MIN\n20181019 SAC NO\n20181019 IND MIL\n20181019 GS UTAH\n20181019 OKC LAC\n20181020 TOR WSH\n20181020 BKN IND\n20181020 BOS NY\n20181020 ORL PHI\n20181020 CHA MIA\n20181020 DET CHI\n20181020 MIN DAL\n20181020 PHX DEN\n20181020 SA POR\n20181020 HOU LAL\n20181021 ATL CLE\n20181021 SAC OKC\n20181021 GS DEN\n20181021 HOU LAC\n20181022 ORL BOS\n20181022 CHA TOR\n20181022 NY MIL\n20181022 IND MIN\n20181022 CHI DAL\n20181022 MEM UTAH\n20181022 WSH POR\n20181022 PHX GS\n20181022 SA LAL\n20181023 PHI DET\n20181023 LAC NO\n20181023 SAC DEN\n20181024 DAL ATL\n20181024 BKN CLE\n20181024 NY MIA\n20181024 MIN TOR\n20181024 CHA CHI\n20181024 UTAH HOU\n20181024 IND SA\n20181024 PHI MIL\n20181024 LAL PHX\n20181024 MEM SAC\n20181024 WSH GS\n20181025 CLE DET\n20181025 POR ORL\n20181025 BOS OKC\n20181025 DEN LAL\n20181026 CHI CHA\n20181026 GS NY\n20181026 DAL TOR\n20181026 LAC HOU\n20181026 MIL MIN\n20181026 BKN NO\n20181026 WSH SAC\n20181027 BOS DET\n20181027 UTAH NO\n20181027 CHI ATL\n20181027 IND CLE\n20181027 CHA PHI\n20181027 POR MIA\n20181027 PHX MEM\n20181027 ORL MIL\n20181027 LAL SA\n20181028 GS BKN\n20181028 UTAH DAL\n20181028 PHX OKC\n20181028 WSH LAC\n20181029 POR IND\n20181029 ATL PHI\n20181029 SAC MIA\n20181029 BKN NY\n20181029 GS CHI\n20181029 TOR MIL\n20181029 LAL MIN\n20181029 DAL SA\n20181029 NO DEN\n20181030 MIA CHA\n20181030 ATL CLE\n20181030 SAC ORL\n20181030 DET BOS\n20181030 PHI TOR\n20181030 POR HOU\n20181030 WSH MEM\n20181030 LAC OKC\n20181031 DET BKN\n20181031 IND NY\n20181031 DEN CHI\n20181031 UTAH MIN\n20181031 NO GS\n20181031 DAL LAL\n20181031 SA PHX\n20181101 OKC CHA\n20181101 DEN CLE\n20181101 LAC PHI\n20181101 SAC ATL\n20181101 MIL BOS\n20181101 NO POR\n20181102 LAC ORL\n20181102 HOU BKN\n20181102 OKC WSH\n20181102 IND CHI\n20181102 NY DAL\n20181102 MEM UTAH\n20181102 TOR PHX\n20181102 MIN GS\n20181103 DET PHI\n20181103 CLE CHA\n20181103 BOS IND\n20181103 MIA ATL\n20181103 HOU CHI\n20181103 NO SA\n20181103 UTAH DEN\n20181103 LAL POR\n20181104 SAC MIL\n20181104 PHI BKN\n20181104 NY WSH\n20181104 ORL SA\n20181104 MEM PHX\n20181104 MIN POR\n20181104 TOR LAL\n20181105 MIA DET\n20181105 HOU IND\n20181105 CLE ORL\n20181105 CHI NY\n20181105 NO OKC\n20181105 BOS DEN\n20181105 TOR UTAH\n20181105 MEM GS\n20181105 MIN LAC\n20181106 ATL CHA\n20181106 WSH DAL\n20181106 BKN PHX\n20181106 MIL POR\n20181107 OKC CLE\n20181107 DET ORL\n20181107 NY ATL\n20181107 SA MIA\n20181107 PHI IND\n20181107 DEN MEM\n20181107 CHI NO\n20181107 DAL UTAH\n20181107 TOR SAC\n20181107 MIN LAL\n20181108 HOU OKC\n20181108 BOS PHX\n20181108 LAC POR\n20181108 MIL GS\n20181109 WSH ORL\n20181109 CHA PHI\n20181109 DET ATL\n20181109 IND MIA\n20181109 BKN DEN\n20181109 BOS UTAH\n20181109 MIN SAC\n20181110 NY TOR\n20181110 MIL LAC\n20181110 PHX NO\n20181110 WSH MIA\n20181110 CLE CHI\n20181110 PHI MEM\n20181110 HOU SA\n20181110 BKN GS\n20181110 OKC DAL\n20181110 LAL SAC\n20181111 CHA DET\n20181111 IND HOU\n20181111 ORL NY\n20181111 MIL DEN\n20181111 BOS POR\n20181111 ATL LAL\n20181112 ORL WSH\n20181112 PHI MIA\n20181112 NO TOR\n20181112 DAL CHI\n20181112 UTAH MEM\n20181112 BKN MIN\n20181112 PHX OKC\n20181112 SA SAC\n20181112 GS LAC\n20181113 CHA CLE\n20181113 
HOU DEN\n20181113 ATL GS\n20181114 PHI ORL\n20181114 CLE WSH\n20181114 CHI BOS\n20181114 MIA BKN\n20181114 DET TOR\n20181114 MEM MIL\n20181114 NO MIN\n20181114 NY OKC\n20181114 UTAH DAL\n20181114 SA PHX\n20181114 POR LAL\n20181115 GS HOU\n20181115 ATL DEN\n20181115 SA LAC\n20181116 TOR BOS\n20181116 MIA IND\n20181116 UTAH PHI\n20181116 BKN WSH\n20181116 SAC MEM\n20181116 POR MIN\n20181116 NY NO\n20181116 CHI MIL\n20181117 LAC BKN\n20181117 PHI CHA\n20181117 ATL IND\n20181117 LAL ORL\n20181117 DEN NO\n20181117 UTAH BOS\n20181117 TOR CHI\n20181117 SAC HOU\n20181117 GS DAL\n20181117 OKC PHX\n20181118 MEM MIN\n20181118 LAL MIA\n20181118 NY ORL\n20181118 POR WSH\n20181118 GS SA\n20181119 BOS CHA\n20181119 CLE DET\n20181119 UTAH IND\n20181119 PHX PHI\n20181119 LAC ATL\n20181119 DAL MEM\n20181119 DEN MIL\n20181119 SA NO\n20181119 OKC SAC\n20181120 TOR ORL\n20181120 LAC WSH\n20181120 BKN MIA\n20181120 POR NY\n20181121 IND CHA\n20181121 NO PHI\n20181121 TOR ATL\n20181121 NY BOS\n20181121 LAL CLE\n20181121 PHX CHI\n20181121 DET HOU\n20181121 POR MIL\n20181121 DEN MIN\n20181121 BKN DAL\n20181121 MEM SA\n20181121 SAC UTAH\n20181121 OKC GS\n20181123 MIN BKN\n20181123 MEM LAC\n20181123 HOU DET\n20181123 BOS ATL\n20181123 NO NY\n20181123 CLE PHI\n20181123 WSH TOR\n20181123 SA IND\n20181123 MIA CHI\n20181123 CHA OKC\n20181123 PHX MIL\n20181123 ORL DEN\n20181123 POR GS\n20181123 UTAH LAL\n20181124 HOU CLE\n20181124 NO WSH\n20181124 CHI MIN\n20181124 DEN OKC\n20181124 BOS DAL\n20181124 SA MIL\n20181124 SAC GS\n20181125 ORL LAL\n20181125 PHX DET\n20181125 CHA ATL\n20181125 PHI BKN\n20181125 MIA TOR\n20181125 NY MEM\n20181125 UTAH SAC\n20181125 LAC POR\n20181126 MIL CHA\n20181126 MIN CLE\n20181126 HOU WSH\n20181126 SA CHI\n20181126 BOS NO\n20181126 IND UTAH\n20181126 ORL GS\n20181127 NY DET\n20181127 ATL MIA\n20181127 TOR MEM\n20181127 LAL DEN\n20181127 IND PHX\n20181128 ATL CHA\n20181128 NY PHI\n20181128 UTAH BKN\n20181128 DAL HOU\n20181128 CHI MIL\n20181128 SA MIN\n20181128 WSH NO\n20181128 CLE OKC\n20181128 ORL POR\n20181128 PHX LAC\n20181129 GS TOR\n20181129 IND LAL\n20181129 LAC SAC\n20181130 CLE BOS\n20181130 UTAH CHA\n20181130 CHI DET\n20181130 WSH PHI\n20181130 MEM BKN\n20181130 NO MIA\n20181130 ATL OKC\n20181130 HOU SA\n20181130 ORL PHX\n20181130 DAL LAL\n20181130 DEN POR\n20181201 MIL NY\n20181201 GS DET\n20181201 BKN WSH\n20181201 TOR CLE\n20181201 CHI HOU\n20181201 BOS MIN\n20181201 IND SAC\n20181202 PHX LAL\n20181202 NO CHA\n20181202 UTAH MIA\n20181202 MEM PHI\n20181202 LAC DAL\n20181202 POR SA\n20181203 OKC DET\n20181203 GS ATL\n20181203 CLE BKN\n20181203 WSH NY\n20181203 DEN TOR\n20181203 HOU MIN\n20181203 LAC NO\n20181204 CHI IND\n20181204 ORL MIA\n20181204 POR DAL\n20181204 SAC PHX\n20181204 SA UTAH\n20181205 GS CLE\n20181205 DEN ORL\n20181205 WSH ATL\n20181205 OKC BKN\n20181205 PHI TOR\n20181205 LAC MEM\n20181205 DET MIL\n20181205 CHA MIN\n20181205 DAL NO\n20181205 SA LAL\n20181206 NY BOS\n20181206 PHX POR\n20181206 HOU UTAH\n20181207 DEN CHA\n20181207 PHI DET\n20181207 IND ORL\n20181207 TOR BKN\n20181207 SAC CLE\n20181207 OKC CHI\n20181207 MEM NO\n20181207 LAL SA\n20181207 MIA PHX\n20181207 GS MIL\n20181208 HOU DAL\n20181208 SAC IND\n20181208 DEN ATL\n20181208 WSH CLE\n20181208 BKN NY\n20181208 BOS CHI\n20181208 LAL MEM\n20181208 MIN POR\n20181208 MIA LAC\n20181209 NO DET\n20181209 MIL TOR\n20181209 UTAH SA\n20181209 CHA NY\n20181210 WSH IND\n20181210 DET PHI\n20181210 NO BOS\n20181210 SAC CHI\n20181210 CLE MIL\n20181210 UTAH OKC\n20181210 ORL DAL\n20181210 MEM DEN\n20181210 LAC 
PHX\n20181210 MIN GS\n20181210 MIA LAL\n20181211 POR HOU\n20181211 PHX SA\n20181211 TOR LAC\n20181212 DET CHA\n20181212 NY CLE\n20181212 MIL IND\n20181212 BKN PHI\n20181212 BOS WSH\n20181212 POR MEM\n20181212 OKC NO\n20181212 ATL DAL\n20181212 MIA UTAH\n20181212 MIN SAC\n20181212 TOR GS\n20181213 LAL HOU\n20181213 LAC SA\n20181213 CHI ORL\n20181213 DAL PHX\n20181214 ATL BOS\n20181214 NY CHA\n20181214 WSH BKN\n20181214 MIL CLE\n20181214 IND PHI\n20181214 MIA MEM\n20181214 OKC DEN\n20181214 TOR POR\n20181214 GS SAC\n20181215 UTAH ORL\n20181215 LAL CHA\n20181215 BOS DET\n20181215 HOU MEM\n20181215 CHI SA\n20181215 LAC OKC\n20181215 MIN PHX\n20181216 ATL BKN\n20181216 PHI CLE\n20181216 NY IND\n20181216 LAL WSH\n20181216 SAC DAL\n20181216 MIA NO\n20181216 TOR DEN\n20181217 MIL DET\n20181217 PHX NY\n20181217 UTAH HOU\n20181217 SAC MIN\n20181217 CHI OKC\n20181217 PHI SA\n20181217 MEM GS\n20181217 POR LAC\n20181218 CLE IND\n20181218 WSH ATL\n20181218 LAL BKN\n20181218 DAL DEN\n20181219 CLE CHA\n20181219 SA ORL\n20181219 NY PHI\n20181219 PHX BOS\n20181219 IND TOR\n20181219 BKN CHI\n20181219 WSH HOU\n20181219 NO MIL\n20181219 DET MIN\n20181219 GS UTAH\n20181219 MEM POR\n20181219 OKC SAC\n20181220 HOU MIA\n20181220 DAL LAC\n20181221 DET CHA\n20181221 CLE TOR\n20181221 IND BKN\n20181221 ATL NY\n20181221 MIL BOS\n20181221 ORL CHI\n20181221 MIN SA\n20181221 UTAH POR\n20181221 MEM SAC\n20181221 NO LAL\n20181222 DEN LAC\n20181222 PHX WSH\n20181222 TOR PHI\n20181222 MIL MIA\n20181222 SA HOU\n20181222 DAL GS\n20181222 OKC UTAH\n20181223 ATL DET\n20181223 WSH IND\n20181223 CHA BOS\n20181223 PHX BKN\n20181223 CHI CLE\n20181223 MIA ORL\n20181223 NO SAC\n20181223 MIN OKC\n20181223 LAC GS\n20181223 DAL POR\n20181223 MEM LAL\n"
],
[
"dfs.home.unique()",
"_____no_output_____"
],
[
"mydf = dfs.copy()",
"_____no_output_____"
],
[
"atlantic = ['BOS', 'BRK', 'NYK', 'PHI', 'TOR']\ncentral = ['CHI', 'CLE', 'DET', 'IND', 'MIL']\nsoutheast = ['ATL', 'CHA', 'MIA', 'ORL', 'WAS']\n\nsouthwest = ['DAL', 'HOU', 'MEM', 'NOP', 'SAS']\nnorthwest = ['DEN', 'MIN', 'OKC', 'POR', 'UTA']\npacific = ['GSW', 'LAC', 'LAL', 'PHX', 'SAC']",
"_____no_output_____"
],
[
"mydf.home.replace({'NO': 'NOP', 'BKN': 'BRK', 'NY': 'NYK', 'UTAH': 'UTA', 'GS': 'GSW', 'SA': 'SAS', \n 'WSH': 'WAS'}, inplace=True) \nmydf.away.replace({'NO': 'NOP', 'BKN': 'BRK', 'NY': 'NYK', 'UTAH': 'UTA', 'GS': 'GSW', 'SA': 'SAS', \n 'WSH': 'WAS'}, inplace=True)",
"_____no_output_____"
],
[
"mydf.head()",
"_____no_output_____"
],
[
"mydf['month'] = mydf['day'].dt.month",
"_____no_output_____"
],
[
"mydf.head()",
"_____no_output_____"
],
[
"mydf.shape",
"_____no_output_____"
],
[
"nbs_df = pd.read_csv('social_nba_2.csv')",
"_____no_output_____"
],
[
"nbs_df2 = nbs_df[(nbs_df.year==2018) & (nbs_df.month==8)]\nnbs_df2.team.unique()",
"_____no_output_____"
],
[
"dx = nbs_df2.copy()",
"_____no_output_____"
],
[
"dx.columns",
"_____no_output_____"
],
[
"dx[(dx.team==\"MIA\") | (dx.team==\"BOS\") | (dx.team==\"OKC\") | (dx.team==\"NYK\") | (dx.team==\"CHI\") ]",
"_____no_output_____"
],
[
"dx[(dx.team==\"SAS\")]",
"_____no_output_____"
],
[
"for j in range(len(mydf)):\n mydf.loc[mydf.index==j, 'gts'] = dx[dx.team==mydf.iloc[j].home]['gts'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['gts'].iloc[0]\n mydf.loc[mydf.index==j, 'wp'] = dx[dx.team==mydf.iloc[j].home]['wp_pageviews'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['wp_pageviews'].iloc[0]\n mydf.loc[mydf.index==j, 'tts'] = dx[dx.team==mydf.iloc[j].home]['TTS'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['TTS'].iloc[0]\n mydf.loc[mydf.index==j, 'unq'] = dx[dx.team==mydf.iloc[j].home]['UNQ'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['UNQ'].iloc[0]\n mydf.loc[mydf.index==j, 'fb_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Facebook'].iloc[0]\n mydf.loc[mydf.index==j, 'tw_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Twitter'].iloc[0]\n mydf.loc[mydf.index==j, 'inst_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Instagram'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Instagram'].iloc[0]\n mydf.loc[mydf.index==j, 'snap_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Snapchat'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Snapchat'].iloc[0]\n mydf.loc[mydf.index==j, 'wb_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Weibo'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Weibo'].iloc[0]\n mydf.loc[mydf.index==j, 'fb_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Facebook'].iloc[0]\n mydf.loc[mydf.index==j, 'tw_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Twitter'].iloc[0]\n mydf.loc[mydf.index==j, 'inst_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Instagram'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Instagram'].iloc[0]\n mydf.loc[mydf.index==j, 'fb_imps'] = dx[dx.team==mydf.iloc[j].home]['Impressions_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Impressions_Facebook'].iloc[0]\n mydf.loc[mydf.index==j, 'tw_imps'] = dx[dx.team==mydf.iloc[j].home]['Impressions_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Impressions_Twitter'].iloc[0]",
"_____no_output_____"
],
[
"mydf.head()",
"_____no_output_____"
],
[
"mydf.shape",
"_____no_output_____"
],
[
"social_feats = [None]*4\n\nfor i in range(4):\n \n j = str(i+1)\n social_feats[i] = [None]*14\n \n mydf['gts_'+j] = mydf['gts']\n mydf['wp_'+j] = mydf['wp']\n mydf['tts_'+j] = mydf['tts']\n mydf['unq_'+j] = mydf['unq']\n\n mydf['fb_foll_'+j] = mydf['fb_foll']\n mydf['inst_foll_'+j] = mydf['inst_foll']\n mydf['tw_foll_'+j] = mydf['tw_foll']\n mydf['snap_foll_'+j] = mydf['snap_foll']\n mydf['wb_foll_'+j] = mydf['wb_foll']\n\n mydf['fb_eng_'+j] = mydf['fb_eng']\n mydf['inst_eng_'+j] = mydf['inst_eng']\n mydf['tw_eng_'+j] = mydf['tw_eng']\n mydf['fb_imps_'+j] = mydf['fb_imps']\n mydf['tw_imps_'+j] = mydf['tw_imps']\n \n social_feats[i] = ['gts_'+j, 'wp_'+j, 'tts_'+j, 'unq_'+j,\n 'fb_foll_'+j, 'inst_foll_'+j, 'tw_foll_'+j, 'snap_foll_'+j, 'wb_foll_'+j, \n 'fb_eng_'+j, 'inst_eng_'+j, 'tw_eng_'+j, 'fb_imps_'+j, 'tw_imps_'+j]\n ",
"_____no_output_____"
],
[
"mydf.shape",
"_____no_output_____"
],
[
"mydf.head()",
"_____no_output_____"
],
[
"!ls *.pkl",
"LR_model_avg_markup_norm_1.pkl\tRF_model_avg_markup_norm_2.pkl\r\nLR_model_avg_markup_norm_2.pkl\tRF_model_avg_markup_norm_3.pkl\r\nLR_model_avg_markup_norm_3.pkl\tRF_model_avg_markup_norm_4.pkl\r\nLR_model_avg_markup_norm_4.pkl\tRF_model_norm_minutes_1.pkl\r\nLR_model_norm_minutes_1.pkl\tRF_model_norm_minutes_2.pkl\r\nLR_model_norm_minutes_2.pkl\tRF_model_norm_minutes_3.pkl\r\nLR_model_norm_minutes_3.pkl\tRF_model_norm_minutes_4.pkl\r\nLR_model_norm_minutes_4.pkl\tRF_model_Unique_Viewers_1.pkl\r\nLR_model_Unique_Viewers_1.pkl\tRF_model_Unique_Viewers_2.pkl\r\nLR_model_Unique_Viewers_2.pkl\tRF_model_Unique_Viewers_3.pkl\r\nLR_model_Unique_Viewers_3.pkl\tRF_model_Unique_Viewers_4.pkl\r\nLR_model_Unique_Viewers_4.pkl\ttesting_gts.pkl\r\nRF_model_avg_markup_norm_1.pkl\ttraining_gts.pkl\r\n"
],
[
"modelfile = 'RF_model_Unique_Viewers_2.pkl'\nwith open(modelfile, 'rb') as fd:\n model = pickle.load(fd)",
"_____no_output_____"
],
[
"mydf1 = mydf[mydf.month==10]\nmydf2 = mydf[mydf.month==11]\nmydf3 = mydf[mydf.month==12]",
"_____no_output_____"
],
[
"j=1\ntgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm']\nfor f in tgt_features:\n modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl'\n with open(modelfile, 'rb') as fd:\n model = pickle.load(fd)\n mydf1.loc[:, f] = model.predict(mydf1[social_feats[j]])",
"/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:362: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[key] = _infer_fill_value(value)\n/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:543: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[item] = s\n"
],
[
"j=2\ntgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm']\nfor f in tgt_features:\n modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl'\n with open(modelfile, 'rb') as fd:\n model = pickle.load(fd)\n mydf2.loc[:, f] = model.predict(mydf2[social_feats[j]])",
"/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:543: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[item] = s\n/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:362: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[key] = _infer_fill_value(value)\n"
],
[
"j=3\ntgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm']\nfor f in tgt_features:\n modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl'\n with open(modelfile, 'rb') as fd:\n model = pickle.load(fd)\n mydf3.loc[:, f] = model.predict(mydf3[social_feats[j]])",
"/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:362: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[key] = _infer_fill_value(value)\n/home/asanzgiri/miniconda3/envs/py36/lib/python3.6/site-packages/pandas/core/indexing.py:543: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self.obj[item] = s\n"
],
[
"mydf_final = pd.concat([mydf1, mydf2, mydf3])",
"_____no_output_____"
],
[
"mydf_final.shape",
"_____no_output_____"
],
[
"mydf_final.columns",
"_____no_output_____"
],
[
"mydf_save = mydf_final[['day', 'home', 'away', 'gts', 'wp', 'tts', 'unq', 'fb_foll',\n 'tw_foll', 'inst_foll', 'snap_foll', 'wb_foll', 'fb_eng', 'tw_eng',\n 'inst_eng', 'fb_imps', 'tw_imps', 'Unique_Viewers', 'norm_minutes',\n 'avg_markup_norm']]",
"_____no_output_____"
],
[
"mydf_save.to_csv('final_predictions.csv', index=None)",
"_____no_output_____"
],
[
"def get_games_for_day(df, day):\n \n dfg = df[df.day==day][['home', 'away', 'Unique_Viewers', 'norm_minutes', 'avg_markup_norm']]\n print(dfg.head())",
"_____no_output_____"
],
[
"get_games_for_day(mydf_save, '2018-10-16')",
" home away Unique_Viewers norm_minutes avg_markup_norm\n0 PHI BOS 32430.807999 37.403667 1.323228\n1 OKC GSW 51098.976960 55.213687 2.313532\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c70b95a56958cca5b9c563387bf3b04383f32f | 112,692 | ipynb | Jupyter Notebook | temporal-difference/Expected_SARSA.ipynb | prp1679/Udacity-Deep-Learning-NanoDegree | 0cc3f94db94f8b3d814321a78d22788f2785ea26 | [
"MIT"
] | null | null | null | temporal-difference/Expected_SARSA.ipynb | prp1679/Udacity-Deep-Learning-NanoDegree | 0cc3f94db94f8b3d814321a78d22788f2785ea26 | [
"MIT"
] | 4 | 2020-09-26T00:48:12.000Z | 2022-02-10T01:09:24.000Z | temporal-difference/Expected_SARSA.ipynb | prp1679/Udacity-Deep-Learning-NanoDegree | 0cc3f94db94f8b3d814321a78d22788f2785ea26 | [
"MIT"
] | null | null | null | 345.680982 | 66,028 | 0.923872 | [
[
[
"import sys\nimport gym\nimport numpy as np\nimport random\nimport math\nfrom collections import defaultdict, deque\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport check_test\nfrom plot_utils import plot_values",
"_____no_output_____"
],
[
"#create an instance of CliffWalking environment\nenv = gym.make('CliffWalking-v0')",
"_____no_output_____"
],
[
"print(\"Actions_space: {0}\".format(env.action_space))\nprint(\"State Space: {0}\".format(env.observation_space))\nprint(\"Action Space (env.action_space.n) {0}: \".format(env.action_space.n))",
"Actions_space: Discrete(4)\nState Space: Discrete(48)\nAction Space (env.action_space.n) 4: \n"
],
[
"def epsilon_greedy(Q, state, nA, eps):\n if random.random() > eps:\n return np.argmax(Q[state])\n else:\n return random.choice(np.arange(nA))\n\n\"\"\" \ndef update_Q_expected_sarsa(alpha, gamma, Q, \\\n eps, nA,\\\n state, action, reward, next_state=None):\n \n current = Q[state][action]\n \n #construct an epsilon-greedy policy\n policy_s = np.ones(nA) * ( eps / nA) #epsilon-greedy strategy for equiprobable selection\n policy_s[np.argmax(Q[state])] = 1 - eps + ( eps /nA) # epsilon-greedy strategy for greedy selection\n \n #In case of Expected SARSA , each state_action is multiplied with the probability\n next_reward = np.dot(Q[next_state] , policy_s)\n \n target = reward + gamma * next_reward\n \n new_current_reward = current + (alpha * ( target - current))\n \n return new_current_reward\n\n\"\"\"\n\ndef update_Q_expected_sarsa(alpha, gamma, Q, nA, eps, state,action, reward, next_state=None):\n #print(\"The state is : {0}\".format(state))\n #print(\"The action is : {0}\".format(action))\n current = Q[state][action]\n\n policy_s = get_probs(Q, eps, nA)\n Qsa_next = np.dot(Q[next_state] , policy_s)\n \n target = reward + gamma * Qsa_next\n \n new_value = current + (alpha * (target - current))\n \n return new_value\n \n \n\ndef get_probs(Q, epsilon, nA):\n \"\"\" obtains the action probabilities corresponding to epsilon-greedy policy \"\"\"\n policy_s = np.ones(nA) * (epsilon / nA)\n greedy_action = np.argmax(Q)\n policy_s[greedy_action] = 1 - epsilon + (epsilon / nA)\n return policy_s",
"_____no_output_____"
],
[
"def expected_sarsa(env, num_episodes, alpha, gamma=1.0,plot_every=100):\n # initialize empty dictionary of arrays\n Q = defaultdict(lambda: np.zeros(env.nA))\n \n tmp_scores = deque(maxlen=plot_every)\n avg_scores = deque(maxlen=num_episodes)\n \n \n # loop over episodes\n for i_episode in range(1, num_episodes+1):\n \n # monitor progress\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{}\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n \n eps = 0.05\n state = env.reset()\n score = 0 \n\n \n while True:\n action = epsilon_greedy(Q, state, env.nA, eps )\n \n next_state, reward, done, info = env.step(action)\n score += reward\n Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, env.nA, eps, state,action, reward, next_state)\n state = next_state\n \n if done:\n tmp_scores.append(score) # append score\n break\n \n if (i_episode % plot_every == 0):\n avg_scores.append(np.mean(tmp_scores))\n \n # plot performance\n plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))\n plt.xlabel('Episode Number')\n plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)\n plt.show()\n # print best 100-episode performance\n print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))\n return Q",
"_____no_output_____"
],
[
"def expected_sarsa(env, num_episodes, alpha, gamma=1.0,max_steps_per_episode=100):\n # initialize empty dictionary of arrays\n Q = defaultdict(lambda: np.zeros(env.nA))\n \n tmp_scores = deque(maxlen=max_steps_per_episode)\n avg_scores = deque(maxlen=num_episodes)\n \n \n # loop over episodes\n for i_episode in range(1, num_episodes+1):\n \n # monitor progress\n if i_episode % 100 == 0:\n print(\"\\rEpisode {}/{}\".format(i_episode, num_episodes), end=\"\")\n sys.stdout.flush()\n \n eps = 0.05\n state = env.reset()\n score = 0 \n \n for step in range(max_steps_per_episode): \n action = epsilon_greedy(Q, state, env.nA, eps )\n \n next_state, reward, done, info = env.step(action)\n score += reward\n Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, env.nA, eps, state,action, reward, next_state)\n state = next_state\n \n if done:\n tmp_scores.append(score) # append score\n break\n \n if (i_episode % max_steps_per_episode == 0):\n avg_scores.append(np.mean(tmp_scores))\n \n # plot performance\n plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))\n plt.xlabel('Episode Number')\n plt.ylabel('Average Reward (Over Next %d Episodes)' % max_steps_per_episode)\n plt.show()\n # print best 100-episode performance\n print(('Best Average Reward over %d Episodes: ' % max_steps_per_episode), np.max(avg_scores))\n return Q",
"_____no_output_____"
],
[
"# obtain the estimated optimal policy and corresponding action-value function\nQ_expsarsa = expected_sarsa(env, 10000, 1)\n\n# print the estimated optimal policy\npolicy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)\ncheck_test.run_check('td_control_check', policy_expsarsa)\nprint(\"\\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):\")\nprint(policy_expsarsa)\n\n# plot the estimated optimal state-value function\nplot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])",
"Episode 10000/10000"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c71738d83526fdca007ac4f498d119ec7a7984 | 269,616 | ipynb | Jupyter Notebook | Bagian 4 - Fundamental Pandas/2. Exploratory Data Analysis.ipynb | selmakaramy/Data-warehouse | aad16a6ae3d2bff5f16974a5a738134288c90ad3 | [
"MIT"
] | null | null | null | Bagian 4 - Fundamental Pandas/2. Exploratory Data Analysis.ipynb | selmakaramy/Data-warehouse | aad16a6ae3d2bff5f16974a5a738134288c90ad3 | [
"MIT"
] | null | null | null | Bagian 4 - Fundamental Pandas/2. Exploratory Data Analysis.ipynb | selmakaramy/Data-warehouse | aad16a6ae3d2bff5f16974a5a738134288c90ad3 | [
"MIT"
] | null | null | null | 195.94186 | 89,996 | 0.897732 | [
[
[
"# Secara Visual",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('seaborn')",
"_____no_output_____"
],
[
"iris = pd.read_csv('data/iris.csv')",
"_____no_output_____"
],
[
"iris.shape",
"_____no_output_____"
],
[
"iris.head()",
"_____no_output_____"
],
[
"iris.plot(x='sepal_length', y='sepal_width')",
"_____no_output_____"
],
[
"iris.plot(x='sepal_length', y='sepal_width', kind='scatter')\nplt.xlabel('sepal length (cm)')\nplt.ylabel('sepal width (cm)')",
"_____no_output_____"
],
[
"iris.plot(y='sepal_length', kind='box')\nplt.ylabel('sepal width (cm)')",
"_____no_output_____"
],
[
"iris.plot(y='sepal_length', kind='hist')\nplt.xlabel('sepal length (cm)')",
"_____no_output_____"
],
[
"iris.plot(y='sepal_length', kind='hist', bins=30, range=(4,8), normed=True)\nplt.xlabel('sepal length (cm)')\nplt.show()",
"C:\\Users\\ASUS\\anaconda3\\lib\\site-packages\\pandas\\plotting\\_matplotlib\\hist.py:59: MatplotlibDeprecationWarning: \nThe 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead.\n n, bins, patches = ax.hist(y, bins=bins, bottom=bottom, **kwds)\n"
],
[
"iris.plot(y='sepal_length', kind='hist', bins=30, range=(4,8), cumulative=True, normed=True)\nplt.xlabel('sepal length (cm)')\nplt.title('Cumulative distribution function (CDF)')",
"_____no_output_____"
]
],
[
[
"# Secara Statistik",
"_____no_output_____"
]
],
[
[
"iris.describe()",
"_____no_output_____"
],
[
"iris.count()",
"_____no_output_____"
],
[
"iris['sepal_length'].count()",
"_____no_output_____"
],
[
"iris.mean()",
"_____no_output_____"
],
[
"iris.std()",
"_____no_output_____"
],
[
"iris.median()",
"_____no_output_____"
],
[
"q = 0.5\niris.quantile(q)",
"_____no_output_____"
],
[
"q = [0.25, 0.75]\niris.quantile(q)",
"_____no_output_____"
],
[
"iris.min()",
"_____no_output_____"
],
[
"iris.max()",
"_____no_output_____"
],
[
"iris.plot(kind= 'box')\nplt.ylabel('[cm]')",
"_____no_output_____"
]
],
[
[
"# Filtering",
"_____no_output_____"
]
],
[
[
"iris.head()",
"_____no_output_____"
],
[
"iris['species'].describe()",
"_____no_output_____"
],
[
"iris['species'].value_counts()",
"_____no_output_____"
],
[
"iris['species'].unique()",
"_____no_output_____"
],
[
"index_setosa = iris['species'] == 'setosa'\nindex_versicolor = iris['species'] == 'versicolor'\nindex_virginica = iris['species'] == 'virginica'",
"_____no_output_____"
],
[
"setosa = iris[index_setosa]\nversicolor = iris[index_versicolor]\nvirginica = iris[index_virginica]",
"_____no_output_____"
],
[
"setosa['species'].unique()",
"_____no_output_____"
],
[
"versicolor['species'].unique()",
"_____no_output_____"
],
[
"virginica['species'].unique()",
"_____no_output_____"
],
[
"setosa.head(2)",
"_____no_output_____"
],
[
"versicolor.head(2)",
"_____no_output_____"
],
[
"virginica.head(2)",
"_____no_output_____"
],
[
"iris.plot(kind= 'hist', bins=50, range=(0,8), alpha=0.4)\nplt.title('Entire iris data set')\nplt.xlabel('[cm]')",
"_____no_output_____"
],
[
"setosa.plot(kind='hist', bins=50, range=(0,8), alpha=0.5)\nplt.title('Setosa data set')\nplt.xlabel('[cm]')",
"_____no_output_____"
],
[
"versicolor.plot(kind='hist', bins=50, range=(0,8), alpha=0.5)\nplt.title('Versicolor data set')\nplt.xlabel('[cm]')",
"_____no_output_____"
],
[
"virginica.plot(kind='hist', bins=50, range=(0,8), alpha=0.3)\nplt.title('Virginica data set')\nplt.xlabel('[cm]')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c71d638580a27e184d4ada4d69fd94b58de61d | 2,863 | ipynb | Jupyter Notebook | examples/00-load/create-geometric-objects.ipynb | vtkiorg/vtki-examples | 79e4b1d9915f987cecd2c6b51f94380f5f748cc6 | [
"MIT"
] | null | null | null | examples/00-load/create-geometric-objects.ipynb | vtkiorg/vtki-examples | 79e4b1d9915f987cecd2c6b51f94380f5f748cc6 | [
"MIT"
] | null | null | null | examples/00-load/create-geometric-objects.ipynb | vtkiorg/vtki-examples | 79e4b1d9915f987cecd2c6b51f94380f5f748cc6 | [
"MIT"
] | null | null | null | 31.811111 | 711 | 0.539644 | [
[
[
"%matplotlib inline\nfrom pyvista import set_plot_theme\nset_plot_theme('document')",
"_____no_output_____"
]
],
[
[
"Geometric Objects {#ref_geometric_example}\n=================\n\nThe \\\"Hello, world!\\\" of VTK\n",
"_____no_output_____"
]
],
[
[
"import pyvista as pv",
"_____no_output_____"
]
],
[
[
"This runs through several of the available geometric objects available\nin VTK which PyVista provides simple convenience methods for generating.\n\nLet\\'s run through creating a few geometric objects!\n",
"_____no_output_____"
]
],
[
[
"cyl = pv.Cylinder()\narrow = pv.Arrow()\nsphere = pv.Sphere()\nplane = pv.Plane()\nline = pv.Line()\nbox = pv.Box()\ncone = pv.Cone()\npoly = pv.Polygon()\ndisc = pv.Disc()",
"_____no_output_____"
]
],
[
[
"Now let\\'s plot them all in one window\n",
"_____no_output_____"
]
],
[
[
"p = pv.Plotter(shape=(3, 3))\n# Top row\np.subplot(0, 0)\np.add_mesh(cyl, color=\"tan\", show_edges=True)\np.subplot(0, 1)\np.add_mesh(arrow, color=\"tan\", show_edges=True)\np.subplot(0, 2)\np.add_mesh(sphere, color=\"tan\", show_edges=True)\n# Middle row\np.subplot(1, 0)\np.add_mesh(plane, color=\"tan\", show_edges=True)\np.subplot(1, 1)\np.add_mesh(line, color=\"tan\", line_width=3)\np.subplot(1, 2)\np.add_mesh(box, color=\"tan\", show_edges=True)\n# Bottom row\np.subplot(2, 0)\np.add_mesh(cone, color=\"tan\", show_edges=True)\np.subplot(2, 1)\np.add_mesh(poly, color=\"tan\", show_edges=True)\np.subplot(2, 2)\np.add_mesh(disc, color=\"tan\", show_edges=True)\n# Render all of them\np.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c72e285096a77547222a78b9bbf2fb0779c098 | 4,025 | ipynb | Jupyter Notebook | Robotique/Alan_Collongues/Alan_Collongues_3/traiter.ipynb | ECaMorlaix-2SI-1718/CR | 10a2acac7e476ef22e74745dc5f1792d6c1a9631 | [
"MIT"
] | null | null | null | Robotique/Alan_Collongues/Alan_Collongues_3/traiter.ipynb | ECaMorlaix-2SI-1718/CR | 10a2acac7e476ef22e74745dc5f1792d6c1a9631 | [
"MIT"
] | null | null | null | Robotique/Alan_Collongues/Alan_Collongues_3/traiter.ipynb | ECaMorlaix-2SI-1718/CR | 10a2acac7e476ef22e74745dc5f1792d6c1a9631 | [
"MIT"
] | 2 | 2018-01-25T13:13:33.000Z | 2018-02-01T13:09:28.000Z | 33.823529 | 281 | 0.626087 | [
[
[
"# *bienvenue sur notre page qui presentera la fonction traiter des differents suports robotiques mindstorm*",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### Ici vous trouverez toute les information sur les briques de controle des lego Mindstorm EV3.\n\n",
"_____no_output_____"
],
[
"la brique de controlle du EV3 est un mini aurdinateur sous linux avec un ecran de petite taille qui permet de faire de faire de petite action avec une configuration minimale: \nSystème d'exploitation – LINUX \nProcesseur ARM9 300 MHz \nMémoire flash – 16 Mo \nMémoire vive – 64 Mo \nRésolution de l'écran – 178x128/noir & blanc \nCommunication USB 2.0 vers PC – Jusqu'à 480 Mbit/s \nCommunication USB 1.1 – Jusqu'à 12 Mbit/s \nCarte MicroSD – Compatible SDHC, version 2.0, \nmax. 32 Go \nPorts pour moteurs et capteurs \nConnecteurs – RJ12 \nCompatible Auto ID \nAlimentation – 6 piles AA \n(rechargeables) \n\n### Le fontionnement de la brique de contrôle:\n\nla brique de controlle est programmer par ordinateur et gere toute les info elle même grace a son processeur \nil y a plusieur capteurs qui peuvent etre mis sur le robot ce qui permet un nombre infinit de possibilité de creation \nAvec tout ces capteur le robot peut effectue avec la bonne programmation plusieur action en autonomie\n# la programmation de la brique\n\n### le logiciel :\n\nle lego mindstorm est programmable sur un logiciel spécial fournit gratuitement.\n\nle logiciel est un logiciel de programmation graphique \nil existe un lçogiciel tiers qui s'appele [enchanting](http://enchanting.robotclub.ab.ca/tiki-index.php \"site officiel\")\n",
"_____no_output_____"
],
[
"## la brique de version anterieur la NXT\n\nelle est la petite soeur de la brique EV3 et embarque moins de possibilité tel que le bluetooth pour la programmer avec une tablet ou un smartphone \n\non voit sont anteriorité sur sont logiciel de programmation qui est moins disigne et plus technique. \nles deux logiciel on un points commun c'est que la programmation s'effectue equipement après equipement avec des block comme dans scratch la brique NXT peut aussi être programmé avec le logiciel [enchanting](http://enchanting.robotclub.ab.ca/tiki-index.php \"site officiel\")",
"_____no_output_____"
],
[
"## _le but du projet_ \n \n le but de notre projet est de participer a robofesta qui est une conpetition en 2 épreuve, une epreuve de sauvetage et une épreuve de chorégraphie.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c739c0fa1d81ca3ce6335c726483f546f865c1 | 68,390 | ipynb | Jupyter Notebook | src_yolov5/Untitled.ipynb | diddytpq/Tennisball-Tracking-in-Video | 86fbd156f8f4f2502cd5bb766dfa6b3736e72aa6 | [
"MIT"
] | null | null | null | src_yolov5/Untitled.ipynb | diddytpq/Tennisball-Tracking-in-Video | 86fbd156f8f4f2502cd5bb766dfa6b3736e72aa6 | [
"MIT"
] | null | null | null | src_yolov5/Untitled.ipynb | diddytpq/Tennisball-Tracking-in-Video | 86fbd156f8f4f2502cd5bb766dfa6b3736e72aa6 | [
"MIT"
] | 1 | 2021-09-28T05:12:58.000Z | 2021-09-28T05:12:58.000Z | 28.711167 | 1,615 | 0.483623 | [
[
[
"import cv2\nimport numpy as np\nimport os\nimport math\nfrom scipy.spatial import distance as dist\nfrom collections import OrderedDict\nfrom scipy.optimize import linear_sum_assignment\nfrom kalman_utils.KFilter import *",
"_____no_output_____"
],
[
"from filterpy.kalman import KalmanFilter, UnscentedKalmanFilter, MerweScaledSigmaPoints\nfrom filterpy.common import Q_discrete_white_noise",
"_____no_output_____"
],
[
"a = [[170, 175, 196, 209], [150, 557, 174, 577], [625, 194, 640, 209], [170, 175, 196, 209], [173, 225, 202, 253], [435, 526, 476, 568], [435, 576, 476, 603]]\n\nb = np.array([[170, 175, 196, 209], [150, 557, 174, 577], [625, 194, 640, 209], [170, 175, 196, 209], [173, 225, 202, 253], [435, 526, 476, 568], [435, 576, 476, 603]])\n\nno_ball_box = list(set([tuple(set(i)) for i in a]))",
"_____no_output_____"
],
[
"no_ball_box",
"_____no_output_____"
],
[
"for i in a:\n print(set(i))",
"[170, 175, 196, 209]\n[150, 557, 174, 577]\n[625, 194, 640, 209]\n[170, 175, 196, 209]\n[173, 225, 202, 253]\n[435, 526, 476, 568]\n[435, 576, 476, 603]\n"
],
[
"for i in a:\n print(set(i))",
"{209, 170, 196, 175}\n{577, 174, 557, 150}\n{640, 625, 194, 209}\n{209, 170, 196, 175}\n{225, 202, 173, 253}\n{568, 435, 476, 526}\n{576, 603, 435, 476}\n"
],
[
"new_array = [tuple(row) for row in b]",
"_____no_output_____"
],
[
"c = np.unique(new_array)",
"_____no_output_____"
],
[
"c",
"_____no_output_____"
],
[
"new_array",
"_____no_output_____"
],
[
"['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',\n 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',\n 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',\n 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',\n 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',\n 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',\n 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',\n 'hair drier', 'toothbrush'].index(\"tennis racket\")",
"_____no_output_____"
],
[
"for i in range(640, 1080):\n a = i\n\n \n if ((9 * a)/16).is_integer():\n print(\"x : \",i)\n print(\"y : \",(9 * a)/16)\n print(\"total_pixel : \", a * (9 * a)/16 *2 )\n ",
"x : 640\ny : 360.0\ntotal_pixel : 460800.0\nx : 656\ny : 369.0\ntotal_pixel : 484128.0\nx : 672\ny : 378.0\ntotal_pixel : 508032.0\nx : 688\ny : 387.0\ntotal_pixel : 532512.0\nx : 704\ny : 396.0\ntotal_pixel : 557568.0\nx : 720\ny : 405.0\ntotal_pixel : 583200.0\nx : 736\ny : 414.0\ntotal_pixel : 609408.0\nx : 752\ny : 423.0\ntotal_pixel : 636192.0\nx : 768\ny : 432.0\ntotal_pixel : 663552.0\nx : 784\ny : 441.0\ntotal_pixel : 691488.0\nx : 800\ny : 450.0\ntotal_pixel : 720000.0\nx : 816\ny : 459.0\ntotal_pixel : 749088.0\nx : 832\ny : 468.0\ntotal_pixel : 778752.0\nx : 848\ny : 477.0\ntotal_pixel : 808992.0\nx : 864\ny : 486.0\ntotal_pixel : 839808.0\nx : 880\ny : 495.0\ntotal_pixel : 871200.0\nx : 896\ny : 504.0\ntotal_pixel : 903168.0\nx : 912\ny : 513.0\ntotal_pixel : 935712.0\nx : 928\ny : 522.0\ntotal_pixel : 968832.0\nx : 944\ny : 531.0\ntotal_pixel : 1002528.0\nx : 960\ny : 540.0\ntotal_pixel : 1036800.0\nx : 976\ny : 549.0\ntotal_pixel : 1071648.0\nx : 992\ny : 558.0\ntotal_pixel : 1107072.0\nx : 1008\ny : 567.0\ntotal_pixel : 1143072.0\nx : 1024\ny : 576.0\ntotal_pixel : 1179648.0\nx : 1040\ny : 585.0\ntotal_pixel : 1216800.0\nx : 1056\ny : 594.0\ntotal_pixel : 1254528.0\nx : 1072\ny : 603.0\ntotal_pixel : 1292832.0\n"
],
[
"1276,600",
"_____no_output_____"
],
[
"x_meter2pix = 23.77 / x_pix_length\ny_meter2pix = 10.97 / y_pix_length",
"_____no_output_____"
],
[
"y_pix_length, x_pix_length = 600, 1276",
"_____no_output_____"
],
[
"x_meter2pix * 1500\n810 * y_meter2pix",
"_____no_output_____"
],
[
"(1500 - 1276)/3",
"_____no_output_____"
],
[
"def trans_xy(img_ori, point_list):\n\n for i in range(len(point_list[1])):\n x_cen, y_cen = point_list[1][i]\n\n point_list[1][i][1] = y_cen - (img_ori.shape[0] / 2)\n\n return point_list",
"_____no_output_____"
],
[
"img = np.zeros([752, 423 * 2, 3])\n \ntrans_xy(img,a)",
"_____no_output_____"
],
[
"def cal_ball_position(ball_stats_list):\n \n net_length = 13.11\n post_hegith_avg = 1.125\n \n for i in range(len(ball_stats_list)):\n \n ball_distance_list = ball_stats_list[i][0]\n ball_height_list = ball_stats_list[i][1]\n\n height = sum(ball_height_list) / 2 - post_hegith_avg\n\n if sum(ball_distance_list) < 13:\n return [np.nan, np.nan, np.nan]\n\n ball2net_length_x_L = ball_distance_list[0] * np.sin(theta_L)\n ball_position_y_L = ball_distance_list[0] * np.cos(theta_L)\n\n ball_plate_angle_L = np.arcsin(height / ball2net_length_x_L)\n\n ball_position_x_L = ball2net_length_x_L * np.cos(ball_plate_angle_L)\n\n ball2net_length_x_R = ball_distance_list[1] * np.sin(theta_R)\n ball_position_y_R = ball_distance_list[1] * np.cos(theta_R)\n\n ball_plate_angle_R = np.arcsin(height / ball2net_length_x_R)\n\n ball_position_x_R = ball2net_length_x_R * np.cos(ball_plate_angle_R)\n\n\n \"\"\"print(\"theta_L, theta_R : \", np.rad2deg(self.theta_L), np.rad2deg(self.theta_R))\n print(\"ball_plate_angle_L, ball_plate_angle_R : \", np.rad2deg(ball_plate_angle_L), np.rad2deg(ball_plate_angle_R))\n print([-ball_position_x_L, ball_position_y_L - 6.4, height + 1])\n print([-ball_position_x_R, 6.4 - ball_position_y_R, height + 1])\"\"\"\n\n if theta_L > theta_R:\n ball_position_y = ball_position_y_L - (net_length / 2)\n\n else :\n ball_position_y = (net_length / 2) - ball_position_y_R\n\n return [-ball_position_x_L, ball_position_y, height + post_hegith_avg]\n",
"_____no_output_____"
],
[
"def get_depth_height(L_pos, R_pos):\n \n depth_height = []\n\n cx = 360\n cy = 204\n focal_length = 320.754\n\n net_length = 13.11\n\n post_hegith_left = 1.13\n post_hegith_right = 1.12 \n\n for i in range(len(L_pos)):\n x_L, y_L = L_pos[i][0] - cx, L_pos[i][1] - cy\n\n for j in range(len(R_pos)):\n x_R, y_R = R_pos[j][0] - cx, R_pos[j][1] - cy\n\n\n c_L = np.sqrt(focal_length ** 2 + x_L ** 2 + y_L ** 2)\n a_L = np.sqrt(focal_length ** 2 + x_L ** 2)\n\n if x_L < 0:\n th_L = 0.785398 + np.arccos(focal_length / a_L)\n\n else :\n th_L = 0.785398 - np.arccos(focal_length / a_L)\n\n\n b_L = a_L * np.cos(th_L)\n\n c_R = np.sqrt(focal_length ** 2 + x_R ** 2 + y_R ** 2)\n a_R = np.sqrt(focal_length ** 2 + x_R ** 2)\n\n if x_R > 0:\n th_R = 0.785398 + np.arccos(focal_length / a_R)\n\n else :\n th_R = 0.785398 - np.arccos(focal_length / a_R)\n\n b_R = a_R * np.cos(th_R)\n\n theta_L = np.arccos(b_L/c_L)\n theta_R = np.arccos(b_R/c_R)\n\n\n D_L = net_length * np.sin(theta_R) / np.sin(3.14 - (theta_L + theta_R))\n D_R = net_length * np.sin(theta_L) / np.sin(3.14 - (theta_L + theta_R))\n\n height_L = abs(D_L * np.sin(np.arcsin(y_L/c_L)))\n height_R = abs(D_R * np.sin(np.arcsin(y_R/c_R)))\n\n #height_L = abs(D_L * np.sin(np.arctan(y_L/a_L)))\n #height_R = abs(D_R * np.sin(np.arctan(y_R/a_R)))\n\n if y_L < 0:\n height_L += post_hegith_left\n\n else:\n height_L -= post_hegith_left \n\n\n if y_R < 0:\n height_R += post_hegith_right\n\n else:\n height_R -= post_hegith_right \n\n\n print(L_pos[i],R_pos[j])\n print([D_L, D_R, height_L, height_R])\n depth_height.append([[D_L, D_R], [height_L, height_R]])\n\n return depth_height",
"_____no_output_____"
],
[
"ball_cen_left = [[260, 162]]\nball_cen_right = [[351, 167]]\n\nball_stats_list = get_depth_height(ball_cen_left,ball_cen_right)",
"[260, 162] [351, 167]\n[9.458568396152303, 12.128951367682397, 2.3032568761005505, 2.509357032389613]\n"
],
[
"cal_ball_position(ball_stats_list)",
"_____no_output_____"
],
[
"def get_ball_pos(L_pos, R_pos):\n \n depth_height = []\n\n cx = 360\n cy = 204\n focal_length = 320.754\n\n net_length = 13.11\n\n post_hegith_left = 1.13\n post_hegith_right = 1.12 \n\n post_hegith_avg = (post_hegith_left + post_hegith_right) / 2\n\n for i in range(len(L_pos)):\n x_L, y_L = L_pos[i][0] - cx, L_pos[i][1] - cy\n\n for j in range(len(R_pos)):\n x_R, y_R = R_pos[j][0] - cx, R_pos[j][1] - cy\n\n\n c_L = np.sqrt(focal_length ** 2 + x_L ** 2 + y_L ** 2)\n a_L = np.sqrt(focal_length ** 2 + x_L ** 2)\n\n if x_L < 0:\n th_L = 0.785398 + np.arccos(focal_length / a_L)\n\n else :\n th_L = 0.785398 - np.arccos(focal_length / a_L)\n\n\n b_L = a_L * np.cos(th_L)\n\n c_R = np.sqrt(focal_length ** 2 + x_R ** 2 + y_R ** 2)\n a_R = np.sqrt(focal_length ** 2 + x_R ** 2)\n\n if x_R > 0:\n th_R = 0.785398 + np.arccos(focal_length / a_R)\n\n else :\n th_R = 0.785398 - np.arccos(focal_length / a_R)\n\n b_R = a_R * np.cos(th_R)\n\n theta_L = np.arccos(b_L/c_L)\n theta_R = np.arccos(b_R/c_R)\n\n\n D_L = net_length * np.sin(theta_R) / np.sin(3.14 - (theta_L + theta_R))\n D_R = net_length * np.sin(theta_L) / np.sin(3.14 - (theta_L + theta_R))\n\n height_L = abs(D_L * np.sin(np.arcsin(y_L/c_L)))\n height_R = abs(D_R * np.sin(np.arcsin(y_R/c_R)))\n\n #height_L = abs(D_L * np.sin(np.arctan(y_L/a_L)))\n #height_R = abs(D_R * np.sin(np.arctan(y_R/a_R)))\n\n if y_L < 0:\n height_L += post_hegith_left\n\n else:\n height_L -= post_hegith_left \n\n\n if y_R < 0:\n height_R += post_hegith_right\n\n else:\n height_R -= post_hegith_right \n\n ball_height_list = [height_L, height_R]\n ball_distance_list = [D_L, D_R]\n\n height = sum(ball_height_list) / 2 - post_hegith_avg\n\n ball2net_length_x_L = ball_distance_list[0] * np.sin(theta_L)\n ball_position_y_L = ball_distance_list[0] * np.cos(theta_L)\n\n ball_plate_angle_L = np.arcsin(height / ball2net_length_x_L)\n\n ball_position_x_L = ball2net_length_x_L * np.cos(ball_plate_angle_L)\n\n ball2net_length_x_R = ball_distance_list[1] * np.sin(theta_R)\n ball_position_y_R = ball_distance_list[1] * np.cos(theta_R)\n\n ball_plate_angle_R = np.arcsin(height / ball2net_length_x_R)\n\n ball_position_x_R = ball2net_length_x_R * np.cos(ball_plate_angle_R)\n\n if theta_L > theta_R:\n ball_position_y = ball_position_y_L - (net_length / 2)\n\n else :\n ball_position_y = (net_length / 2) - ball_position_y_R\n\n\n print(L_pos[i],R_pos[j])\n #print([D_L, D_R, height_L, height_R])\n print([-ball_position_x_L, ball_position_y, height + post_hegith_avg])\n\n depth_height.append([[D_L, D_R], [height_L, height_R]])\n\n return [-ball_position_x_L, ball_position_y, height + post_hegith_avg]",
"_____no_output_____"
],
[
"ball_cen_left = [[298, 153]]\nball_cen_right = [[319, 160]]\n\nball_stats_list = get_ball_pos(ball_cen_left,ball_cen_right)",
"[298, 153] [319, 160]\n[-6.66652032789926, -2.0337485082041438, 2.493953152765253]\n"
],
[
"def check_vel_noise():\n\n y_vel_list = np.array(esti_ball_val_list)[:,1]\n\n\n if len(y_vel_list) > 3 :\n\n vel_mean = np.mean(y_vel_list)\n\n if abs(abs(vel_mean) - abs(y_vel_list[-1])) > 2:\n\n vel_mean = np.mean(y_vel_list[:-1])\n esti_ball_val_list[-1][1] = vel_mean\n\n return esti_ball_val_list[-1]\n\n else:\n return esti_ball_val_list[-1]\n",
"_____no_output_____"
],
[
"def cal_landing_point(pos):\n\n t_list = []\n\n #vel = self.check_vel_noise()\n\n x0, y0, z0 = pos[0], pos[1], pos[2]\n vx, vy, vz = vel[0], vel[1], vel[2]\n\n a = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 )\n b = vz\n c = z0\n\n t_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\n t_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\n\n t = max(t_list)\n\n x = np.array(x0 + vx * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 ) * (t ** 2) / 0.057,float)\n y = np.array(y0 + vy * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 ) * (t ** 2) / 0.057,float)\n z = np.array(z0 + vz * t - ((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2) * (t ** 2),float)\n\n return [np.round(x,3), np.round(y,3), np.round(z,3)]",
"_____no_output_____"
],
[
"cal_landing_point(cal_landing_point)",
"_____no_output_____"
],
[
"for i in range(3):\n for j in range(4):\n print(j)\n if j == 3:\n break\n \n ",
"0\n1\n2\n3\n0\n1\n2\n3\n0\n1\n2\n3\n"
],
[
"\nclass Ball_Pos_Estimation():\n\n def __init__(self):\n self.pre_ball_cen_left_list = []\n self.pre_ball_cen_right_list = []\n\n\n\n def check_ball_move_update(self, ball_cen_left_list, ball_cen_right_list):\n\n self.swing_check = True\n\n left_flag = False\n right_flag = False\n\n self.ball_cen_left_list = ball_cen_left_list\n self.ball_cen_right_list = ball_cen_right_list\n\n if len(self.pre_ball_cen_left_list) or len(self.pre_ball_cen_right_list):\n \n for i in range(len(self.ball_cen_left_list)):\n\n if left_flag:\n break \n\n x_cen = self.ball_cen_left_list[i][0]\n \n for j in range(len(self.pre_ball_cen_left_list)):\n\n pre_x_cen = self.pre_ball_cen_left_list[j][0]\n \n if x_cen > pre_x_cen:\n\n self.pre_ball_cen_left_list = self.ball_cen_left_list\n left_flag = True\n break\n \n for i in range(len(self.ball_cen_right_list)):\n\n if right_flag:\n break \n\n x_cen = self.ball_cen_right_list[i][0]\n \n for j in range(len(self.pre_ball_cen_right_list)):\n\n pre_x_cen = self.pre_ball_cen_right_list[j][0]\n\n if x_cen < pre_x_cen:\n\n self.pre_ball_cen_right_list = self.ball_cen_right_list\n right_flag = True\n \n break\n \n if left_flag == False and right_flag == False : \n self.pre_ball_cen_left_list = []\n self.pre_ball_cen_right_list = []\n self.swing_check = False\n return False\n\n return True\n\n else:\n self.pre_ball_cen_left_list = self.ball_cen_left_list\n self.pre_ball_cen_right_list = self.ball_cen_right_list\n\n self.swing_check = False\n\n return True",
"_____no_output_____"
],
[
"estimation_ball = Ball_Pos_Estimation()",
"_____no_output_____"
],
[
"ball_cen_left = [[419, 151]]\nball_cen_right = [[201, 153]]\n\nif estimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right):\n pass\n\nprint(estimation_ball.swing_check)",
"False\n"
],
[
"ball_cen_left = [[392, 160]]\nball_cen_right = [[223, 160]]\n\nif estimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right):\n pass\n\nprint(estimation_ball.swing_check)",
"False\n"
],
[
"ball_cen_left = [[281, 194], [716, 230]]\nball_cen_right = []\n\nestimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right)",
"_____no_output_____"
],
[
"ball_pos_list = [np.nan,np.nan,np.nan]",
"_____no_output_____"
],
[
"np.isnan(ball_pos_list[0])",
"_____no_output_____"
],
[
"a = 2\n\nif a == 1:\n print(1)\n \nelif a == 2:\n print(2)",
"2\n"
],
[
"\n\ndef fx(x, dt):\n # state transition function - predict next state based\n # on constant velocity model x = vt + x_0\n\n F = np.matrix([[1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2, 0.0, 0.0],\n [0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2, 0.0],\n [0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2],\n [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0],\n [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0],\n [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt],\n [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0],\n [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],\n [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]])\n\n return np.dot(F,x)\n\ndef hx(x):\n # measurement function - convert state into a measurement\n # where measurements are [x_pos, y_pos]\n\n \n H = np.matrix([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],\n [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],\n [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])\n \n return np.array([x[0], x[3], x[6]])\n\nclass UK_filter():\n\n def __init__(self, dt, std_acc, x_std_meas, y_std_meas,z_std_meas, init_x, init_y, init_z): \n \n self.init_x = init_x \n self.init_y = init_y\n self.init_z = init_z\n \n self.dt = dt\n self.z_std = 0.1\n\n self.points = MerweScaledSigmaPoints(9, alpha=.1, beta=2., kappa=-1)\n self.f = UnscentedKalmanFilter(dim_x=9, dim_z=3, dt=self.dt, fx=fx, hx=hx, points=self.points)\n\n self.f.x = np.array([self.init_x,0,0,self.init_y,0,0, self.init_z, 0,0])\n\n\n self.f.P = np.eye(9)\n\n self.f.Q = np.matrix([[(dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6, 0, 0],\n [0, (dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6, 0],\n [0, 0, (dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6],\n [(dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2, 0, 0],\n [0, (dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2, 0],\n [0, 0, (dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2],\n [(dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2), 0, 0],\n [0, (dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2), 0],\n [0, 0, (dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2)]]) *std_acc**2\n\n\n #self.f.Q = Q_discrete_white_noise(2, dt = self.dt, var = 0.01**2, block_size = 2)\n\n\n self.f.R = np.array([[x_std_meas**2, 0, 0],\n [0, y_std_meas**2, 0],\n [0, 0, z_std_meas**2]])\n\n #self.f.predict()",
"_____no_output_____"
],
[
"a = UK_filter(dt = 0.1, std_acc = 10, x_std_meas = 1, y_std_meas = 1,z_std_meas = 1,\n init_x = 0, init_y = 0, init_z = 0)",
"_____no_output_____"
],
[
"a.f.predict()",
"_____no_output_____"
],
[
"a.f.update([1,1,1])",
"_____no_output_____"
],
[
"a.f.x.reshape([3,3])",
"_____no_output_____"
],
[
"a.f.update([2,2,2])",
"_____no_output_____"
],
[
"a.f.update([10,10,10])",
"_____no_output_____"
],
[
"a.f.update([20,20,20])",
"_____no_output_____"
],
[
"a.f.update([30,30,30])",
"_____no_output_____"
],
[
"ball_cand_trajectory = [[], [[[-8.499031225886979, 0.311750401425118, 1.404671677765355]]], [[[-7.654383459951477, 0.37339492038427213, 1.528884790477008]]], [[[-6.812550514254101, 0.4358689710714039, 1.6515142171759307]]], [[[-6.005829995239098, 0.5240465335704965, 1.7282085158171043]]], [[[-5.174512338451282, 0.5984737933774342, 1.802634978706663]]], [[[-4.387167345972433, 0.6819632256971389, 1.8761140620728325]]], [[[-3.625711866264724, 0.7480947422039854, 1.9102820296223895]]], [[[-2.8421647085007566, 0.8618655406628255, 1.950691295218776]]], [[[-2.058537458662639, 1.064584863759384, 1.9592570246492662]]], [[]], [[]], [[]]]",
"_____no_output_____"
],
[
"len(ball_cand_trajectory)",
"_____no_output_____"
],
[
"ball_cand_trajectory[3][0][0]",
"_____no_output_____"
],
[
"ball_pos_list = [[-1,0,0], [1,3,4],[2,4,5]]",
"_____no_output_____"
],
[
"b = np.array(ball_pos_list)",
"_____no_output_____"
],
[
"b[:,0].argmin()",
"_____no_output_____"
],
[
"b[:][0]",
"_____no_output_____"
],
[
"a = [0, 0, 0]\nb = [1, 2, 3]\n\n\n\ndef get_distance(point_1, point_2):\n\n return (np.sqrt((point_2[0]-point_1[0])**2 + (point_2[1]-point_1[1])**2 + (point_2[2]-point_1[2])**2))\n",
"_____no_output_____"
],
[
"get_distance(a,b)",
"_____no_output_____"
],
[
"ball_pos_list = [[-7.923465928004007, -0.6755867599611189, 2.580941671512611]]",
"_____no_output_____"
],
[
"x_pos, y_pos, z_pos = ball_pos_list[np.array(ball_pos_list)[:,0].argmin()]\n",
"_____no_output_____"
],
[
"np.array(ball_pos_list)[:,0].argmin()\n",
"_____no_output_____"
],
[
"x_pos, y_pos, z_pos",
"_____no_output_____"
],
[
"a = np.array([[1,2,3],[0,-5,0],[-8,0,0]])\n",
"_____no_output_____"
],
[
"a.append([[1,2],[3,4]])",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"from kalman_utils.KFilter import *",
"_____no_output_____"
],
[
"dT = 1 / 25",
"_____no_output_____"
],
[
"ball_pos = [-9.285799665284836, -1.5959832449913565, 2.874695965876609]",
"_____no_output_____"
],
[
"kf = Kalman_filiter(ball_pos[0], ball_pos[1], ball_pos[2], dT)",
"nstates 7\ntransitionMatrix: shape:(7, 7)\n[[1. 0. 0. 0. 0. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0.]\n [0. 0. 1. 0. 0. 0. 0.]\n [0. 0. 0. 1. 0. 0. 0.]\n [0. 0. 0. 0. 1. 0. 0.]\n [0. 0. 0. 0. 0. 1. 0.]\n [0. 0. 0. 0. 0. 0. 1.]]\nmeasurementMatrix: shape:(3, 7)\n[[1. 0. 0. 0. 0. 0. 0.]\n [0. 1. 0. 0. 0. 0. 0.]\n [0. 0. 1. 0. 0. 0. 0.]]\nprocessNoiseCov: shape:(7, 7)\n[[1.e-06 0.e+00 0.e+00 0.e+00 0.e+00 0.e+00 0.e+00]\n [0.e+00 1.e-06 0.e+00 0.e+00 0.e+00 0.e+00 0.e+00]\n [0.e+00 0.e+00 1.e-06 0.e+00 0.e+00 0.e+00 0.e+00]\n [0.e+00 0.e+00 0.e+00 8.e+00 0.e+00 0.e+00 0.e+00]\n [0.e+00 0.e+00 0.e+00 0.e+00 8.e+00 0.e+00 0.e+00]\n [0.e+00 0.e+00 0.e+00 0.e+00 0.e+00 8.e+00 0.e+00]\n [0.e+00 0.e+00 0.e+00 0.e+00 0.e+00 0.e+00 1.e-06]]\nmeasurementNoiseCov: shape:(3, 3)\n[[1.e-06 0.e+00 0.e+00]\n [0.e+00 1.e-06 0.e+00]\n [0.e+00 0.e+00 1.e-06]]\nstatePost: shape:(7, 1)\n[[-9.2858 ]\n [-1.5959833]\n [ 2.874696 ]\n [ 0.1 ]\n [ 0.1 ]\n [ 0.1 ]\n [ 9.801 ]]\n1111\n"
],
[
"kf.get_predict()",
"_____no_output_____"
],
[
"ball_pos_list = [[-8.3324296128703, -1.426689529754115, 2.8019436403665923],[-7.459506668999277, -1.286501720742379, 2.720148095378055],[-6.540641694266555, -1.135716227096026, 2.6019241593327513],[-5.68329300302514, -0.9730271651650142, 2.519130845990981]]",
"_____no_output_____"
],
[
"kf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : [-8.33243 -1.4266895 2.8019435]\npred predicted : [-8.789115 -1.4913363 2.8583198]\n\n"
],
[
"ball_pos = ball_pos_list.pop(0)\nkf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : [-7.4595065 -1.2865018 2.720148 ]\npred predicted : [-7.4595075 -1.2865019 2.7201471]\n\n"
],
[
"ball_pos = ball_pos_list.pop(0)\nkf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : [-6.540642 -1.1357162 2.6019242]\npred predicted : [-6.5406413 -1.1357162 2.601923 ]\n\n"
],
[
"ball_pos = ball_pos_list.pop(0)\nkf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : [-5.683293 -0.97302717 2.519131 ]\npred predicted : [-5.683293 -0.97302717 2.5191298 ]\n\n"
],
[
"kf.predict(dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : None (only predicting)\npred predicted without meas: [-4.8259444 -0.8103382 0.8681732]\n\n"
],
[
"kf.predict(dT)",
"dT: 0.4000\n---------------------------------------------------\nmeas current : None (only predicting)\npred predicted without meas: [-3.968596 -0.6476492 -2.3509436]\n\n"
],
[
"kf.get_predict()",
"_____no_output_____"
],
[
"kf.KF.getPostState()",
"_____no_output_____"
],
[
"np .array(ball_pos_list)[:,1][:-1]",
"_____no_output_____"
],
[
"np.mean(np.array(ball_pos_list)[:,1][:-1])",
"_____no_output_____"
],
[
"a = [[[-9.039131345617234, -0.8137119203553631, 2.7444703070749017]], [[-7.900829857194978, -0.654493068471937, 2.6104207999239812]], [[-6.849655790366586, -0.5241522117656086, 2.4875451129360613]], [[-5.793390039377407, -0.39658750289812605, 2.3529254544909817]]]\n",
"_____no_output_____"
],
[
"b = np.array(a).reshape([-1,3])\nb",
"_____no_output_____"
],
[
"sum(np.diff(b[:,1])) / 0.04",
"_____no_output_____"
],
[
"for i in []:\n print(1)",
"_____no_output_____"
],
[
"ball_cand_pos = [[-9.03913135, -0.81371192, 2.74447031],\n [-7.90082986, -0.65449307, 2.6104208 ],\n [-6.84965579, -0.52415221, 2.48754511],\n [-5.79339004, -0.3965875 , 2.35292545]]\n\ndel_list = [1,2,3]\n\na= np.array(ball_cand_pos)\n\nnp.delete(a,del_list,axis = 0)",
"_____no_output_____"
],
[
"ball_cand_pos.pop(tuple(del_list))",
"_____no_output_____"
],
[
"tuple(del_list)",
"_____no_output_____"
],
[
"if 3 == 3:\n print(1)",
"1\n"
],
[
"estimation_ball_trajectory_list = np.array([[-8.075992233062022, -2.0712591029119727, 2.0143476038492176], [-7.041614424760605, -1.9654479963268496, 1.998508488845969], [-6.065044696824322, -1.879832764271466, 1.9635872373067076], [-5.107183755941063, -1.7968180783990304, 1.925614986159784], [-4.169832727705863, -1.6838107370497974, 1.884759164010521], [-3.2611958072772738, -1.5644043096331428, 1.81101188769961], [-2.368649316849956, -1.4805807124989778, 1.737028408823873], [-1.4834200661316506, -1.378965700223568, 1.6496367839315553], [-0.7006541039923906, -1.2862463955187806, 1.5580097756344515]])\nestimation_ball_trajectory_list",
"_____no_output_____"
],
[
"x_pos_list = estimation_ball_trajectory_list[:,0]\ny_pos_list = estimation_ball_trajectory_list[:,1]\nz_pos_list = estimation_ball_trajectory_list[:,2]\n",
"_____no_output_____"
],
[
"np.diff(estimation_ball_trajectory_list)",
"_____no_output_____"
],
[
"x_pos_list\nnp.diff(x_pos_list)[-1]",
"_____no_output_____"
],
[
"def cal_landing_point(pos_list):\n\n t_list = []\n\n if len(pos_list) < 4 : return [np.nan, np.nan, np.nan]\n\n pos = pos_list[-1]\n\n x0, y0, z0 = pos[0], pos[1], pos[2]\n\n vx, vy, vz = get_velocity(pos_list)\n\n a = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 )\n b = vz\n c = z0\n\n t_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\n t_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\n\n t = max(t_list)\n \n x = np.array(x0 + vx * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 ) * (t ** 2) / 0.057,float)\n y = np.array(y0 + vy * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 ) * (t ** 2) / 0.057,float)\n z = np.array(z0 + vz * t - ((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2) * (t ** 2),float)\n \n return [np.round(x,3), np.round(y,3), np.round(z,3)]\n\n\ndef get_velocity(pos_list):\n \n t = 1 / 30\n \n np_pos_list = np.array(pos_list)\n\n x_pos_list = np_pos_list[:,0]\n y_pos_list = np_pos_list[:,1]\n z_pos_list = np_pos_list[:,2] \n\n vel_x_list = np.diff(x_pos_list) / t\n vel_y_list = np.diff(y_pos_list) / t\n vel_z_list = np.diff(z_pos_list) / t\n\n return vel_x_list[-1], vel_y_list[-1], vel_z_list[-1]\n",
"_____no_output_____"
],
[
"ball_pos_jrajectory = [[-7.654350032583985, 0.37375046201544926, 1.5602039272816657], [-6.812516855329211, 0.4362023210314696, 1.6809758292885233], [-6.005632695613204, 0.5230876456730043, 1.7703496818844444], [-5.17434312120148, 0.5969135946437563, 1.842121410014573], [-4.40319049584631, 0.6610575247176333, 1.90162399321874], [-3.6256202508366666, 0.7485738124651196, 1.9551274411299535], [-2.8420151676001075, 0.8677493333923483, 1.980619130934988], [-2.058931199304543, 0.9710952377297057, 1.9892246369063118], [-1.3541772365570068, 1.0641107559204102, 1.9812870025634766], [-0.7198931574821472, 1.1478245258331299, 1.9584616422653198], [-0.14903749525547028, 1.223166823387146, 1.9222372770309448]]\nball_pos_jrajectory",
"_____no_output_____"
],
[
"cal_landing_point(ball_pos_jrajectory)",
"_____no_output_____"
],
[
"a = np.array([ -3.1019, -2.3294, -1.513])",
"_____no_output_____"
],
[
"np.mean(np.diff(a))/(1/25)",
"_____no_output_____"
],
[
"np.diff(a)/(1/25)",
"_____no_output_____"
],
[
"x0, y0, z0 = -0.14778512716293335, -3.731870412826538, 2.6436607837677\nvx, vy, vz = 13.437911868095398, 0.15355348587036133, 0.08598566055297852\n\n",
"_____no_output_____"
],
[
"t_list = []\na = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 )\nb = vz\nc = z0\n\nt_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\nt_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a))\n\nt = max(t_list)\n\ndrag_x = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 )\ndrag_y = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 )\ndrag_z = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 )\n\ndrag_x = 0\ndrag_y = 0\ndrag_z = 0\n\nx = np.array(x0 + vx * t - drag_x * (t ** 2) / 0.057,float)\ny = np.array(y0 + vy * t - drag_y * (t ** 2) / 0.057,float)\nz = np.array(z0 + vz * t - (drag_z / 0.057 + 9.8 / 2) * (t ** 2),float)\n\n[np.round(x,3), np.round(y,3), np.round(z,3)]\n",
"_____no_output_____"
],
[
"t_list",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c739ff10cb56ef92e456891fb9154b434919ab | 4,746 | ipynb | Jupyter Notebook | ANALYSIS_06_merge_results.ipynb | initze/thaw_slump_preprocessing | 0a36f381a5bd97cd71535abb6aeee1469b0f6f5e | [
"MIT"
] | null | null | null | ANALYSIS_06_merge_results.ipynb | initze/thaw_slump_preprocessing | 0a36f381a5bd97cd71535abb6aeee1469b0f6f5e | [
"MIT"
] | null | null | null | ANALYSIS_06_merge_results.ipynb | initze/thaw_slump_preprocessing | 0a36f381a5bd97cd71535abb6aeee1469b0f6f5e | [
"MIT"
] | null | null | null | 22.28169 | 162 | 0.525284 | [
[
[
"import geopandas as gpd\nfrom pathlib import Path\nimport pandas as pd\nimport numpy as np\nimport swifter\nimport matplotlib.pyplot as plt\nimport rasterio\nimport datetime\nimport os, shutil\nfrom joblib import delayed, Parallel\nimport tqdm",
"_____no_output_____"
]
],
[
[
"### Settings ",
"_____no_output_____"
]
],
[
[
"# Slumps\nINFERENCE_DIR = Path(r'Q:\\p_aicore_pf\\initze\\processed\\inference\\RTS_6Regions_V01_UnetPlusPlus_resnet34_FocalLoss_sh6_50_bs100_2021-12-04_22-27-53')\n# Pingo\n#INFERENCE_DIR = Path(r'Q:\\p_aicore_pf\\initze\\processed\\inference\\pingo_UnetPP_v1_2021-12-12_09-56-50')\n\nOUTPUT_DIR = Path(r'C:\\Users\\initze\\OneDrive\\100_AI-CORE\\16_inference_statistics')\nout_file = OUTPUT_DIR / f'{INFERENCE_DIR.stem}_merged_datasets.shp'",
"_____no_output_____"
],
[
"print(out_file)",
"_____no_output_____"
],
[
"def get_vector(f):\n gdf = gpd.read_file(f).to_crs(epsg=4326)\n gdf['id_local'] = gdf.index\n gdf['dataset'] = f.stem\n gdf['model'] = f.parts[-2]\n split = f.stem.split('_')\n if len(split)==4:\n gdf[['scene', 'tile_id', 'date', 'sensor']] = split\n else:\n gdf[['date', 'scene', 'sensor']] = split\n return gdf\n\ndef load_dataset(f):\n try:\n return get_vector(f)\n except:\n print(f'Error on {f.stem}')",
"_____no_output_____"
]
],
[
[
"### create filelist",
"_____no_output_____"
]
],
[
[
"flist = list(INFERENCE_DIR.glob('*'))",
"_____no_output_____"
]
],
[
[
"#### Load files and add to list ",
"_____no_output_____"
]
],
[
[
"%time ds_list = Parallel(n_jobs=10)(delayed(load_dataset)(f) for f in tqdm.tqdm_notebook(flist[:]))",
"_____no_output_____"
],
[
"ds_list = []\nfor f in flist[:]:\n try:\n ds_list.append(get_vector(f))\n except:\n print(f'Error on {f.stem}')",
"_____no_output_____"
]
],
[
[
"#### Merge all GDF to one ",
"_____no_output_____"
]
],
[
[
"rdf = gpd.GeoDataFrame( pd.concat( ds_list, ignore_index=True) )",
"_____no_output_____"
]
],
[
[
"#### Set projection (got lost during merge with pandas) ",
"_____no_output_____"
]
],
[
[
"rdf = rdf.set_crs(epsg=4326)",
"_____no_output_____"
]
],
[
[
"#### Calculate time variables for later analysis ",
"_____no_output_____"
]
],
[
[
"rdf['year'] = pd.to_datetime(rdf.iloc[:]['date'], infer_datetime_format=True).dt.year\nrdf['month'] = pd.to_datetime(rdf.iloc[:]['date'], infer_datetime_format=True).dt.month\nrdf['doy'] = pd.to_datetime(rdf.iloc[:]['date'], infer_datetime_format=True).dt.day_of_year",
"_____no_output_____"
]
],
[
[
"#### Write to file",
"_____no_output_____"
]
],
[
[
"rdf.to_file(out_file)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c765689aa54494ab5e23303f1283d2bf164a79 | 17,245 | ipynb | Jupyter Notebook | how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 2 | 2021-06-25T17:45:54.000Z | 2021-06-26T02:38:06.000Z | how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 4 | 2020-08-14T23:21:54.000Z | 2020-08-14T23:34:35.000Z | how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.ipynb | oliverw1/MachineLearningNotebooks | 5080053a3542647d788053ce4e919c9f8efd98e9 | [
"MIT"
] | 3 | 2020-12-02T14:29:29.000Z | 2020-12-03T10:46:00.000Z | 36.613588 | 474 | 0.565497 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved. \nLicensed under the MIT License.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# How to Publish a Pipeline and Invoke the REST endpoint\nIn this notebook, we will see how we can publish a pipeline and then invoke the REST endpoint.",
"_____no_output_____"
],
[
"## Prerequisites and Azure Machine Learning Basics\nIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. \n\n### Initialization Steps",
"_____no_output_____"
]
],
[
[
"import azureml.core\nfrom azureml.core import Workspace, Datastore, Experiment, Dataset\nfrom azureml.core.compute import AmlCompute\nfrom azureml.core.compute import ComputeTarget\n\n# Check core SDK version number\nprint(\"SDK version:\", azureml.core.VERSION)\n\nfrom azureml.data.data_reference import DataReference\nfrom azureml.pipeline.core import Pipeline, PipelineData\nfrom azureml.pipeline.steps import PythonScriptStep\nfrom azureml.pipeline.core.graph import PipelineParameter\n\nprint(\"Pipeline SDK-specific imports completed\")\n\nws = Workspace.from_config()\nprint(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\\n')\n\n# Default datastore (Azure blob storage)\n# def_blob_store = ws.get_default_datastore()\ndef_blob_store = Datastore(ws, \"workspaceblobstore\")\nprint(\"Blobstore's name: {}\".format(def_blob_store.name))",
"_____no_output_____"
]
],
[
[
"### Compute Targets\n#### Retrieve an already attached Azure Machine Learning Compute",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute_target import ComputeTargetException\n\naml_compute_target = \"cpu-cluster\"\ntry:\n aml_compute = AmlCompute(ws, aml_compute_target)\n print(\"found existing compute target.\")\nexcept ComputeTargetException:\n print(\"creating new compute target\")\n \n provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\",\n min_nodes = 1, \n max_nodes = 4) \n aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config)\n aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)\n",
"_____no_output_____"
],
[
"# For a more detailed view of current Azure Machine Learning Compute status, use get_status()\n# example: un-comment the following line.\n# print(aml_compute.get_status().serialize())",
"_____no_output_____"
]
],
[
[
"## Building Pipeline Steps with Inputs and Outputs\nA step in the pipeline can take [dataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) as input. This dataset can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline.",
"_____no_output_____"
]
],
[
[
"# Uploading data to the datastore\ndata_path = def_blob_store.upload_files([\"./20news.pkl\"], target_path=\"20newsgroups\", overwrite=True)",
"_____no_output_____"
],
[
"# Reference the data uploaded to blob storage using file dataset\n# Assign the datasource to blob_input_data variable\nblob_input_data = Dataset.File.from_files(data_path).as_named_input(\"test_data\")\nprint(\"Dataset created\")",
"_____no_output_____"
],
[
"# Define intermediate data using PipelineData\nprocessed_data1 = PipelineData(\"processed_data1\",datastore=def_blob_store)\nprint(\"PipelineData object created\")",
"_____no_output_____"
]
],
[
[
"#### Define a Step that consumes a dataset and produces intermediate data.\nIn this step, we define a step that consumes a dataset and produces intermediate data.\n\n**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** \n\nThe best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step.",
"_____no_output_____"
]
],
[
[
"# trainStep consumes the datasource (Datareference) in the previous step\n# and produces processed_data1\n\nsource_directory = \"publish_run_train\"\n\ntrainStep = PythonScriptStep(\n script_name=\"train.py\", \n arguments=[\"--input_data\", blob_input_data, \"--output_train\", processed_data1],\n inputs=[blob_input_data],\n outputs=[processed_data1],\n compute_target=aml_compute, \n source_directory=source_directory\n)\nprint(\"trainStep created\")",
"_____no_output_____"
]
],
[
[
"#### Define a Step that consumes intermediate data and produces intermediate data\nIn this step, we define a step that consumes an intermediate data and produces intermediate data.\n\n**Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** ",
"_____no_output_____"
]
],
[
[
"# extractStep to use the intermediate data produced by step4\n# This step also produces an output processed_data2\nprocessed_data2 = PipelineData(\"processed_data2\", datastore=def_blob_store)\nsource_directory = \"publish_run_extract\"\n\nextractStep = PythonScriptStep(\n script_name=\"extract.py\",\n arguments=[\"--input_extract\", processed_data1, \"--output_extract\", processed_data2],\n inputs=[processed_data1],\n outputs=[processed_data2],\n compute_target=aml_compute, \n source_directory=source_directory)\nprint(\"extractStep created\")",
"_____no_output_____"
]
],
[
[
"#### Define a Step that consumes multiple intermediate data and produces intermediate data\nIn this step, we define a step that consumes multiple intermediate data and produces intermediate data.",
"_____no_output_____"
],
[
"### PipelineParameter",
"_____no_output_____"
],
[
"This step also has a [PipelineParameter](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.pipelineparameter?view=azure-ml-py) argument that help with calling the REST endpoint of the published pipeline.",
"_____no_output_____"
]
],
[
[
"# We will use this later in publishing pipeline\npipeline_param = PipelineParameter(name=\"pipeline_arg\", default_value=10)\nprint(\"pipeline parameter created\")",
"_____no_output_____"
]
],
[
[
"**Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.**",
"_____no_output_____"
]
],
[
[
"# Now define step6 that takes two inputs (both intermediate data), and produce an output\nprocessed_data3 = PipelineData(\"processed_data3\", datastore=def_blob_store)\nsource_directory = \"publish_run_compare\"\n\ncompareStep = PythonScriptStep(\n script_name=\"compare.py\",\n arguments=[\"--compare_data1\", processed_data1, \"--compare_data2\", processed_data2, \"--output_compare\", processed_data3, \"--pipeline_param\", pipeline_param],\n inputs=[processed_data1, processed_data2],\n outputs=[processed_data3], \n compute_target=aml_compute, \n source_directory=source_directory)\nprint(\"compareStep created\")",
"_____no_output_____"
]
],
[
[
"#### Build the pipeline",
"_____no_output_____"
]
],
[
[
"pipeline1 = Pipeline(workspace=ws, steps=[compareStep])\nprint (\"Pipeline is built\")",
"_____no_output_____"
]
],
[
[
"## Run published pipeline\n### Publish the pipeline",
"_____no_output_____"
]
],
[
[
"published_pipeline1 = pipeline1.publish(name=\"My_New_Pipeline\", description=\"My Published Pipeline Description\", continue_on_step_failure=True)\npublished_pipeline1",
"_____no_output_____"
]
],
[
[
"Note: the continue_on_step_failure parameter specifies whether the execution of steps in the Pipeline will continue if one step fails. The default value is False, meaning when one step fails, the Pipeline execution will stop, canceling any running steps.",
"_____no_output_____"
],
[
"### Publish the pipeline from a submitted PipelineRun\nIt is also possible to publish a pipeline from a submitted PipelineRun",
"_____no_output_____"
]
],
[
[
"# submit a pipeline run\npipeline_run1 = Experiment(ws, 'Pipeline_experiment').submit(pipeline1)\n# publish a pipeline from the submitted pipeline run\npublished_pipeline2 = pipeline_run1.publish_pipeline(name=\"My_New_Pipeline2\", description=\"My Published Pipeline Description\", version=\"0.1\", continue_on_step_failure=True)\npublished_pipeline2",
"_____no_output_____"
]
],
[
[
"### Get published pipeline\n\nYou can get the published pipeline using **pipeline id**.\n\nTo get all the published pipelines for a given workspace(ws): \n```css\nall_pub_pipelines = PublishedPipeline.get_all(ws)\n```",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import PublishedPipeline\n\npipeline_id = published_pipeline1.id # use your published pipeline id\npublished_pipeline = PublishedPipeline.get(ws, pipeline_id)\npublished_pipeline",
"_____no_output_____"
]
],
[
[
"### Run published pipeline using its REST endpoint\n[This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace.",
"_____no_output_____"
]
],
[
[
"from azureml.core.authentication import InteractiveLoginAuthentication\nimport requests\n\nauth = InteractiveLoginAuthentication()\naad_token = auth.get_authentication_header()\n\nrest_endpoint1 = published_pipeline.endpoint\n\nprint(\"You can perform HTTP POST on URL {} to trigger this pipeline\".format(rest_endpoint1))\n\n# specify the param when running the pipeline\nresponse = requests.post(rest_endpoint1, \n headers=aad_token, \n json={\"ExperimentName\": \"My_Pipeline1\",\n \"RunSource\": \"SDK\",\n \"ParameterAssignments\": {\"pipeline_arg\": 45}})",
"_____no_output_____"
],
[
"try:\n response.raise_for_status()\nexcept Exception: \n raise Exception('Received bad response from the endpoint: {}\\n'\n 'Response Code: {}\\n'\n 'Headers: {}\\n'\n 'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content))\n\nrun_id = response.json().get('Id')\nprint('Submitted pipeline run: ', run_id)",
"_____no_output_____"
]
],
[
[
"# Next: Data Transfer\nThe next [notebook](https://aka.ms/pl-data-trans) will showcase data transfer steps between different types of data stores.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0c76930be0a0857f06f6f5a33af9e35d202505c | 210,452 | ipynb | Jupyter Notebook | Mapping/lidar_to_grid_map/lidar_to_grid_map_tutorial.ipynb | Shelfcol/PythonRobotics | 175adfe2ff4b704856c314d269184d1fe7a5d987 | [
"MIT"
] | null | null | null | Mapping/lidar_to_grid_map/lidar_to_grid_map_tutorial.ipynb | Shelfcol/PythonRobotics | 175adfe2ff4b704856c314d269184d1fe7a5d987 | [
"MIT"
] | null | null | null | Mapping/lidar_to_grid_map/lidar_to_grid_map_tutorial.ipynb | Shelfcol/PythonRobotics | 175adfe2ff4b704856c314d269184d1fe7a5d987 | [
"MIT"
] | null | null | null | 659.724138 | 162,432 | 0.94834 | [
[
[
"## LIDAR to 2D grid map example\n\nThis simple tutorial shows how to read LIDAR (range) measurements from a file and convert it to occupancy grid.\n\nOccupancy grid maps (_Hans Moravec, A.E. Elfes: High resolution maps from wide angle sonar, Proc. IEEE Int. Conf. Robotics Autom. (1985)_) are a popular, probabilistic approach to represent the environment. The grid is basically discrete representation of the environment, which shows if a grid cell is occupied or not. Here the map is represented as a `numpy array`, and numbers close to 1 means the cell is occupied (_marked with red on the next image_), numbers close to 0 means they are free (_marked with green_). The grid has the ability to represent unknown (unobserved) areas, which are close to 0.5.\n\n\n\n\nIn order to construct the grid map from the measurement we need to discretise the values. But, first let's need to `import` some necessary packages.",
"_____no_output_____"
]
],
[
[
"import math\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom math import cos, sin, radians, pi",
"_____no_output_____"
]
],
[
[
"The measurement file contains the distances and the corresponding angles in a `csv` (comma separated values) format. Let's write the `file_read` method:",
"_____no_output_____"
]
],
[
[
"def file_read(f):\n \"\"\"\n Reading LIDAR laser beams (angles and corresponding distance data)\n \"\"\"\n measures = [line.split(\",\") for line in open(f)]\n angles = []\n distances = []\n for measure in measures:\n angles.append(float(measure[0]))\n distances.append(float(measure[1]))\n angles = np.array(angles)\n distances = np.array(distances)\n return angles, distances",
"_____no_output_____"
]
],
[
[
"From the distances and the angles it is easy to determine the `x` and `y` coordinates with `sin` and `cos`. \nIn order to display it `matplotlib.pyplot` (`plt`) is used.",
"_____no_output_____"
]
],
[
[
"ang, dist = file_read(\"lidar01.csv\")\nox = np.sin(ang) * dist\noy = np.cos(ang) * dist\nplt.figure(figsize=(6,10))\nplt.plot([oy, np.zeros(np.size(oy))], [ox, np.zeros(np.size(oy))], \"ro-\") # lines from 0,0 to the \nplt.axis(\"equal\")\nbottom, top = plt.ylim() # return the current ylim\nplt.ylim((top, bottom)) # rescale y axis, to match the grid orientation\nplt.grid(True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"The `lidar_to_grid_map.py` contains handy functions which can used to convert a 2D range measurement to a grid map. For example the `bresenham` gives the a straight line between two points in a grid map. Let's see how this works.",
"_____no_output_____"
]
],
[
[
"import lidar_to_grid_map as lg\nmap1 = np.ones((50, 50)) * 0.5\nline = lg.bresenham((2, 2), (40, 30))\nfor l in line:\n map1[l[0]][l[1]] = 1\nplt.imshow(map1)\nplt.colorbar()\nplt.show()",
"_____no_output_____"
],
[
"line = lg.bresenham((2, 30), (40, 30))\nfor l in line:\n map1[l[0]][l[1]] = 1\nline = lg.bresenham((2, 30), (2, 2))\nfor l in line:\n map1[l[0]][l[1]] = 1\nplt.imshow(map1)\nplt.colorbar()\nplt.show()",
"_____no_output_____"
]
],
[
[
"To fill empty areas, a queue-based algorithm can be used that can be used on an initialized occupancy map. The center point is given: the algorithm checks for neighbour elements in each iteration, and stops expansion on obstacles and free boundaries.",
"_____no_output_____"
]
],
[
[
"from collections import deque\ndef flood_fill(cpoint, pmap):\n \"\"\"\n cpoint: starting point (x,y) of fill\n pmap: occupancy map generated from Bresenham ray-tracing\n \"\"\"\n # Fill empty areas with queue method\n sx, sy = pmap.shape\n fringe = deque()\n fringe.appendleft(cpoint)\n while fringe:\n n = fringe.pop()\n nx, ny = n\n # West\n if nx > 0:\n if pmap[nx - 1, ny] == 0.5:\n pmap[nx - 1, ny] = 0.0\n fringe.appendleft((nx - 1, ny))\n # East\n if nx < sx - 1:\n if pmap[nx + 1, ny] == 0.5:\n pmap[nx + 1, ny] = 0.0\n fringe.appendleft((nx + 1, ny))\n # North\n if ny > 0:\n if pmap[nx, ny - 1] == 0.5:\n pmap[nx, ny - 1] = 0.0\n fringe.appendleft((nx, ny - 1))\n # South\n if ny < sy - 1:\n if pmap[nx, ny + 1] == 0.5:\n pmap[nx, ny + 1] = 0.0\n fringe.appendleft((nx, ny + 1))",
"_____no_output_____"
]
],
[
[
"This algotihm will fill the area bounded by the yellow lines starting from a center point (e.g. (10, 20)) with zeros:",
"_____no_output_____"
]
],
[
[
"flood_fill((10, 20), map1)\nmap_float = np.array(map1)/10.0\nplt.imshow(map1)\nplt.colorbar()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Let's use this flood fill on real data:",
"_____no_output_____"
]
],
[
[
"xyreso = 0.02 # x-y grid resolution\nyawreso = math.radians(3.1) # yaw angle resolution [rad]\nang, dist = file_read(\"lidar01.csv\")\nox = np.sin(ang) * dist\noy = np.cos(ang) * dist\npmap, minx, maxx, miny, maxy, xyreso = lg.generate_ray_casting_grid_map(ox, oy, xyreso, False)\nxyres = np.array(pmap).shape\nplt.figure(figsize=(20,8))\nplt.subplot(122)\nplt.imshow(pmap, cmap = \"PiYG_r\") \nplt.clim(-0.4, 1.4)\nplt.gca().set_xticks(np.arange(-.5, xyres[1], 1), minor = True)\nplt.gca().set_yticks(np.arange(-.5, xyres[0], 1), minor = True)\nplt.grid(True, which=\"minor\", color=\"w\", linewidth = .6, alpha = 0.5)\nplt.colorbar()\nplt.show()",
"The grid map is 150 x 100 .\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c76b06ec24ebe819e46b5a2e14eddfe1058722 | 12,878 | ipynb | Jupyter Notebook | Recursion_Udemy.ipynb | DebjitHore/Complete-Data-Structures-and-Algorithms-in-Python | d083eaf7a0de58a6f8f6e1cc5c19d2e11aa3b3d9 | [
"Apache-2.0"
] | null | null | null | Recursion_Udemy.ipynb | DebjitHore/Complete-Data-Structures-and-Algorithms-in-Python | d083eaf7a0de58a6f8f6e1cc5c19d2e11aa3b3d9 | [
"Apache-2.0"
] | null | null | null | Recursion_Udemy.ipynb | DebjitHore/Complete-Data-Structures-and-Algorithms-in-Python | d083eaf7a0de58a6f8f6e1cc5c19d2e11aa3b3d9 | [
"Apache-2.0"
] | null | null | null | 22.672535 | 272 | 0.408216 | [
[
[
"<a href=\"https://colab.research.google.com/github/DebjitHore/Complete-Data-Structures-and-Algorithms-in-Python/blob/main/Recursion_Udemy.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Recursion- A way of solving a problem by having a function call itself. Performing the same operation with different inputs. Usually smaller inputs for convergence, with a base condition to prevent infinite loop. \nIRL example- Russian Doll.",
"_____no_output_____"
],
[
"#Pseudocode",
"_____no_output_____"
]
],
[
[
"def openRussianDoll(doll):\n if doll==1: #Smallest Doll\n print('All Dolls opened')\n else:\n openRussianDoll(doll-1)\n",
"_____no_output_____"
]
],
[
[
"#Recursion logic.\n\nStack Memory is used and last in, first out is followed. \nRecursion is less time and space efficient as against iteration. But it is easier to code.",
"_____no_output_____"
],
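[
"A small illustrative sketch (not from the original course material) of the last-in, first-out behaviour described above: the innermost call returns first, so the \"returning\" messages print in the reverse order of the \"calling\" messages.\n\n```python\ndef countdown(n):\n    print(f\"calling countdown({n})\")\n    if n > 0:\n        countdown(n - 1)   # pushed onto the call stack\n    print(f\"returning from countdown({n})\")  # popped in reverse (LIFO) order\n\ncountdown(3)\n```",
"_____no_output_____"
],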
[
"#How to write Recursion",
"_____no_output_____"
],
[
"##Factorial",
"_____no_output_____"
]
],
[
[
"def factorial(n):\n assert n>=0 and int (n)==n, 'The number must be positive integer only'\n if n in [0,1]:\n return 1\n else:\n return n*factorial(n-1)",
"_____no_output_____"
],
[
"factorial(5)",
"_____no_output_____"
]
],
[
[
"##Fibonnaci",
"_____no_output_____"
],
[
"n-th term of a Fibonnaci series",
"_____no_output_____"
]
],
[
[
"fib_list=[]\ndef fibonacci(n):\n assert n>=0 and int (n) ==n, 'The number must be integer and positive'\n if n in [0,1]:\n return n\n else:\n return fibonacci(n-1)+fibonacci(n-2)",
"_____no_output_____"
],
[
"fibonacci(5)",
"_____no_output_____"
]
],
[
[
"Fibonacci series containing of n terms",
"_____no_output_____"
]
],
[
[
"a=0 \nb=1\nfib_list=[0,1]\nn= int(input('No of terms you want'))\nassert int (n)==n and n>0\ni=0\nif n==1:\n print(0)\nelse:\n while i<n-2:\n temp=a+b\n fib_list.append(temp)\n a=b\n b=temp\n i+=1\n print(fib_list)",
"No of terms you want15\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377]\n"
]
],
[
[
"##Sum of digits of an integer number\n",
"_____no_output_____"
]
],
[
[
"def sumOfDigits(n):\n sum=0\n assert int (n) == n and n>=0, 'Must be a positive integer'\n while n>0:\n return n%10 +sumOfDigits(int (n/10))\n else:\n return 0",
"_____no_output_____"
],
[
"sumOfDigits(546)",
"_____no_output_____"
],
[
"#sumOfDigits(-87)",
"_____no_output_____"
]
],
[
[
"##Power of a number",
"_____no_output_____"
]
],
[
[
"def powerOfNumber(x,n):\n assert int (n) ==n and n >=0\n if n==0:\n return 1\n if n==1:\n return x\n else:\n return x*powerOfNumber(x, n-1)\n ",
"_____no_output_____"
],
[
"powerOfNumber( -2, 5)",
"_____no_output_____"
],
[
"def gcd(a, b):\n assert int (a) == a and int (b) ==b, 'Integer numbers only'\n if a%b==0:\n return b\n else:\n return gcd(b, a%b)",
"_____no_output_____"
],
[
"print(gcd(48,18))",
"6\n"
]
],
[
[
"## Decimal to Binary",
"_____no_output_____"
]
],
[
[
"def decToBinary(n):\n assert int (n) ==n\n if n is 0: \n return 0\n if n is 1:\n return 1\n else:\n return (int (n%2))+10*decToBinary(int (n/2))\n",
"_____no_output_____"
],
[
"decToBinary(13)",
"_____no_output_____"
]
],
[
[
"##Reverse of a number using Recursion",
"_____no_output_____"
]
],
[
[
"def reverseNumber(n, r):\n #assert int (n) == n and n >0\n if n ==0:\n return r\n else:\n return reverseNumber( int ( n/10), r*10+n%10)\n\n",
"_____no_output_____"
],
[
"reverseNumber(647, 0)",
"_____no_output_____"
]
],
[
[
"#Largest number in an Array.",
"_____no_output_____"
]
],
[
[
"def findMaxNumRec( sampleArray, n): #n is the length of array\n if n==1:\n return sampleArray[0]\n else:\n return max(sampleArray[n-1], findMaxNumRec(sampleArray, n-1))\n",
"_____no_output_____"
],
[
"findMaxNumRec([11, 14, 7, 9, 3, 12], 6)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c76b1a4afd0b0b748a8651b3ca32e3d372069b | 810 | ipynb | Jupyter Notebook | main.ipynb | jesusdavizon/Robotica | 1731d95f55df5bd5f8fb91d33a7b0d0551906352 | [
"MIT"
] | null | null | null | main.ipynb | jesusdavizon/Robotica | 1731d95f55df5bd5f8fb91d33a7b0d0551906352 | [
"MIT"
] | null | null | null | main.ipynb | jesusdavizon/Robotica | 1731d95f55df5bd5f8fb91d33a7b0d0551906352 | [
"MIT"
] | null | null | null | 16.2 | 34 | 0.490123 | [
[
[
"print(\"Hola mundo\")",
"Hola mundo\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c76d5cce72cb2843822d8d9230b42961e379f8 | 13,735 | ipynb | Jupyter Notebook | notebooks/Scenario management.ipynb | TemoaProject/PowerGenome | 4cb1e13f032195e39867fe9e26528de1d1b9ec57 | [
"MIT"
] | null | null | null | notebooks/Scenario management.ipynb | TemoaProject/PowerGenome | 4cb1e13f032195e39867fe9e26528de1d1b9ec57 | [
"MIT"
] | null | null | null | notebooks/Scenario management.ipynb | TemoaProject/PowerGenome | 4cb1e13f032195e39867fe9e26528de1d1b9ec57 | [
"MIT"
] | null | null | null | 37.837466 | 614 | 0.616891 | [
[
[
"# Settings/scenario management\n\nCapacity expansion modeling is often an exercise in exploring the difference between different technical, cost, or policy scenarios across a range of planning years, so PowerGenome has a built-in method for creating modified versions of a single baseline scenario. Within the settings file this shows up in how planning periods are defined and a nested dictionary that allows any \"normal\" parameter to be modified for different scenarios.\n\n## Scenario management files\n\nScenario management is deeply build into the input file structure. So much so, in fact, that it might be difficult to create inputs for a single scenario without following the layout designed for multiple scenarios.\n\n### Scenario names\n\nEach scenario has a long name and a short identifier, defined in the `case_id_description_fn` file (`test_case_id_description.csv` in the example). These cases are assumed to be the same across planning periods. When using the command line interface, case folders are created using the format `<case_id>_<model_year>_<case_description>`, so they look something like `p1_2030_Tech_CES_with_RPS`. Case IDs are used in the `scenario_definitions_fn` file (it's `test_scenario_inputs.csv` or `test_scenario_inputs_short.csv` in the example), and the `emission_policies_fn` (`test_rps_ces_emission_limits.csv`).\n\n## Planning periods\n\nWhen running a single planning period, many functions expect the parameters `model_year` and `model_first_planning_year` to be integers (a single year). In a multi-planning period settings file, each of these parameters should be a list of integers and they should be the same length. They now represent a paired series of the first and last years in each of the planning periods to be investigated.\n\n```\nmodel_year: [2030, 2045]\nmodel_first_planning_year: [2020, 2031]\n```\n\nIn this case, planning years of 2030 and 2045 will be investigated. Hourly demand is calculated for planning years. The first year in a planning period is needed because technology costs are calculated as the average of all costs over a planning period. So for the first planning period of 2020-2030, load/demand will be calculated for 2030 and the cost of building a new generator will be the average of all values from 2020-2030.\n\n## Settings management\n\nThe parameter `settings_management` is a nested dictionary with alternative values for any parameters that will be modified as part of a sensitivity, or that might have different values across planning periods. The structure of this dictionary is:\n\n```\nsettings_management:\n <model year>:\n <sensitivity column name>:\n <sensitivity value name>:\n <settings parameter name>:\n <settings parameter value>\n```\n\n`<sensitivity column name>` is the name of a column in the `scenario_definitions_fn` parameter (it's `test_scenario_inputs.csv` in the example). The first columns of this file have a `case_id` and `year` that uniquely define each model run. Model runs might test the effect of different natural gas prices (`ng_price` in the example file), with values of `reference` and `low`. 
The corresponding section of the `settings_management` parameter for the planning year 2030 will look like:\n\n```\nsettings_management:\n 2030:\n ng_price: # <sensitivity column name>\n reference: # <sensitivity value name>\n aeo_fuel_scenarios: # <settings parameter name>\n naturalgas: reference # <settings parameter value>\n low:\n aeo_fuel_scenarios:\n naturalgas: high_resource\n```\nSo in this case we're modifying the settings parameter `aeo_fuel_scenarios` by defining different AEO scenario names for the `naturalgas` fuel type. By default, this section of the settings file looks like:\n\n```\neia_series_scenario_names:\n reference: REF2020\n low_price: LOWPRICE\n high_price: HIGHPRICE\n high_resource: HIGHOGS\n low_resource: LOWOGS\n\naeo_fuel_scenarios:\n coal: reference\n naturalgas: reference\n distillate: reference\n uranium: reference\n```\n\nSo we're changing the AEO case from `reference` to `high_resource` (which correspond to `REF2020` and `HIGHOGS` in the EIA open data API).\n\nIt's important to understand that parameter values are updated by searching for `key:value` pairs in a dictionary and updating them. This means that in the example above I was able to change the AEO scenario for just natural gas prices, and I didn't have to list the other fuel types. But if the `value` is a list and only one item should be changed, then the entire list must be included in `settings_management`. As an example, cost scenarios for new-build generators are usually defined like:\n\n```\n# Format for each list item is <technology>, <tech_detail>, <cost_case>, <size>\natb_new_gen:\n - [NaturalGas, CCCCSAvgCF, Mid, 500]\n - [NaturalGas, CCAvgCF, Mid, 500]\n - [NaturalGas, CTAvgCF, Mid, 100]\n - [LandbasedWind, LTRG1, Mid, 1]\n - [OffShoreWind, OTRG10, Mid, 1]\n - [UtilityPV, LosAngeles, Mid, 1]\n - [Battery, \"*\", Mid, 1]\n```\n\nIf I want to have low cost renewables capex in a scenario, the corresponding section of `settings_management` should include all technologies, even if they don't change. This is because the ATB technologies are defined in a list of lists.\n\n```\nsettings_management:\n 2030:\n renewable_capex:\n low:\n atb_new_gen:\n - [NaturalGas, CCCCSAvgCF, Mid, 500]\n - [NaturalGas, CCAvgCF, Mid, 500]\n - [NaturalGas, CTAvgCF, Mid, 100]\n - [LandbasedWind, LTRG1, Low, 1]\n - [OffShoreWind, OTRG10, Low, 1]\n - [UtilityPV, LosAngeles, Low, 1]\n - [Battery, \"*\", Low, 1]\n````",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"from pathlib import Path\n\nimport pandas as pd\nfrom powergenome.util import (\n build_scenario_settings,\n init_pudl_connection,\n load_settings,\n check_settings\n)",
"_____no_output_____"
]
],
[
[
"## Import settings\n\nSettings are imported by reading the YAML file and converting it to a Python dictionary. In the code below I'm loading the settings and creating a nested dictionary `scenario_settings` that has all of the modified parameters for each case.\n\nSettings can also be checked for some common errors using the `check_settings` function.",
"_____no_output_____"
]
],
[
[
"cwd = Path.cwd()\n\nsettings_path = (\n cwd.parent / \"example_systems\" / \"CA_AZ\" / \"test_settings.yml\"\n)\nsettings = load_settings(settings_path)\nsettings[\"input_folder\"] = settings_path.parent / settings[\"input_folder\"]\nscenario_definitions = pd.read_csv(\n settings[\"input_folder\"] / settings[\"scenario_definitions_fn\"]\n)\nscenario_settings = build_scenario_settings(settings, scenario_definitions)\n\npudl_engine, pudl_out, pg_engine = init_pudl_connection(\n freq=\"AS\",\n start_year=min(settings.get(\"data_years\")),\n end_year=max(settings.get(\"data_years\")),\n)\n\ncheck_settings(settings, pg_engine)",
"_____no_output_____"
]
],
[
[
"We can check to see if the natural gas price has changed from case `p1` to `s1`, and confirm that they are different.",
"_____no_output_____"
]
],
[
[
"scenario_settings[2030][\"p1\"][\"aeo_fuel_scenarios\"]",
"_____no_output_____"
],
[
"scenario_settings[2030][\"s1\"][\"aeo_fuel_scenarios\"]",
"_____no_output_____"
]
],
[
[
"The values of `model_year` and `model_first_planning_year` have also changed from lists to integers.",
"_____no_output_____"
]
],
[
[
"settings[\"model_year\"], settings[\"model_first_planning_year\"]",
"_____no_output_____"
],
[
"scenario_settings[2030][\"p1\"][\"model_year\"], scenario_settings[2030][\"p1\"][\"model_first_planning_year\"]",
"_____no_output_____"
],
[
"scenario_settings[2045][\"p1\"][\"model_year\"], scenario_settings[2045][\"p1\"][\"model_first_planning_year\"]",
"_____no_output_____"
]
],
[
[
"## Scenario data not defined in the settings file\n\nSome case/scenario data is defined in input CSV files rather that the settings YAML file. This is true for demand response (`demand_response_fn`, or the example file `test_ev_load_shifting.csv`). If you are supplying your own hourly demand profiles, it is also true for `regional_load_fn` (`test_regional_load_profiles.csv`). \n\n### Demand response\n\nThe demand response CSV file has 4 header rows, which correspond to the resource type, the model planning year, the scenario name, and the model region. The resource type should match a resource defined in the settings parameter `demand_response_resources`. \n\n```\n# Name of the DSM resource, fraction of load that can be shifted, and number of hours\n# that it can be shifted\ndemand_response_resources:\n 2030:\n ev_load_shifting:\n fraction_shiftable: 0.8\n parameter_values:\n Max_DSM_delay: 5\n DR: 2\n 2045:\n ev_load_shifting:\n fraction_shiftable: 0.8\n parameter_values:\n Max_DSM_delay: 5\n DR: 2\ndemand_response: 'moderate'\n```\n\nThe settings parameter `demand_response` - which can be changed via `settings_management` - is used to select the DR scenario in the CSV file.\n\n\n### User-supplied load\n\nIf you want to use your own load projections, define an input file with the parameter `regional_load_fn`. The first three rows are headers corresponding to the model year, electrification scenario, and model region. The electrification scenario names should match values in the column `electrification` of `scenario_definitions_fn`. This doesn't match with how demand response is handled and may be changed in the future.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c775ce9bcbf003ef8fb9bbe600ba611d50c170 | 283,830 | ipynb | Jupyter Notebook | data analysis/Erdos_Renyi/.ipynb_checkpoints/Ploting Graph_finding intra thres-checkpoint.ipynb | junohpark221/BSc_individual_project | 44f49d3cbb93298880f046551056185b72324d17 | [
"MIT"
] | 1 | 2021-07-04T15:38:52.000Z | 2021-07-04T15:38:52.000Z | data analysis/Erdos_Renyi/Ploting Graph_finding intra thres.ipynb | junohpark221/BSc_individual_project | 44f49d3cbb93298880f046551056185b72324d17 | [
"MIT"
] | null | null | null | data analysis/Erdos_Renyi/Ploting Graph_finding intra thres.ipynb | junohpark221/BSc_individual_project | 44f49d3cbb93298880f046551056185b72324d17 | [
"MIT"
] | null | null | null | 134.325603 | 39,828 | 0.79356 | [
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport statistics\nimport math\n\nfrom sklearn.linear_model import LinearRegression\nfrom scipy.optimize import curve_fit",
"_____no_output_____"
],
[
"er_cas_100_data = pd.read_csv('proc_er_cas_100.csv')\n\ndel er_cas_100_data['Unnamed: 0']",
"_____no_output_____"
],
[
"er_500_50_0012 = pd.read_csv('proc_er_500_50_0012.csv')\n\ndel er_500_50_0012['Unnamed: 0']",
"_____no_output_____"
],
[
"er_1000_50_0006 = pd.read_csv('proc_er_1000_50_0006.csv')\n\ndel er_1000_50_0006['Unnamed: 0']",
"_____no_output_____"
],
[
"er_1500_50_0004 = pd.read_csv('proc_er_1500_50_0004.csv')\n\ndel er_1500_50_0004['Unnamed: 0']",
"_____no_output_____"
],
[
"er_cas_100_data",
"_____no_output_____"
],
[
"er_500_50_0012",
"_____no_output_____"
],
[
"er_1000_50_0006",
"_____no_output_____"
],
[
"er_1500_50_0004",
"_____no_output_____"
],
[
"er_cas_100_dict = {}\n\nfor i in range(100):\n target = list(range(i*30, (i+1)*30))\n \n temp_er_cas_100 = er_cas_100_data[i*30 + 0 : (i+1)*30]\n \n alive = 0\n for index in target:\n if (temp_er_cas_100['alive_nodes'][index] != 0) and (temp_er_cas_100['fin_larg_comp_a'][index] != 0):\n alive += 1\n p_k = 0.8 * 499 * temp_er_cas_100['t'][index]\n \n if i == 0:\n er_cas_100_dict['attack_size'] = [statistics.mean(temp_er_cas_100['attack_size'].values.tolist())]\n er_cas_100_dict['t'] = [statistics.mean(temp_er_cas_100['t'].values.tolist())]\n er_cas_100_dict['init_intra_edge_a'] = [statistics.mean(temp_er_cas_100['init_intra_edge_a'].values.tolist())]\n er_cas_100_dict['alive ratio'] = [alive / 30]\n er_cas_100_dict['p<k>'] = [p_k]\n else:\n er_cas_100_dict['attack_size'].append(statistics.mean(temp_er_cas_100['attack_size'].values.tolist()))\n er_cas_100_dict['t'].append(statistics.mean(temp_er_cas_100['t'].values.tolist()))\n er_cas_100_dict['init_intra_edge_a'].append(statistics.mean(temp_er_cas_100['init_intra_edge_a'].values.tolist()))\n er_cas_100_dict['alive ratio'].append(alive / 30)\n er_cas_100_dict['p<k>'].append(p_k)",
"_____no_output_____"
],
[
"plt.plot(er_cas_100_dict['p<k>'], er_cas_100_dict['alive ratio'])\nplt.title('The ratio that shows whether largest component is alive or not')\nplt.show()",
"_____no_output_____"
],
[
"er_500_50_0012_dict = {}\n\nfor i in range(100):\n target = list(range(i*50, (i+1)*50))\n \n temp_er_500_50_0012 = er_500_50_0012[i*50 + 0 : (i+1)*50]\n \n alive = 0\n for index in target:\n if (temp_er_500_50_0012['alive_nodes'][index] != 0) and (temp_er_500_50_0012['fin_larg_comp_a'][index] != 0):\n alive += 1\n p_k = 0.8 * 499 * temp_er_500_50_0012['t'][index]\n \n if i == 0:\n er_500_50_0012_dict['attack_size'] = [statistics.mean(temp_er_500_50_0012['attack_size'].values.tolist())]\n er_500_50_0012_dict['t'] = [statistics.mean(temp_er_500_50_0012['t'].values.tolist())]\n er_500_50_0012_dict['init_intra_edge_a'] = [statistics.mean(temp_er_500_50_0012['init_intra_edge_a'].values.tolist())]\n er_500_50_0012_dict['alive ratio'] = [alive / 50]\n er_500_50_0012_dict['p<k>'] = [p_k]\n er_500_50_0012_dict['alive_nodes'] = [statistics.mean(temp_er_cas_100['alive_nodes'].values.tolist())]\n else:\n er_500_50_0012_dict['attack_size'].append(statistics.mean(temp_er_500_50_0012['attack_size'].values.tolist()))\n er_500_50_0012_dict['t'].append(statistics.mean(temp_er_500_50_0012['t'].values.tolist()))\n er_500_50_0012_dict['init_intra_edge_a'].append(statistics.mean(temp_er_500_50_0012['init_intra_edge_a'].values.tolist()))\n er_500_50_0012_dict['alive ratio'].append(alive / 50)\n er_500_50_0012_dict['p<k>'].append(p_k)\n er_500_50_0012_dict['alive_nodes'].append(statistics.mean(temp_er_cas_100['alive_nodes'].values.tolist()))",
"_____no_output_____"
],
[
"plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('N=500, K=100')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"proportion of survived largest component\")\nplt.savefig(\"er_n500_k100\")\nplt.show()",
"_____no_output_____"
],
[
"X = er_500_50_0012_dict['p<k>']\nY = er_500_50_0012_dict['log_reg_p<k>']",
"_____no_output_____"
],
[
"def sigmoid(x, L ,x0, k, b):\n y = L / (1 + np.exp(-k*(x-x0)))+b\n return (y)\n\np0 = [max(Y), np.median(X),1,min(Y)] # this is an mandatory initial guess\n\npopt, pcov = curve_fit(sigmoid, X, Y,p0, method='dogbox')",
"_____no_output_____"
],
[
"plt.scatter(X, Y, marker='.')\nplt.plot(X, Y, linewidth=2)\nplt.plot(X, sigmoid(X, *popt), color='red', linewidth=2)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['log_reg_p<k>'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('N=500, K=100')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"percentage of survived largest component\")\nplt.savefig(\"er_n500_k100\")\nplt.show()",
"_____no_output_____"
],
[
"er_1000_50_0006_dict = {}\n\nfor i in range(100):\n target = list(range(i*50, (i+1)*50))\n \n temp_er_1000_50_0006 = er_1000_50_0006[i*50 + 0 : (i+1)*50]\n \n alive = 0\n for index in target:\n if (temp_er_1000_50_0006['alive_nodes'][index] != 0) and (temp_er_1000_50_0006['fin_larg_comp_a'][index] != 0):\n alive += 1\n p_k = 0.8 * 999 * temp_er_1000_50_0006['t'][index]\n \n if i == 0:\n er_1000_50_0006_dict['attack_size'] = [statistics.mean(temp_er_1000_50_0006['attack_size'].values.tolist())]\n er_1000_50_0006_dict['t'] = [statistics.mean(temp_er_1000_50_0006['t'].values.tolist())]\n er_1000_50_0006_dict['init_intra_edge_a'] = [statistics.mean(temp_er_1000_50_0006['init_intra_edge_a'].values.tolist())]\n er_1000_50_0006_dict['alive ratio'] = [alive / 50]\n er_1000_50_0006_dict['p<k>'] = [p_k]\n else:\n er_1000_50_0006_dict['attack_size'].append(statistics.mean(temp_er_1000_50_0006['attack_size'].values.tolist()))\n er_1000_50_0006_dict['t'].append(statistics.mean(temp_er_1000_50_0006['t'].values.tolist()))\n er_1000_50_0006_dict['init_intra_edge_a'].append(statistics.mean(temp_er_1000_50_0006['init_intra_edge_a'].values.tolist()))\n er_1000_50_0006_dict['alive ratio'].append(alive / 50)\n er_1000_50_0006_dict['p<k>'].append(p_k)",
"_____no_output_____"
],
[
"plt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('N=1000, K=200')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"proportion of survived largest component\")\nplt.savefig(\"er_n1000_k200\")\nplt.show()",
"_____no_output_____"
],
[
"er_1500_50_0004_dict = {}\n\nfor i in range(100):\n target = list(range(i*50, (i+1)*50))\n \n temp_er_1500_50_0004 = er_1500_50_0004[i*50 + 0 : (i+1)*50]\n \n alive = 0\n for index in target:\n if (temp_er_1500_50_0004['alive_nodes'][index] != 0) and (temp_er_1500_50_0004['fin_larg_comp_a'][index] != 0):\n alive += 1\n p_k = 0.8 * 1499 * temp_er_1500_50_0004['t'][index]\n \n if i == 0:\n er_1500_50_0004_dict['attack_size'] = [statistics.mean(temp_er_1500_50_0004['attack_size'].values.tolist())]\n er_1500_50_0004_dict['t'] = [statistics.mean(temp_er_1500_50_0004['t'].values.tolist())]\n er_1500_50_0004_dict['init_intra_edge_a'] = [statistics.mean(temp_er_1500_50_0004['init_intra_edge_a'].values.tolist())]\n er_1500_50_0004_dict['alive ratio'] = [alive / 50]\n er_1500_50_0004_dict['p<k>'] = [p_k]\n else:\n er_1500_50_0004_dict['attack_size'].append(statistics.mean(temp_er_1500_50_0004['attack_size'].values.tolist()))\n er_1500_50_0004_dict['t'].append(statistics.mean(temp_er_1500_50_0004['t'].values.tolist()))\n er_1500_50_0004_dict['init_intra_edge_a'].append(statistics.mean(temp_er_1500_50_0004['init_intra_edge_a'].values.tolist()))\n er_1500_50_0004_dict['alive ratio'].append(alive / 50)\n er_1500_50_0004_dict['p<k>'].append(p_k)",
"_____no_output_____"
],
[
"plt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('N=1500, K=300')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"proportion of survived largest component\")\nplt.savefig(\"er_n1500_k300\")\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio'])\nplt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio'])\nplt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('Total Graph (Expanded)')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"proportion of survived largest component\")\nplt.legend(['N=500', 'N=1000', 'N=1500'])\nplt.savefig(\"er_total_expanded\")\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio'])\nplt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio'])\nplt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio'])\nplt.axvline(x=2.4554, color='r', linestyle='--')\nplt.title('Total Graph')\nplt.xlabel(\"p<k>\")\nplt.ylabel(\"proportion of survived largest component\")\nplt.legend(['N=500', 'N=1000', 'N=1500'])\nplt.xlim([2.36, 2.5])\nplt.savefig(\"er_total\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c776569ddf8f785943ac6561e226fc165246e6 | 5,971 | ipynb | Jupyter Notebook | notebooks/9. World Tree (Delete).ipynb | maniacalbrain/ck2-social-networks | c0a890aca1507fb9d3bdd04fe5a776ee76f5e4c3 | [
"MIT"
] | 23 | 2017-12-04T16:40:21.000Z | 2021-11-19T08:17:09.000Z | notebooks/9. World Tree (Delete).ipynb | maniacalbrain/ck2-social-networks | c0a890aca1507fb9d3bdd04fe5a776ee76f5e4c3 | [
"MIT"
] | null | null | null | notebooks/9. World Tree (Delete).ipynb | maniacalbrain/ck2-social-networks | c0a890aca1507fb9d3bdd04fe5a776ee76f5e4c3 | [
"MIT"
] | 1 | 2017-12-05T10:55:51.000Z | 2017-12-05T10:55:51.000Z | 23.415686 | 312 | 0.467426 | [
[
[
"from pymongo import MongoClient\nimport pandas as pd\nimport datetime",
"_____no_output_____"
],
[
"client = MongoClient()\ncharacters = client.ck2.characters",
"_____no_output_____"
]
],
[
[
"This notebook tries to build a world tree by drawing and edge between every character in the save file with their father and mother. Running this code will generate a network with over 270,000 nodes out of a total of almost 400,000. This was taking far too long for Gephi to graph so I did not continue.\n\nThe next 3 notebooks contain the first code I wrote for extracting data from the save file. I wild manually copy and paste out the dynasty data from both files, the character data and the title data and save them in seperate files.",
"_____no_output_____"
],
[
"## Get Parent/Child Edges",
"_____no_output_____"
]
],
[
[
"pipeline = [\n {\n \"$unwind\" : \"$parents\" \n },\n {\n \"$lookup\" :\n {\n \"from\" : \"dynasties\",\n \"localField\" : \"dnt\",\n \"foreignField\" : \"_id\",\n \"as\" : \"dynasty\"\n }\n },\n {\n \"$unwind\" : \"$dynasty\"\n },\n {\n \"$match\" : {\"parents\" : {\"$nin\" : [None]}, \"$or\" : [{\"cul\" : \"irish\"}, {\"dynasty.culture\" : \"irish\"}]}\n },\n {\n \"$project\" : {\"_id\" : 1, \"parents\" : 1}\n }\n]",
"_____no_output_____"
],
[
"relation_df = pd.DataFrame(list(characters.aggregate(pipeline)))",
"_____no_output_____"
]
],
[
[
"## Get all Characters",
"_____no_output_____"
]
],
[
[
"pipeline = [\n {\n \"$lookup\" :\n {\n \"from\" : \"dynasties\",\n \"localField\" : \"dnt\",\n \"foreignField\" : \"_id\",\n \"as\" : \"dynasty\"\n }\n },\n {\n \"$unwind\" : \"$dynasty\"\n },\n {\n \"$project\" : {\"_id\" : 1, \"name\" : {\"$concat\" : [\"$bn\", \" \", \"$dynasty.name\"]}, \n \"culture\" : {\"$ifNull\" : [\"$cul\", \"$dynasty.culture\"]}, \n \"religion\" : {\"$ifNull\" : [\"$rel\", \"$dynasty.religion\"]} }\n }\n]",
"_____no_output_____"
]
],
[
[
"## Build Network",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"chars = list(characters.aggregate(pipeline))",
"_____no_output_____"
],
[
"for char in chars:\n for key in list(char.keys()):\n val = char[key]\n if isinstance(val, type(None)):\n del char[key]",
"_____no_output_____"
],
[
"G = nx.Graph()\n\nfor char in chars: #characters.aggregate(pipeline):\n if \"culture\" in char and \"religion\" in char and \"name\" in char:\n G.add_node(char[\"_id\"], name = char['name'], culture = char['culture'], religion = char['religion'])",
"_____no_output_____"
],
[
"relation_df = relation_df.dropna(axis=0, how='any')\n\nfor i in range(len(relation_df)):\n G.add_edge(relation_df.loc[i, \"_id\"], relation_df.loc[i, \"parents\"])\n\nG.remove_nodes_from(nx.isolates(G)) #drop unconnected nodes ",
"_____no_output_____"
],
[
"#nx.draw(G)\n#plt.show()",
"_____no_output_____"
],
[
"nx.write_graphml(max(nx.connected_component_subgraphs(G), key=len), \"ck2-World-Tree-2.graphml\")",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c77c8284a7a5be5584794a93fb22311771b030 | 3,693 | ipynb | Jupyter Notebook | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_07_extract_speed_data_filter_dict_journal_merge_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_07_extract_speed_data_filter_dict_journal_merge_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 01_INRIX_data_preprocessing_journal18/INRIX_data_preprocessing_07_extract_speed_data_filter_dict_journal_merge_Apr_MD.ipynb | jingzbu/InverseVITraffic | c0d33d91bdd3c014147d58866c1a2b99fb8a9608 | [
"MIT"
] | null | null | null | 28.19084 | 91 | 0.578662 | [
[
[
"import json\n%run ../Python_files/util.py\n\n# tmc_ref_speed_dict.keys\n\n# AM: 7:00 am - 9:00 am\n# MD: 11:00 am - 13:00 pm\n# PM: 17:00 pm - 19:00 pm\n# NT: 21:00 pm - 23:00 pm\n\ndata_folder = '/home/jzh/INRIX/All_INRIX_2012_filtered_journal/'",
"No dicts found; please check load_dicts...\n"
],
[
"month = 4\n\n# Load JSON data\ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 1)\nwith open(input_file, 'r') as json_file:\n temp_dict_1 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 2)\nwith open(input_file, 'r') as json_file:\n temp_dict_2 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 3)\nwith open(input_file, 'r') as json_file:\n temp_dict_3 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 4)\nwith open(input_file, 'r') as json_file:\n temp_dict_4 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 5)\nwith open(input_file, 'r') as json_file:\n temp_dict_5 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 6)\nwith open(input_file, 'r') as json_file:\n temp_dict_6 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 7)\nwith open(input_file, 'r') as json_file:\n temp_dict_7 = json.load(json_file)\n \ninput_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' %(month, 8)\nwith open(input_file, 'r') as json_file:\n temp_dict_8 = json.load(json_file)",
"_____no_output_____"
],
[
"temp_dict_1.update(temp_dict_2)\ntemp_dict_1.update(temp_dict_3)\ntemp_dict_1.update(temp_dict_4)\ntemp_dict_1.update(temp_dict_5)\ntemp_dict_1.update(temp_dict_6)\ntemp_dict_1.update(temp_dict_7)\ntemp_dict_1.update(temp_dict_8)",
"_____no_output_____"
],
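[
"# A compact, loop-based sketch of the eight load-and-update steps above\n# (illustrative only; month, data_folder and the file-name pattern are the ones\n# defined in the previous cells, merged_MD_dict is a new illustrative name).\nmerged_MD_dict = {}\nfor part in range(1, 9):\n    part_file = data_folder + 'filtered_month_%s_%s_MD_dict_journal.json' % (month, part)\n    with open(part_file, 'r') as json_file:\n        merged_MD_dict.update(json.load(json_file))",
"_____no_output_____"
],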
[
"# Writing JSON data\ninput_file_MD = data_folder + 'filtered_month_%s_MD_dict_journal.json' %(month)\nwith open(input_file_MD, 'w') as json_file_MD:\n json.dump(temp_dict_1, json_file_MD)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0c7975064cd48dbccc7a12adc9efb6686b310ed | 5,530 | ipynb | Jupyter Notebook | TSP_problem.ipynb | C-Joey/YALMIP_LEARNING | 394d6937b75ab40b57e12b23057e17fac3f18111 | [
"MIT"
] | 4 | 2019-10-23T00:34:09.000Z | 2021-06-06T03:06:59.000Z | TSP_problem.ipynb | C-Joey/YALMIP_LEARNING | 394d6937b75ab40b57e12b23057e17fac3f18111 | [
"MIT"
] | null | null | null | TSP_problem.ipynb | C-Joey/YALMIP_LEARNING | 394d6937b75ab40b57e12b23057e17fac3f18111 | [
"MIT"
] | null | null | null | 5,530 | 5,530 | 0.532007 | [
[
[
"% 利用yamlip求解TSP问题\nclear;clc;close all;\nd = load('tsp_dist_matrix.txt')'; %导入邻接矩阵\nn = size(d,1);",
"_____no_output_____"
],
[
"% 决策变量\nx = binvar(n,n,'full');\nu = sdpvar(1,n);",
"_____no_output_____"
]
],
[
[
"* [.*](https://zhidao.baidu.com/question/318809970.html)\n 数组加法:A+B,数组加法和矩阵加法相同。\n\n 数组减法:A-B ,数组减法和矩阵减法相同。\n\n 数组乘法:A.*B,A 和 B 的元素逐个对应相乘,两数组之间必须有相同的形,或其中一个是标量。\n\n 矩阵乘法:A*B,A 和 B 的矩阵乘法,A 的列数必须和 B 的行数相同。\n\n 数组右除法:A./B,A 和 B 的元素逐个对应相除:A(i,j)/B(i,j)两数组之间必须有相同的形,或其中一个是标量。\n\n 数组左除法:A.\\B,A 和 B 的元素逐个对应相除:B(i,j)/A(i,j)两数组之间必须有相同的形,或其中一个是标量。\n\n 矩阵右除法:A/B 矩阵除法,等价于 A*inv(B), inv(B)是 B 的逆阵。\n\n 矩阵左除法:A\\B 矩阵除法,等价于 inv(B)*A, inv(A)是 A 的逆阵。\n\n 数组指数运算:A.^B,AB中的元素逐个进行如下运算:A(i,j)^B(i,j),A(i,j)/B(i,j)两数组之间必须有相同的形,或其中一个是标量。",
"_____no_output_____"
]
],
[
[
"% 目标 \nz = sum(sum(d.*x));",
"_____no_output_____"
]
],
[
[
"\\begin{aligned} \\min Z=& \\sum_{i=1}^{n} \\sum_{j=1}^{n} d_{i j} x_{i j} \\\\ \n&\\left(\\begin{array}{cc}{\\sum_{i=1, i \\neq j}^{n} x_{i j}=1,} & {j=1, \\cdots, n} \\\\ \n{\\sum_{j=1, j \\neq i}^{n} x_{i j}=1,} & {i=1, \\cdots, n} \\\\ \n{u_{i}-u_{j}+n x_{i j} \\leq n-1,} & { \\quad 1<i \\neq j \\leq n}\\\\\n{x_{i j}=\\{0 , 1\\},} & {i, j=1, \\cdots, n} \\\\ \n{u_{i}\\in\\mathbb{R},} & {i=1, \\cdots, n}\\end{array}\\right.\\end{aligned}",
"_____no_output_____"
],
[
"- '>'(大于),>=(大于等于),<(小于),<=(小于等于), ==(等于)~=(不等于)\n* matlab逻辑符号:\n &(与),|(或),~(非), xor(异或)",
"_____no_output_____"
]
],
[
[
"% 约束添加\nC = [];\nfor j = 1:n\n s = sum(x(:,j))-x(j,j);\n C = [C, s == 1];\nend\nfor i = 1:n\n s = sum(x(i,:)) - x(i,i);\n C = [C, s == 1];\nend\nfor i = 2:n\n for j = 2:n\n if i~=j\n C = [C,u(i)-u(j) + n*x(i,j)<=n-1];\n end\n end\nend",
"\n"
],
[
"% 求解 假如不把z加进去,则能得到一个可行解\nresult= optimize(C,z);",
"警告: 文件: C:\\Program Files\\IBM\\ILOG\\CPLEX_Studio_Community128\\cplex\\matlab\\x64_win64\\@Cplex\\Cplex.p 行: 965 列: 0\n在嵌套函数中定义 \"changedParam\" 会将其与父函数共享。在以后的版本中,要在父函数和嵌套函数之间共享 \"changedParam\",请在父函数中显式定义它。\n> In cplexoptimset\n In sdpsettings>setup_cplex_options (line 617)\n In sdpsettings (line 145)\n In solvesdp (line 131)\n In optimize (line 31)\nOptimize a model with 92 rows, 99 columns and 396 nonzeros\nVariable types: 9 continuous, 90 integer (90 binary)\nCoefficient statistics:\n Matrix range [1e+00, 1e+01]\n Objective range [3e+00, 3e+01]\n Bounds range [1e+00, 1e+00]\n RHS range [1e+00, 9e+00]\nFound heuristic solution: objective 117.0000000\nPresolve removed 0 rows and 1 columns\nPresolve time: 0.00s\nPresolved: 92 rows, 98 columns, 548 nonzeros\nVariable types: 8 continuous, 90 integer (90 binary)\n\nRoot relaxation: objective 7.460000e+01, 40 iterations, 0.00 seconds\n\n Nodes | Current Node | Objective Bounds | Work\n Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time\n\n 0 0 74.60000 0 10 117.00000 74.60000 36.2% - 0s\nH 0 0 77.0000000 74.60000 3.12% - 0s\n 0 0 cutoff 0 77.00000 77.00000 0.00% - 0s\n\nCutting planes:\n Learned: 3\n MIR: 3\n\nExplored 1 nodes (47 simplex iterations) in 0.04 seconds\nThread count was 8 (of 8 available processors)\n\nSolution count 2: 77 117 \n\nOptimal solution found (tolerance 1.00e-04)\nBest objective 7.700000000000e+01, best bound 7.700000000000e+01, gap 0.0000%\n\n"
],
[
"% 求解\nif result.problem == 0\n value(x)\n value(z)\nelse\n disp('求解过程中出错');\nend",
"\nans =\n\n NaN 0 0 1 0 0 0 0 0 0\n 0 NaN 0 0 0 0 1 0 0 0\n 0 1 NaN 0 0 0 0 0 0 0\n 0 0 1 NaN 0 0 0 0 0 0\n 0 0 0 0 NaN 1 0 0 0 0\n 0 0 0 0 0 NaN 0 1 0 0\n 0 0 0 0 1 0 NaN 0 0 0\n 0 0 0 0 0 0 0 NaN 0 1\n 1 0 0 0 0 0 0 0 NaN 0\n 0 0 0 0 0 0 0 0 1 NaN\n\n\nans =\n\n 77\n\n\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c799519f35848f82f52e5400e8be63efa82129 | 8,668 | ipynb | Jupyter Notebook | notebooks/11_python_basics.ipynb | mpkato/iam | 36f7a40294161be10ac550b124d65a3638cade15 | [
"MIT"
] | null | null | null | notebooks/11_python_basics.ipynb | mpkato/iam | 36f7a40294161be10ac550b124d65a3638cade15 | [
"MIT"
] | null | null | null | notebooks/11_python_basics.ipynb | mpkato/iam | 36f7a40294161be10ac550b124d65a3638cade15 | [
"MIT"
] | null | null | null | 21.245098 | 114 | 0.439778 | [
[
[
"# Pythonの基礎",
"_____no_output_____"
],
[
"## データ型",
"_____no_output_____"
]
],
[
[
"# [変数名] = [値] と書くと,その変数名を持った変数に値を代入できる.代入された値は後で利用したりすることができる.\n\nx = 1 # 整数\ny = 2.1 # 実数\nz = 2 * x + y # 四則演算 (×→*, ÷→/)\nprint(z) # print(x)でxを表示.print(x, y, ...)ともかけて,xとyにスペースが入って出力される",
"4.1\n"
],
[
"str1 = \"これは文字列\"\nstr2 = 'これも文字列'\nstr3 = \"\"\"\nこのように書くと\n複数行に渡る文字列\nを表現できる\n\"\"\"\nstr4 = '''\nこちらでも\n良い\n'''\nstr5 = str1 + str2 # 文字列の+は文字列の結合\nprint(str5)",
"これは文字列これも文字列\n"
],
[
"arr1 = [1, 2, 3] # リスト\narr2 = [3, 4, 5]\narr3 = arr1 + arr2 # リストの+はリストの結合\nprint(arr3)\narr4 = [\"文字列\", \"文字列\"] # 何のリストであっても良い\narr5 = [1, \"文字列\"] # 文字と数字が入り乱れても良い\n\n# arr[i] でarrリストのi+1番目の値を返す.iがマイナスの場合,末尾から数えて-i番目の値を返す\nfirst = arr1[0] # 1番目の値を返す\nprint(first) \nsecond = arr1[1] # 2番目の値を返す\nprint(second) \nlast = arr1[-1] # 後ろから1番目の値を返す\nprint(last)\n\n# arr[i:j]でi+1番目以降,j+1番目より前の値をリストで返す.iとjは省略可能で,省略されるとそれぞれ「最初から」,「最後まで」と解釈される.\nprint(arr1[1:]) # 2番目以降の値をリストで返す\nprint(arr1[:2]) # 3番目より前の値をリストで返す\nprint(arr3[1:3]) # 1番目以降,4番目より前の値をリストで返す\n\n# 文字列も配列のようにアクセス可能\ns = \"あいうえお\"\nprint(s[1])\nprint(s[1:3])\n\n# arr.append(x) で 配列arrの末尾に要素xを追加する\narr = [1, 2]\narr.append(3)\nprint(arr)",
"[1, 2, 3, 3, 4, 5]\n1\n2\n3\n[2, 3]\n[1, 2]\n[2, 3]\nい\nいう\n[1, 2, 3]\n"
],
[
"dict1 = {\"a\": 1, \"b\": 3, \"c\": 4} # 辞書.{key1: val1, key2: val2, ...}と書くと辞書を作れる.\n# 辞書はキーと値からなる順序付けされていないデータ\n# 1つのキーに対応する値は1つのみ\nprint(dict1[\"a\"]) # dict[key]とすれば,dict辞書のkeyに対応した値を返す\ndict2 = {} # 空の辞書も作れる\ndict2[\"a\"] = 2 # dict[key] = valueとすれば,dict辞書のkeyに値valueを対応させられる\ndict2[\"b\"] = 10\nprint(dict2)\ndict3 = {1: \"b\", 3: \"d\"} # keyは数値,文字列などを用いることが可能,リストや辞書は不可.valueは何でも良い.",
"1\n{'b': 10, 'a': 2}\n"
],
[
"# ブール型.True or False\nbool1 = True\nbool2 = False \nbool3 = 1 == 2 # 1 == 2はFalseとなる\nbool4 = 5 > 2 # 5 > 2はTrueとなる\nbool5 = 4 in [1, 2, 3] # x in AはAがリストの場合Aにxが含まれればTrue\nprint(bool5)\nbool6 = 4 in {1: \"a\", 3: \"b\", 4: \"c\"} # Aが辞書の場合,Aのキーにxが含まれればTrue\nprint(bool6)\nbool7 = \"きく\" in \"かきくけこ\" # Aが文字列の場合,xがAの部分文字列ならTrue\nprint(bool7)\n\n# ブール型は特にif文で利用される(詳細は後述)\nd = {\"a\": 1, \"b\": 2}\nif \"a\" in d:\n print(\"The condition was True\")\nelse:\n print(\"The condition was False\")",
"False\nTrue\nTrue\nThe condition was True\n"
]
],
[
[
"## 制御構文",
"_____no_output_____"
]
],
[
[
"\"\"\"\nif condition:\n # conditionがTrueのときにここが実行される\n else:\n # conditionがFalseのときにここが実行される(else:以降は省略可能)\n\"\"\"\n# 長いコメントは\"\"\"や'''で書ける\n\nx = 3\nif x > 2:\n print(\"x > 2\")\nelse:\n print(\"x <= 2\")\n\nif x == 3:\n print(\"x == 3\")\n\n# Pythonでインデントは構文の一部!! \n# インデントが増えた箇所から元のインデント数に戻るまで,インデント数が同じ箇所は同じコードブロックと見なされる.\n# コードブロック: if文やfor文,関数定義の影響が及ぼされる範囲\n\nif x < 2: # インデント0\n print(\"x < 2\") # インデント1\nelse: # インデント0\n print(\"x >= 2\") # インデント1\n print(\"This line was also executed\") # インデント1 この行もx < 2でない場合にのみ実行される\n \nif x / 3 == 1: # インデント0\n print(\"x / 3 == 1\") # インデント1\nelse: # インデント0\n print(\"x / 3 != 1\") # インデント1\nprint(\"This line is executed whatever the condition is\") # インデント0 この行は直前の行と同じコードブロックではない",
"x > 2\nx == 3\nx >= 2\nThis line was also executed\nx / 3 == 1\nThis line is executed whatever the condition is\n"
],
[
"\"\"\"\nfor x in arr:\n # リストarrの各要素xに対して,この箇所が実行される\n\"\"\"\n\nfor x in [0, 1, 2]:\n print(x * 2)\n\nfor x in range(4): # range(x)で(厳密には違うが)xより小さい整数までの配列が得られる\n print(x * 2 + 2)\n\nfor x in range(1, 4): # range(x, y)で(厳密には違うが)xからyより小さい整数までの配列が得られる\n print(x * 3)\n\nfor i, x in enumerate([3, 4, 5]): # for i, x in enumerate(arr)とすれば,iに0, 1, ...というように何番目の繰り返しかを表す整数が代入される.\n print(i, x)",
"0\n2\n4\n2\n4\n6\n8\n3\n6\n9\n0 3\n1 4\n2 5\n"
],
[
"\"\"\"\nwhile condition:\n # conditionがTureである間,この箇所が実行される\n\"\"\"\n\ni = 0\nwhile i < 3:\n print(i)\n i = i + 1 # i += 1とも書ける",
"0\n1\n2\n"
],
[
"\"\"\"\ntry:\n # 例外が起こりそうな操作\nexcept:\n # 例外が起こったときに実行される\n\"\"\"\n\ntry:\n z = \"文字\" + 1 # 文字列と数値は足せない\nexcept:\n print(\"例外が発生しました\")\n\ntry:\n z = \"文字\" + 1 # 文字列と数値は足せない\nexcept Exception as e: # 変数eに例外が代入される\n print(e)",
"例外が発生しました\nCan't convert 'int' object to str implicitly\n"
]
],
[
[
"## 関数",
"_____no_output_____"
]
],
[
[
"\"\"\"\ndef function_name(var1, var2, ...):\n # 関数の中身. e.g. x = var1 + var2\n return x\n# 引数var1, var2, ...を与えて,xを返す関数function_nameを定義できる\n\"\"\"\n\ndef square(x):\n result = x * x\n return result\n\ny = square(2)\nprint(y)",
"4\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c79e9e24fc6c50bb284a6a76c3b59db7db4e6a | 670,657 | ipynb | Jupyter Notebook | MAPS/Mooers_Logbook/Fully_Convolutional_W/Latent_Space_Animation_No_Average_Fully_Conv.ipynb | gmooers96/CBRAIN-CAM | c5a26e415c031dea011d7cb0b8b4c1ca00751e2a | [
"MIT"
] | null | null | null | MAPS/Mooers_Logbook/Fully_Convolutional_W/Latent_Space_Animation_No_Average_Fully_Conv.ipynb | gmooers96/CBRAIN-CAM | c5a26e415c031dea011d7cb0b8b4c1ca00751e2a | [
"MIT"
] | null | null | null | MAPS/Mooers_Logbook/Fully_Convolutional_W/Latent_Space_Animation_No_Average_Fully_Conv.ipynb | gmooers96/CBRAIN-CAM | c5a26e415c031dea011d7cb0b8b4c1ca00751e2a | [
"MIT"
] | 5 | 2019-09-30T20:17:13.000Z | 2022-03-01T07:03:30.000Z | 992.096154 | 233,148 | 0.952666 | [
[
[
"import numpy as np\nimport itertools\nimport math\nimport scipy\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport matplotlib.patches as patches\nfrom matplotlib import animation\nfrom matplotlib import transforms\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\nimport xarray as xr\nimport dask\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import AgglomerativeClustering\nimport pandas as pd\nimport netCDF4",
"_____no_output_____"
],
[
"def latent_space_analysis(Images, title, iden):\n mean_image = np.mean(Images, axis=0)\n var_image = np.std(Images, axis=0)\n cmap=\"RdBu_r\"\n fig, ax = plt.subplots(1,2, figsize=(16,2))\n cs0 = ax[0].imshow(var_image, cmap=cmap)\n ax[0].set_title(\"Image Standard Deviation\")\n cs1 = ax[1].imshow(mean_image, cmap=cmap)\n ax[1].set_title(\"Image Mean\")\n ax[0].set_ylim(ax[0].get_ylim()[::-1])\n ax[1].set_ylim(ax[1].get_ylim()[::-1])\n ax[1].set_xlabel(\"CRMs\")\n ax[0].set_xlabel(\"CRMs\")\n ax[0].set_ylabel(\"Pressure\")\n ax[1].set_yticks([])\n y_ticks = np.arange(1300, 0, -300)\n ax[0].set_yticklabels(y_ticks)\n ax[1].set_yticklabels(y_ticks)\n divider = make_axes_locatable(ax[0])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs0, cax=cax)\n divider = make_axes_locatable(ax[1])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs1, cax=cax)\n plt.suptitle(title)\n #plt.savefig(\"/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/model_graphs/latent_space_components/\"+iden+'_'+title+'.png')\n ",
"_____no_output_____"
],
[
"z_test_tsne = np.load(\"/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Synoptic_Latent_Spaces/2D_PCA_Latent_Space__31.npy\")\nTest_Images = np.load(\"/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_W_Test.npy\")\nMax_Scalar = np.load(\"/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_Max_Scalar.npy\")\nMin_Scalar = np.load(\"/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_Min_Scalar.npy\")\nTest_Images = np.interp(Test_Images, (0, 1), (Min_Scalar, Max_Scalar))",
"_____no_output_____"
],
[
"plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c=\"#3D9AD1\", s=0.1)\nplt.show()",
"_____no_output_____"
],
[
"horz_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,1] > -8.1, z_test_tsne[:,1] < -7.90)))\nvert_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,0] > -12.30, z_test_tsne[:,0] < -11.70)))\n#horz_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,1] > -8.005, z_test_tsne[:,1] < -7.995)))\n#vert_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,0] > -12.025, z_test_tsne[:,0] < -11.975)))",
"_____no_output_____"
],
[
"horz_line_images = Test_Images[horz_line,:,:]\nhorz_line_latent = z_test_tsne[horz_line,:]\n\nvert_line_images = Test_Images[vert_line,:,:]\nvert_line_latent = z_test_tsne[vert_line,:]\n\nhorz_line_images_sorted = np.empty(horz_line_images.shape)\nhorz_line_latent_sorted = np.empty(horz_line_latent.shape)\nvert_line_images_sorted = np.empty(vert_line_images.shape)\nvert_line_latent_sorted = np.empty(vert_line_latent.shape)",
"_____no_output_____"
],
[
"count = 0\nfor i in range(len(horz_line_images_sorted)):\n ind = np.nanargmin(horz_line_latent[:,0])\n horz_line_images_sorted[count,:] = horz_line_images[ind,:]\n horz_line_latent_sorted[count,:] = horz_line_latent[ind,:]\n horz_line_latent[ind,:] = np.array([1000.0,1000.0])\n #horz_line_images[ind,:] = np.array([1000.0,1000.0])\n count = count+1\n \ncount = 0\nfor i in range(len(vert_line_images_sorted)):\n ind = np.nanargmin(vert_line_latent[:,1])\n vert_line_images_sorted[count,:] = vert_line_images[ind,:]\n vert_line_latent_sorted[count,:] = vert_line_latent[ind,:]\n vert_line_latent[ind,:] = np.array([10000.0,10000.0])\n #vert_line_image[ind,:] = np.array([1000.0,1000.0])\n count = count+1\n ",
"_____no_output_____"
],
[
"print(np.where(z_test_tsne == horz_line_latent_sorted[0]))\nprint(np.where(z_test_tsne == horz_line_latent_sorted[-1]))\nprint(np.where(z_test_tsne == vert_line_latent_sorted[0]))\nprint(np.where(z_test_tsne == vert_line_latent_sorted[-1]))",
"(array([4781, 4781]), array([0, 1]))\n(array([536, 536]), array([0, 1]))\n(array([20826, 20826]), array([0, 1]))\n(array([18929, 18929]), array([0, 1]))\n"
],
[
"plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c=\"#3D9AD1\", s=2.0)\nplt.scatter(x=horz_line_latent_sorted[:, 0], y=horz_line_latent_sorted[:, 1], c=\"Red\", s=2.0)\nplt.scatter(x=vert_line_latent_sorted[:, 0], y=vert_line_latent_sorted[:, 1], c=\"Purple\", s=2.0)\nplt.show()",
"_____no_output_____"
],
[
"print(horz_line_latent_sorted.shape)\nprint(vert_line_latent_sorted.shape)",
"(23, 2)\n(36, 2)\n"
],
[
"path = \"/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-20-00000.nc\"\nextra_variables = xr.open_dataset(path)\nha = extra_variables.hyai.values\nhb = extra_variables.hybi.values\nPS = 1e5\nPressures_real = PS*ha+PS*hb\n\nfz = 15\nlw = 4\nsiz = 100\nXNNA = 1.25 # Abscissa where architecture-constrained network will be placed\nXTEXT = 0.25 # Text placement\nYTEXT = 0.3 # Text placement\n\nplt.rc('text', usetex=False)\nmatplotlib.rcParams['mathtext.fontset'] = 'stix'\nmatplotlib.rcParams['font.family'] = 'STIXGeneral'\n#mpl.rcParams[\"font.serif\"] = \"STIX\"\nplt.rc('font', family='serif', size=fz)\nmatplotlib.rcParams['lines.linewidth'] = lw\n\nothers = netCDF4.Dataset(\"/fast/gmooers/Raw_Data/extras/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-01-72000.nc\")\nlevs = np.array(others.variables['lev'])\nnew = np.flip(levs)\ncrms = np.arange(1,129,1)\nXs, Zs = np.meshgrid(crms, new)",
"_____no_output_____"
],
[
"horz_line_latent_sorted = np.flip(horz_line_latent_sorted, axis=0)\nvert_line_latent_sorted = np.flip(vert_line_latent_sorted, axis=0)\nhorz_line_images_sorted = np.flip(horz_line_images_sorted, axis=0)\nvert_line_images_sorted = np.flip(vert_line_images_sorted, axis=0)",
"_____no_output_____"
],
[
"# change vx/vy to location on sorted images\ndef mikes_latent_animation(h_coords, v_coords, h_const, v_const, latent_space, xdist, ydist, X, Z, hline, vline, h_images, v_images):\n fig, ax = plt.subplots(2,2, figsize=(36,16))\n feat_list = []\n #the real total you need\n num_steps = len(h_coords)\n #num_steps = 20\n cmap= \"RdBu_r\"\n \n dummy_horz = np.zeros(shape=(30,128))\n dummy_horz[:,:] = np.nan\n dummy_vert = np.zeros(shape=(30,128))\n dummy_vert[:,:] = np.nan\n count = 29\n for i in range(num_steps):\n \n for j in range(len(dummy_horz)):\n dummy_horz[count,:] = h_images[i,j,:]\n if i <= len(v_coords) -1:\n dummy_vert[count,:] = v_images[i,j,:]\n else:\n dummy_vert[count,:] = v_images[-1,j,:]\n count = count-1\n \n h_rect = patches.Rectangle((h_coords[i],h_const),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n if i <= len(v_coords) -1:\n v_rect = patches.Rectangle((v_const,v_coords[i]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n else:\n v_rect = patches.Rectangle((v_const,v_coords[-1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n \n \n \n ax[0,0].scatter(latent_space[:, 0], latent_space[:, 1], c=\"#3D9AD1\", s=0.4, animated=True)\n ax[0,0].scatter(x=hline[:, 0], y=hline[:, 1], c=\"Red\", s=2.0, animated=True)\n cs0 = ax[0,0].add_patch(h_rect)\n \n cs2 = ax[1,0].scatter(latent_space[:, 0], latent_space[:, 1], c=\"#3D9AD1\", s=0.4, animated=True)\n ax[1,0].scatter(x=vline[:, 0], y=vline[:, 1], c=\"Green\", s=2.0, animated=True)\n cs2 = ax[1,0].add_patch(v_rect)\n \n \n cs3 = ax[1,1].pcolor(X, Z, dummy_vert, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0)\n ax[1,1].set_title(\"(y) Vertical Velocity\", fontsize=fz*2.0)\n cs1 = ax[0,1].pcolor(X, Z, dummy_horz, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0)\n ax[0,1].set_title(\"(x) Vertical Velocity\", fontsize=fz*2.0)\n \n ax[0,1].set_xlabel(\"CRMs\", fontsize=fz*1.5)\n ax[1,1].set_xlabel(\"CRMs\", fontsize=fz*1.5)\n ax[0,1].set_ylabel(\"Pressure (hpa)\", fontsize=fz*1.5)\n ax[1,1].set_ylabel(\"Pressure (hpa)\", fontsize=fz*1.5)\n \n y_ticks = np.array([1000, 800, 600, 400, 200])\n ax[1,1].set_yticklabels(y_ticks)\n ax[0,1].set_yticklabels(y_ticks)\n \n divider = make_axes_locatable(ax[1,1])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs1, cax=cax)\n divider = make_axes_locatable(ax[0,1])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs1, cax=cax)\n \n feat_list.append([cs2, cs3, cs1, cs0])\n \n\n count = 29 \n \n ani = animation.ArtistAnimation(fig, feat_list, interval = 125, blit = False, repeat = True)\n ani.save('/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Animations/Figures/31_W_Axis_Test_Horz_Vert_500.mp4')\n plt.show()\n \n \nmikes_latent_animation(horz_line_latent_sorted[:,0], vert_line_latent_sorted[:,1], -8.0, -12.0, z_test_tsne, 0.2, 1, Xs, Zs, horz_line_latent_sorted, vert_line_latent_sorted, horz_line_images_sorted, vert_line_images_sorted)",
"_____no_output_____"
],
[
"x0, y0 = -30, -14 # These are in _pixel_ coordinates!!\nx1, y1 = 10, 32\nlength = int(np.hypot(x1-x0, y1-y0))\nx, y = np.linspace(x0, x1, 8*length), np.linspace(y0, y1, 8*length)\n\nx2, y2 = -38, -12 # These are in _pixel_ coordinates!!\nx3, y3 = 115, 5\nlength = int(np.hypot(x3-x2, y3-y2))\nx3, y3 = np.linspace(x2, x3, 3*length), np.linspace(y2, y3, 3*length)",
"_____no_output_____"
],
[
"top_line = np.zeros(shape=(len(x3),2))\nshallow_line = np.zeros(shape=(len(x),2))\ntop_line[:,0] = x3\ntop_line[:,1] = y3\nshallow_line[:,0] = x\nshallow_line[:,1] = y",
"_____no_output_____"
],
[
"plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c=\"#3D9AD1\", s=0.1)\nplt.scatter(x=top_line[:, 0], y=top_line[:, 1], c=\"red\", s=0.1)\nplt.scatter(x=shallow_line[:, 0], y=shallow_line[:, 1], c=\"green\", s=0.1)\nplt.show()",
"_____no_output_____"
],
[
"shallow_line.shape",
"_____no_output_____"
],
[
"z_test_tsne_saved = np.zeros(z_test_tsne.shape)\nfor i in range(len(z_test_tsne)):\n z_test_tsne_saved[i,:] = z_test_tsne[i,:]",
"_____no_output_____"
],
[
"def list_maker(original_array, latant_space, image_dataset):\n new_list = np.empty(original_array.shape)\n value_list =np.empty(latant_space[:,0].shape)\n new_images = np.empty(shape=(len(original_array),30,128))\n for i in range(len(original_array)):\n temp_x = original_array[i,0]\n temp_y = original_array[i,1]\n for j in range(len(latant_space)):\n #value_list[j] = np.abs(temp_x-latant_space[j,0])+np.abs(temp_y-latant_space[j,1])\n value_list[j] = np.sqrt((temp_x-latant_space[j,0])**2+(temp_y-latant_space[j,1])**2)\n point = np.argmin(value_list)\n new_list[i,:] = latant_space[point]\n new_images[i,:,:] = image_dataset[point,:,:]\n latant_space[point] = np.array([100000,100000])\n value_list[:] = np.nan\n \n \n return new_list, new_images\n \n \n \naxis_list, axis_images = list_maker(top_line, z_test_tsne, Test_Images)\nshallow_list, shallow_images = list_maker(shallow_line, z_test_tsne, Test_Images)",
"_____no_output_____"
],
[
"plt.scatter(x=z_test_tsne_saved[:, 0], y=z_test_tsne_saved[:, 1], c=\"#3D9AD1\", s=0.1)\nplt.scatter(x=top_line[:, 0], y=top_line[:, 1], c=\"red\", s=0.5)\nplt.scatter(x=axis_list[:, 0], y=axis_list[:, 1], c=\"green\", s=0.5)\nplt.scatter(x=shallow_line[:, 0], y=shallow_line[:, 1], c=\"red\", s=0.5)\nplt.scatter(x=shallow_list[:, 0], y=shallow_list[:, 1], c=\"green\", s=0.5)",
"_____no_output_____"
],
[
"print(shallow_line.shape)\nprint(axis_list.shape)\nprint(shallow_list.shape)\nprint(top_line.shape)",
"(60, 2)\n(153, 2)\n(60, 2)\n(153, 2)\n"
],
[
"shallow_line = np.flip(shallow_line, axis=0)\naxis_list = np.flip(axis_list, axis=0)\nshallow_list = np.flip(shallow_list, axis=0)\ntop_line = np.flip(top_line, axis=0)\naxis_images = np.flip(axis_images, axis=0)\nshallow_images = np.flip(shallow_images, axis=0)",
"_____no_output_____"
],
[
"# change vx/vy to location on sorted images\ndef rotated_latent_animation(h_coords, v_coords, latent_space, xdist, ydist, X, Z, h_images, v_images, hline, vline):\n fig, ax = plt.subplots(2,2, figsize=(36,16))\n feat_list = []\n #the real total you need\n num_steps = len(h_coords)\n #num_steps = 10\n cmap= \"RdBu_r\"\n \n dummy_horz = np.zeros(shape=(30,128))\n dummy_horz[:,:] = np.nan\n dummy_vert = np.zeros(shape=(30,128))\n dummy_vert[:,:] = np.nan\n count = 29\n for i in range(num_steps):\n \n for j in range(len(dummy_horz)):\n dummy_horz[count,:] = h_images[i,j,:]\n if i <= len(v_coords) -1:\n dummy_vert[count,:] = v_images[i,j,:]\n else:\n dummy_vert[count,:] = v_images[-1,j,:]\n count = count-1\n \n h_rect = patches.Rectangle((h_coords[i,0],h_coords[i,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n if i <= len(v_coords) -1:\n v_rect = patches.Rectangle((v_coords[i,0],v_coords[i,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n else:\n v_rect = patches.Rectangle((v_coords[-1,0],v_coords[-1,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none')\n \n \n \n ax[0,0].scatter(latent_space[:, 0], latent_space[:, 1], c=\"#3D9AD1\", s=0.4, animated=True)\n ax[0,0].scatter(x=hline[:, 0], y=hline[:, 1], c=\"Red\", s=2.0, animated=True)\n cs0 = ax[0,0].add_patch(h_rect)\n \n cs2 = ax[1,0].scatter(latent_space[:, 0], latent_space[:, 1], c=\"#3D9AD1\", s=0.4, animated=True)\n ax[1,0].scatter(x=vline[:, 0], y=vline[:, 1], c=\"Green\", s=2.0, animated=True)\n cs2 = ax[1,0].add_patch(v_rect)\n \n \n cs3 = ax[1,1].pcolor(X, Z, dummy_vert, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0)\n #ax[1,1].set_title(\"(y) Shallow convection\", fontsize=fz*2.0)\n cs1 = ax[0,1].pcolor(X, Z, dummy_horz, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0)\n #ax[0,1].set_title(\"(x) Deep Convection\", fontsize=fz*2.0)\n \n ax[0,1].set_xlabel(\"CRMs\", fontsize=fz*1.5)\n ax[1,1].set_xlabel(\"CRMs\", fontsize=fz*1.5)\n ax[0,1].set_ylabel(\"Pressure (hpa)\", fontsize=fz*1.5)\n ax[1,1].set_ylabel(\"Pressure (hpa)\", fontsize=fz*1.5)\n \n y_ticks = np.array([1000, 800, 600, 400, 200])\n ax[1,1].set_yticklabels(y_ticks)\n ax[0,1].set_yticklabels(y_ticks)\n \n divider = make_axes_locatable(ax[1,1])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs1, cax=cax)\n divider = make_axes_locatable(ax[0,1])\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(cs1, cax=cax)\n \n feat_list.append([cs2, cs3, cs1, cs0])\n \n\n count = 29 \n \n ani = animation.ArtistAnimation(fig, feat_list, interval = 125, blit = False, repeat = True)\n ani.save('/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Animations/Figures/31_W_Diagonals_500.mp4')\n plt.show()\n \n \nrotated_latent_animation(top_line, shallow_line, z_test_tsne_saved, 0.4, 1.5, Xs, Zs, axis_images, shallow_images, axis_list, shallow_list)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c7b30841a3d9d552e76e8b5217237d76ba506f | 121,181 | ipynb | Jupyter Notebook | 1D-series/Model-2 (Random Forest).ipynb | umitkacar/1D-Ensemble | 2977932eaccd1ba060081743099a34510deab32c | [
"MIT"
] | 2 | 2020-09-12T11:47:01.000Z | 2021-11-04T03:42:12.000Z | 1D-series/Model-2 (Random Forest).ipynb | umitkacar/1D-Ensemble | 2977932eaccd1ba060081743099a34510deab32c | [
"MIT"
] | null | null | null | 1D-series/Model-2 (Random Forest).ipynb | umitkacar/1D-Ensemble | 2977932eaccd1ba060081743099a34510deab32c | [
"MIT"
] | null | null | null | 43.340844 | 30,292 | 0.560797 | [
[
[
"# General\nfrom os import path\nfrom random import randrange\n\nfrom sklearn.model_selection import train_test_split, GridSearchCV #cross validation\nfrom sklearn.metrics import confusion_matrix, plot_confusion_matrix, make_scorer\nfrom sklearn.metrics import accuracy_score, roc_auc_score, balanced_accuracy_score\n\nfrom sklearn.preprocessing import LabelEncoder\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport xgboost as xgb\nfrom sklearn.ensemble import RandomForestClassifier\n\nimport pickle\nimport joblib \n",
"_____no_output_____"
]
],
[
[
"## TRAIN SET",
"_____no_output_____"
]
],
[
[
"trainDataFull = pd.read_csv(\"trainData.csv\")\ntrainDataFull.head(3)",
"_____no_output_____"
],
[
"trainDataFull.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 61878 entries, 0 to 61877\nColumns: 104 entries, v1 to target\ndtypes: float64(103), int64(1)\nmemory usage: 49.1 MB\n"
],
[
"trainDataFull.describe()",
"_____no_output_____"
],
[
"trainData = trainDataFull.loc[:,'v1':'v99']\ntrainData.head(3)",
"_____no_output_____"
],
[
"trainLabels = trainDataFull.loc[:,'target']\ntrainLabels.unique()",
"_____no_output_____"
],
[
"# encode string class values as integers\nlabel_encoder = LabelEncoder()\nlabel_encoder = label_encoder.fit(trainLabels)\nlabel_encoded_y = label_encoder.transform(trainLabels)\nlabel_encoded_y",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = train_test_split(trainData.values, \n label_encoded_y, \n test_size = 0.3, \n random_state = 33,\n shuffle = True,\n stratify = label_encoded_y)",
"_____no_output_____"
]
],
[
[
"## MODEL-2 (Random Forest Classifier)",
"_____no_output_____"
]
],
[
[
"RFC_model = RandomForestClassifier(n_estimators=800,\n verbose=2,\n random_state=0,\n criterion='gini')\nRFC_model",
"_____no_output_____"
],
[
"RFC_model.fit(X_train, y_train)",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.1s remaining: 0.0s\n"
],
[
"# make predictions for test data\ny_pred = RFC_model.predict(X_test)\ny_pred",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n[Parallel(n_jobs=1)]: Done 800 out of 800 | elapsed: 2.8s finished\n"
],
[
"predictions = [round(value) for value in y_pred]",
"_____no_output_____"
],
[
"# evaluate predictions\naccuracy = accuracy_score(y_test, predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"Accuracy: 80.88%\n"
],
[
"#fig = plt.figure(figsize=(10,10))\nplot_confusion_matrix(RFC_model,\n X_test,\n y_test,\n values_format='d')",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n[Parallel(n_jobs=1)]: Done 800 out of 800 | elapsed: 3.1s finished\n"
]
],
[
[
"## Save Valid Score",
"_____no_output_____"
]
],
[
[
"y_score = RFC_model.predict_proba(X_test)\ny_score[0]",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n[Parallel(n_jobs=1)]: Done 800 out of 800 | elapsed: 3.2s finished\n"
],
[
"valid_score = pd.DataFrame(y_score, columns=['c1','c2','c3','c4','c5','c6','c7','c8','c9'])\nvalid_score",
"_____no_output_____"
],
[
"valid_score.to_csv('./results/valid-submission-RFC.csv', index = False)",
"_____no_output_____"
]
],
[
[
"## Save & Load Model",
"_____no_output_____"
],
[
"## joblib",
"_____no_output_____"
]
],
[
[
"# Save the model as a pickle in a file \njoblib.dump(RFC_model, './model/model_RFC.pkl') \n \n# Load the model from the file \nRFC_model_from_joblib = joblib.load('./model/model_RFC.pkl') \n \n# Use the loaded model to make predictions \nRFC_model_predictions = RFC_model_from_joblib.predict(X_test) \n\n# evaluate predictions\naccuracy = accuracy_score(y_test, RFC_model_predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n"
]
],
[
[
"## GridSearchCV ",
"_____no_output_____"
]
],
[
[
"clf = GridSearchCV(RFC_model_model,\n {'max_depth': [4, 6],\n 'n_estimators': [100, 200]}, \n verbose=1,\n cv=2)\nclf.fit(X_train, \n y_train, \n early_stopping_rounds=10,\n eval_metric='mlogloss',\n eval_set=[(X_train, y_train), (X_test, y_test)], \n verbose=True)\nprint(clf.best_score_)\nprint(clf.best_params_)",
"_____no_output_____"
],
[
"# Save the model as a pickle in a file \njoblib.dump(clf.best_estimator_, './model/clf.pkl')\n\n# Load the model from the file \nclf_from_joblib = joblib.load('./model/clf.pkl') \n\n# Use the loaded model to make predictions \nclf_predictions = clf_from_joblib.predict(X_test) \n\n# evaluate predictions\naccuracy = accuracy_score(y_test, clf_predictions)\nprint(\"Accuracy: %.2f%%\" % (accuracy * 100.0))",
"_____no_output_____"
]
],
[
[
"# TEST",
"_____no_output_____"
]
],
[
[
"testData = pd.read_csv(\"testData.csv\")\ntestData",
"_____no_output_____"
],
[
"# Use the loaded model to make predictions \ntest_predictions = RFC_model.predict(testData.values)\ntest_predictions",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n[Parallel(n_jobs=1)]: Done 800 out of 800 | elapsed: 26.3s finished\n"
],
[
"# Use the loaded model to make predictions probability\ntest_predictions = RFC_model.predict_proba(testData.values)\ntest_predictions",
"[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.\n[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 0.0s remaining: 0.0s\n[Parallel(n_jobs=1)]: Done 800 out of 800 | elapsed: 25.9s finished\n"
],
[
"result = pd.DataFrame(test_predictions, columns=['c1','c2','c3','c4','c5','c6','c7','c8','c9'])\nresult",
"_____no_output_____"
],
[
"result.to_csv('./results/test-submission-RFC.csv', index = False)",
"_____no_output_____"
]
],
[
[
"## REFERENCES",
"_____no_output_____"
],
[
"1- https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn\n\n2- https://github.com/dmlc/xgboost/blob/master/demo/guide-python/sklearn_examples.py\n\n3- https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html\n\n4- https://www.datacamp.com/community/tutorials/xgboost-in-python\n\n5- https://scikit-learn.org/stable/modules/ensemble.html#voting-classifier\n\n6- https://www.datacamp.com/community/tutorials/random-forests-classifier-python?utm_source=adwords_ppc&utm_campaignid=1455363063&utm_adgroupid=65083631748&utm_device=c&utm_keyword=&utm_matchtype=b&utm_network=g&utm_adpostion=&utm_creative=332602034364&utm_targetid=aud-392016246653:dsa-429603003980&utm_loc_interest_ms=&utm_loc_physical_ms=1012782&gclid=EAIaIQobChMI49HTjNO06wIVB-ztCh23nwMLEAAYASAAEgKKEvD_BwE",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c7bf72de24549272f39394041794766589801d | 100,182 | ipynb | Jupyter Notebook | get_layers.ipynb | open-geodata/sp_datageo | 98e363a5924859056798a9d4622cc781aa92b12f | [
"MIT"
] | null | null | null | get_layers.ipynb | open-geodata/sp_datageo | 98e363a5924859056798a9d4622cc781aa92b12f | [
"MIT"
] | null | null | null | get_layers.ipynb | open-geodata/sp_datageo | 98e363a5924859056798a9d4622cc781aa92b12f | [
"MIT"
] | null | null | null | 262.256545 | 64,672 | 0.905013 | [
[
[
"<br>\n\n# Layers\n\nUma vez criada uma sequencia de códigos, foi possível definir uma funão que integra todos eles, apresentada abaixo:",
"_____no_output_____"
]
],
[
[
"from get_data_datageo import *",
"_____no_output_____"
]
],
[
[
"<br>\n\n## Sedes Municipais",
"_____no_output_____"
]
],
[
[
"# Input dos caminhos para os metadados\nurl = 'http://datageo.ambiente.sp.gov.br/geoportal/catalog/search/resource/details.page?uuid='\nid_metadados = '{64BF344A-3AD0-410A-A3AA-DFE01C4E9BBB}'\n\n# URL\nurl_meta = '{}{}'.format(url, id_metadados)\n\n# Download\ngdf = download_datageo_shp(url_meta)\n\n# Renomeia Colunas\ngdf = gdf.rename(\n columns={\n 'Nome': 'nome_municipio'\n }\n)\n\n# Deleta Colunas\ngdf = gdf.drop(['Codigo_CET'], axis=1)\n\n# Results\nprint(gdf.dtypes)\ndisplay(gdf.head(5))\n\n# Salva\ngdf.to_file(os.path.join('data', 'sedes_municipais.geojson'), driver='GeoJSON', encoding='utf-8')\ngdf.to_file(os.path.join('data', 'sedes_municipais.gpkg'), layer='Sedes', driver='GPKG')",
"Página com metadados: http://datageo.ambiente.sp.gov.br/geoportal/catalog/search/resource/details.page?uuid={64BF344A-3AD0-410A-A3AA-DFE01C4E9BBB}\nResposta da página foi <Response [200]>\n> Encontrei o shapefile\nLink: http://datageo.ambiente.sp.gov.br/geoserver/datageo/SedesMunicipais/wfs?version=1.0.0&request=GetFeature&outputFormat=SHAPE-ZIP&typeName=SedesMunicipais\nEncontrei 1 arquivos \".shp\", sendo que o primeiro deles é o \"SedesMunicipaisPoint.shp\"\n Codigo_CET Nome geometry\n0 730 Rosana POINT (-53.05578 -22.57789)\n1 725 Euclides da Cunha Paulista POINT (-52.59680 -22.55792)\n2 690 Teodoro Sampaio POINT (-52.16922 -22.53031)\n3 561 Presidente Epitácio POINT (-52.10945 -21.76577)\n4 435 Marabá Paulista POINT (-51.96234 -22.10382)\nepsg:4674\nepsg:4326\nnome_municipio object\ngeometry geometry\ndtype: object\n"
]
],
[
[
"<br>\n\n## Limite Municipal",
"_____no_output_____"
]
],
[
[
"# Input dos caminhos para os metadados\nurl = 'http://datageo.ambiente.sp.gov.br/geoportal/catalog/search/resource/details.page?uuid='\nid_metadados = '{74040682-561A-40B8-BB2F-E188B58088C1}'\n\n# URL\nurl_meta = '{}{}'.format(url, id_metadados)\n\n# Download\ngdf = download_datageo_shp(url_meta)\n\n# Renomeia Colunas\ngdf = gdf.rename(\n columns={\n 'Cod_ibge':'id_ibge',\n 'Nome':'nome_municipio',\n 'Rotulo':'rotulo_municipio'\n }\n)\n\n# Deleta Colunas\ngdf = gdf.drop(['Cod_Cetesb', 'UGRHI', 'Nome_ugrhi'], axis=1)\n\n# Results\nprint(gdf.dtypes)\ndisplay(gdf.head(5))\n\n# Salva\ngdf.to_file(os.path.join('data', 'limite_municipal.geojson'), driver='GeoJSON', encoding='utf-8')\ngdf.to_file(os.path.join('data', 'limite_municipal.gpkg'), layer='Limite', driver='GPKG')",
"Página com metadados: http://datageo.ambiente.sp.gov.br/geoportal/catalog/search/resource/details.page?uuid={74040682-561A-40B8-BB2F-E188B58088C1}\nResposta da página foi <Response [200]>\n> Encontrei o shapefile\nLink: http://datageo.ambiente.sp.gov.br/geoserver/datageo/LimiteMunicipal/wfs?version=1.0.0&request=GetFeature&outputFormat=SHAPE-ZIP&typeName=LimiteMunicipal\nEncontrei 1 arquivos \".shp\", sendo que o primeiro deles é o \"LimiteMunicipalPolygon.shp\"\n Cod_Cetesb Cod_ibge Nome Rotulo UGRHI \\\n0 150 3500105 Adamantina Adamantina 21 \n1 151 3500204 Adolfo Adolfo 16 \n2 152 3500303 Aguai Aguaí 9 \n3 154 3500402 Aguas da Prata Águas da Prata 9 \n4 153 3500501 Aguas de Lindoia Águas de Lindóia 9 \n\n Nome_ugrhi geometry \n0 PEIXE POLYGON ((-51.17735 -21.69213, -51.17716 -21.6... \n1 TIETÊ/BATALHA POLYGON ((-49.74715 -21.29637, -49.74682 -21.2... \n2 MOGI-GUAÇU POLYGON ((-47.23298 -22.05409, -47.23289 -22.0... \n3 MOGI-GUAÇU POLYGON ((-46.75758 -21.84763, -46.75750 -21.8... \n4 MOGI-GUAÇU POLYGON ((-46.66020 -22.47948, -46.66018 -22.4... \nepsg:4674\nepsg:4326\nid_ibge int64\nnome_municipio object\nrotulo_municipio object\ngeometry geometry\ndtype: object\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c7c01647ea5262a6241eb864fcf7159a0608f1 | 17,634 | ipynb | Jupyter Notebook | data_resolution/Asian_subgroup_resolution.ipynb | amritaismypal/ASR-PM2.5 | 232f7417c3a928481fea00482f237730349a2302 | [
"MIT"
] | null | null | null | data_resolution/Asian_subgroup_resolution.ipynb | amritaismypal/ASR-PM2.5 | 232f7417c3a928481fea00482f237730349a2302 | [
"MIT"
] | null | null | null | data_resolution/Asian_subgroup_resolution.ipynb | amritaismypal/ASR-PM2.5 | 232f7417c3a928481fea00482f237730349a2302 | [
"MIT"
] | null | null | null | 33.781609 | 129 | 0.465238 | [
[
[
"%reload_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport pandas as pd\nimport sys\nsys.path.append(\"data_resolution\")\nfrom resolution_helpers import invalid_fips, remove_cols",
"_____no_output_____"
],
[
"asian_data_path = \"~/ASR-PM2.5/datasets/input_datasets/Asian_subgroup/2015Asiansubgroupdataset.csv\"\n\nasian_data = pd.read_csv(asian_data_path, dtype = {\"FIPS\": str}).dropna()",
"_____no_output_____"
],
[
"asian_data_updated = asian_data.rename(columns = {\"B01003_001E\": \"estimated_pop\"})",
"_____no_output_____"
],
[
"asian_data_updated",
"_____no_output_____"
],
[
"asian_fips = asian_data_updated[\"FIPS\"]\nasian_fips",
"_____no_output_____"
],
[
"invalid_asian_fips = invalid_fips(asian_fips)\nassert len(invalid_asian_fips) == 0",
"_____no_output_____"
],
[
"asian_name = asian_data_updated[\"NAME\"]",
"_____no_output_____"
],
[
"asian_name",
"_____no_output_____"
],
[
"zip(asian_data_updated.POPGROUP, asian_data_updated.POPGROUP_LABEL)",
"_____no_output_____"
],
[
"dict(zip(asian_data_updated.POPGROUP, asian_data_updated.POPGROUP_LABEL))",
"_____no_output_____"
],
[
"asian_data_updated.columns",
"_____no_output_____"
],
[
"keep_cols = [\"FIPS\", \"POPGROUP\", \"estimated_pop\", \"POPGROUP_LABEL\"]",
"_____no_output_____"
],
[
"remove_cols(asian_data_updated, keep_cols)\nasian_data_updated.columns",
"_____no_output_____"
],
[
"asian_data_output_path = \"~/ASR-PM2.5/datasets/intermediate_datasets/resultforasiandataset.csv\"",
"_____no_output_____"
],
[
"asian_data_updated.to_csv(asian_data_output_path)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c7c70d034e6653a111bfabcde0d043d1ea1af7 | 140,450 | ipynb | Jupyter Notebook | examples/accreditation_tutorial.ipynb | hodgestar/qiskit-ignis | 0e511df442e864cd0e06efcdd1db7b03c011168b | [
"Apache-2.0"
] | null | null | null | examples/accreditation_tutorial.ipynb | hodgestar/qiskit-ignis | 0e511df442e864cd0e06efcdd1db7b03c011168b | [
"Apache-2.0"
] | null | null | null | examples/accreditation_tutorial.ipynb | hodgestar/qiskit-ignis | 0e511df442e864cd0e06efcdd1db7b03c011168b | [
"Apache-2.0"
] | 1 | 2021-04-01T17:28:33.000Z | 2021-04-01T17:28:33.000Z | 239.267462 | 62,056 | 0.912602 | [
[
[
"<img src=\"../../../images/qiskit_header.png\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" align=\"middle\">",
"_____no_output_____"
],
[
"# Accreditation protocol",
"_____no_output_____"
],
[
"Accreditation Protocol (AP) is a protocol devised to characterize the reliability of noisy quantum devices.<br>\n\nGiven a noisy quantum device implementing a \"target\" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution.\nThis method is based on Ferracin et al, \"Accrediting outputs of noisy intermediate-scale quantum devices\", https://arxiv.org/abs/1811.09709.\n\nThis notebook gives an example for how to use the ignis.characterization.accreditation module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator.",
"_____no_output_____"
]
],
[
[
"#Import general libraries (needed for functions)\nimport numpy as np\nfrom numpy import random\nimport qiskit\n\n#Import Qiskit classes\nfrom qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute\nfrom qiskit.providers.aer.noise import NoiseModel\nfrom qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error\n\n#Import the accreditation functions.\nfrom qiskit.ignis.verification.accreditation import accreditationFitter\nfrom qiskit.ignis.verification.accreditation import accreditation_circuits",
"_____no_output_____"
]
],
[
[
"# Input to the protocol",
"_____no_output_____"
],
[
"AP can accredit the outputs of a __target circuit__ that<br>\n1) Takes as input $n$ qubits in the state $|{0}>$<br>\n2) Ends with single-qubit measurements in the Pauli-$Z$ basis<br>\n3) Is made of $m$ \"bands\", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.<br>\nThe accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and that whose outputs are used to witness the correct functionality of the device.<br>\n\nLet's now draw a target quantum circuit!\nWe start with a simple circuit to generate and measure 4-qubits GHZ states.",
"_____no_output_____"
]
],
[
[
"# Create a Quantum Register with n_qb qubits.\nq_reg = QuantumRegister(4, 'q')\n# Create a Classical Register with n_qb bits.\nc_reg = ClassicalRegister(4, 's')\n# Create a Quantum Circuit acting on the q register\ntarget_circuit = QuantumCircuit(q_reg, c_reg)\n\ntarget_circuit.h(0)\ntarget_circuit.h(1)\ntarget_circuit.h(2)\ntarget_circuit.h(3)\ntarget_circuit.cz(0,1)\ntarget_circuit.cz(0,2)\ntarget_circuit.cz(0,3)\ntarget_circuit.h(1)\ntarget_circuit.h(2)\ntarget_circuit.h(3)\n\ntarget_circuit.measure(q_reg, c_reg)\n\ntarget_circuit.draw(output = 'mpl')",
"_____no_output_____"
]
],
[
[
"# Generating accreditation circuits",
"_____no_output_____"
],
[
"The function $accreditation\\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors ! ) <br>\n\nIt also returns the list $postp\\_list$ of strings required to post-process the outputs, as well as the number $v\\_zero$ indicating the circuit implementing the target.\n\nThis is the target circuit with randomly chosen Pauli gates:",
"_____no_output_____"
]
],
[
[
"v = 10\n\ncirc_list, postp_list, v_zero = accreditation_circuits(target_circuit, v)\ncirc_list[(v_zero)%(v+1)][0].draw(output = 'mpl')",
"_____no_output_____"
]
],
[
[
"This is how a trap looks like:",
"_____no_output_____"
]
],
[
[
"circ_list[(v_zero+1)%(v+1)][0].draw(output = 'mpl')",
"_____no_output_____"
]
],
[
[
"# Simulate the ideal circuits",
"_____no_output_____"
],
[
"Let's implement AP.\n\nWe use $accreditation\\_circuits$ to generate target and trap circuits.\nThen, we use the function $single\\_protocol\\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output.",
"_____no_output_____"
]
],
[
[
"simulator = qiskit.Aer.get_backend('qasm_simulator')\n\ntest_1 = accreditationFitter()\n\n# Create target and trap circuits with random Pauli gates\ncircuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v)\n\noutputs_list = []\nfor circuit_k in range(v+1):\n job = execute(circuit_list[circuit_k], simulator,\n shots=1, memory=True)\n outputs_list.append([job.result().get_memory()[0]])\n\n# Post-process the outputs and see if the protocol accepts\ntest_1.single_protocol_run(outputs_list, postp_list, v_zero)\n\nprint(\"Outputs of the target: \",test_1.outputs,\" , AP\",test_1.flag,\"these outputs!\")",
"Outputs of the target: [0, 0, 0, 0] , AP accepted these outputs!\n"
]
],
[
[
"In the absence of noise, all traps return the expected output, therefore we always accept the output of the target.<br>\n\nTo obtain an upper-bound on the variation distance on the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits.",
"_____no_output_____"
]
],
[
[
"# Number of runs\nd = 20\n\ntest_2 = accreditationFitter()\n\nfor run in range(d):\n \n # Create target and trap circuits with random Pauli gates\n circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v)\n \n outputs_list = []\n # Implement all these circuits\n for circuit_k in range(v+1):\n job = execute(circuit_list[circuit_k], simulator,\n shots=1, memory=True)\n outputs_list.append([job.result().get_memory()[0]])\n\n # Post-process the outputs and see if the protocol accepts\n test_2.single_protocol_run(outputs_list, postp_list, v_zero)\n print(\"Protocol run number\",run+1,\", outputs of the target\",test_2.flag)\n \nprint('\\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!')\n\nprint('\\nList of accepted outputs:\\n', test_2.outputs)",
"Protocol run number 1 , outputs of the target accepted\nProtocol run number 2 , outputs of the target accepted\nProtocol run number 3 , outputs of the target accepted\nProtocol run number 4 , outputs of the target accepted\nProtocol run number 5 , outputs of the target accepted\nProtocol run number 6 , outputs of the target accepted\nProtocol run number 7 , outputs of the target accepted\nProtocol run number 8 , outputs of the target accepted\nProtocol run number 9 , outputs of the target accepted\nProtocol run number 10 , outputs of the target accepted\nProtocol run number 11 , outputs of the target accepted\nProtocol run number 12 , outputs of the target accepted\nProtocol run number 13 , outputs of the target accepted\nProtocol run number 14 , outputs of the target accepted\nProtocol run number 15 , outputs of the target accepted\nProtocol run number 16 , outputs of the target accepted\nProtocol run number 17 , outputs of the target accepted\nProtocol run number 18 , outputs of the target accepted\nProtocol run number 19 , outputs of the target accepted\nProtocol run number 20 , outputs of the target accepted\n\nAfter 20 runs, AP has accepted 20 outputs!\n\nList of accepted outputs:\n [[1 1 1 1]\n [0 0 0 0]\n [1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]\n [0 0 0 0]\n [1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]\n [0 0 0 0]\n [0 0 0 0]\n [1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]\n [0 0 0 0]\n [1 1 1 1]\n [0 0 0 0]\n [0 0 0 0]]\n"
]
],
[
[
"The function $bound\\_variation\\_distance$ calculates the upper-bound on the variation distance (VD) using\n\n$$VD\\leq \\frac{\\varepsilon}{N_{\\textrm{acc}}/d-\\theta}\\textrm{ ,}$$\n\nwhere $\\theta\\in[0,1]$ is a positive number and<br>\n\n$$\\varepsilon= \\frac{1.7}{v+1}$$\n\nis the maximum probability of accepting an incorrect state for the target.<br>\nThe function $bound\\_variation\\_distance$ also calculates the confidence in the bound as \n\n$$1-2\\textrm{exp}\\big(-2\\theta d^2\\big)$$",
"_____no_output_____"
]
],
[
[
"theta = 5/100\n\ntest_2.bound_variation_distance(theta)\n\nprint(\"AP accepted\",test_2.N_acc,\"out of\",test_2.num_runs,\"times.\")\nprint(\"With confidence\",test_2.confidence,\"AP certifies that VD is upper-bounded by\",test_2.bound)",
"AP accepted 20 out of 20 times.\nWith confidence 1.0 AP certifies that VD is upper-bounded by 0.16267942583732053\n"
]
],
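The bound printed above can be reproduced by hand from the two formulas in the preceding markdown cell. The sketch below is illustrative only; it assumes v = 10 trap circuits per run, which is the value that reproduces the number reported by `test_2.bound` exactly.

```python
# Sanity check of the variation-distance bound, assuming v = 10 trap circuits
# per run (an assumption here; it reproduces the printed bound exactly).
v, d, theta = 10, 20, 5 / 100
N_acc = 20                      # all 20 runs were accepted above

epsilon = 1.7 / (v + 1)         # max probability of accepting an incorrect state
bound = epsilon / (N_acc / d - theta)
print(bound)                    # 0.16267942583732053, matching test_2.bound
```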
[
[
"# Defining the noise model",
"_____no_output_____"
],
[
"We define a noise model for the simulator. We add depolarizing error probabilities to the cotrolled-$Z$ and single-qubit gates.",
"_____no_output_____"
]
],
[
[
"noise_model = NoiseModel()\n\np1q = 0.002\nnoise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1')\nnoise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2')\nnoise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3')\np2q = 0.02\nnoise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cz')\n\nbasis_gates = ['u1','u2','u3','cz']",
"_____no_output_____"
]
],
[
[
"We then implement noisy circuits and pass their outputs to $single\\_protocol\\_run$.",
"_____no_output_____"
]
],
[
[
"test_3 = accreditationFitter()\n\nfor run in range(d):\n \n # Create target and trap circuits with random Pauli gates\n circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v)\n \n outputs_list = []\n # Implement all these circuits with noise\n for circuit_k in range(v+1):\n job = execute(circuit_list[circuit_k], simulator,\n noise_model=noise_model, basis_gates=basis_gates,\n shots=1, memory=True)\n outputs_list.append([job.result().get_memory()[0]])\n\n # Post-process the outputs and see if the protocol accepts\n test_3.single_protocol_run(outputs_list, postp_list, v_zero)\n print(\"Protocol run number\",run+1,\", outputs of the target\",test_3.flag)\n \nprint(\"\\nAP accepted\",test_3.N_acc,\"out of\",test_3.num_runs,\"times.\")\nprint('\\nList of accepted outputs:\\n', test_3.outputs)\n\ntheta = 5/100\n\ntest_3.bound_variation_distance(theta)\nprint(\"\\nWith confidence\",test_3.confidence,\"AP certifies that VD is upper-bounded by\",test_3.bound)",
"Protocol run number 1 , outputs of the target rejected\nProtocol run number 2 , outputs of the target rejected\nProtocol run number 3 , outputs of the target rejected\nProtocol run number 4 , outputs of the target accepted\nProtocol run number 5 , outputs of the target accepted\nProtocol run number 6 , outputs of the target accepted\nProtocol run number 7 , outputs of the target accepted\nProtocol run number 8 , outputs of the target rejected\nProtocol run number 9 , outputs of the target accepted\nProtocol run number 10 , outputs of the target accepted\nProtocol run number 11 , outputs of the target accepted\nProtocol run number 12 , outputs of the target rejected\nProtocol run number 13 , outputs of the target rejected\nProtocol run number 14 , outputs of the target accepted\nProtocol run number 15 , outputs of the target rejected\nProtocol run number 16 , outputs of the target accepted\nProtocol run number 17 , outputs of the target accepted\nProtocol run number 18 , outputs of the target accepted\nProtocol run number 19 , outputs of the target rejected\nProtocol run number 20 , outputs of the target accepted\n\nAP accepted 12 out of 20 times.\n\nList of accepted outputs:\n [[0 0 0 0]\n [1 1 0 1]\n [1 1 1 1]\n [1 1 1 1]\n [0 0 0 0]\n [0 0 0 0]\n [1 1 1 1]\n [0 0 0 0]\n [0 0 0 0]\n [0 0 0 0]\n [0 0 0 0]\n [1 1 1 1]]\n\nWith confidence 1.0 AP certifies that VD is upper-bounded by 0.28099173553719003\n"
]
],
[
[
"Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence.<br>\n\nWhat number of trap circuits will ensure the minimal upper-bound for your target circuit?",
"_____no_output_____"
]
],
[
[
"min_traps = 4\nmax_traps = 10\n\n\nfor num_trap_circs in range(0,max_traps-min_traps): \n \n test_4 = accreditationFitter()\n for run in range(d):\n\n # Create target and trap circuits with random Pauli gates\n circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, num_trap_circs+min_traps)\n\n outputs_list = []\n # Implement all these circuits with noise\n for circuit_k in range(num_trap_circs+min_traps+1):\n job = execute(circuit_list[circuit_k], simulator,\n noise_model=noise_model, basis_gates=basis_gates,\n shots=1, memory=True)\n outputs_list.append([job.result().get_memory()[0]])\n\n # Post-process the outputs and see if the protocol accepts\n test_4.single_protocol_run(outputs_list, postp_list, v_zero)\n\n print(\"\\nWith\", num_trap_circs+min_traps,\n \"traps, AP accepted\", test_4.N_acc, \n \"out of\", test_4.num_runs, \"times.\")\n test_4.bound_variation_distance(theta)\n print(\"With confidence\", test_4.confidence,\n \"AP with\", num_trap_circs+min_traps,\n \"traps certifies that VD is upper-bounded by\", test_4.bound)",
"\nWith 4 traps, AP accepted 15 out of 20 times.\nWith confidence 1.0 AP with 4 traps certifies that VD is upper-bounded by 0.48571428571428577\n\nWith 5 traps, AP accepted 14 out of 20 times.\nWith confidence 1.0 AP with 5 traps certifies that VD is upper-bounded by 0.4358974358974359\n\nWith 6 traps, AP accepted 9 out of 20 times.\nWith confidence 1.0 AP with 6 traps certifies that VD is upper-bounded by 0.6071428571428572\n\nWith 7 traps, AP accepted 13 out of 20 times.\nWith confidence 1.0 AP with 7 traps certifies that VD is upper-bounded by 0.35416666666666674\n\nWith 8 traps, AP accepted 12 out of 20 times.\nWith confidence 1.0 AP with 8 traps certifies that VD is upper-bounded by 0.3434343434343434\n\nWith 9 traps, AP accepted 12 out of 20 times.\nWith confidence 1.0 AP with 9 traps certifies that VD is upper-bounded by 0.3090909090909091\n"
]
]
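To get a feel for the question above, the following sketch evaluates the bound formula in the best possible case, i.e. assuming every one of the d runs is accepted (N_acc = d). It shows that adding traps always lowers the attainable bound, while the noisy sweep above shows that the acceptance rate tends to drop at the same time, so the observed bound is not monotone in the number of traps.

```python
# Illustrative only: best-case bound (all d runs accepted) as a function of
# the number of trap circuits v, using the formula from the cells above.
theta, d = 5 / 100, 20
for v in range(4, 11):
    epsilon = 1.7 / (v + 1)
    best_bound = epsilon / (1.0 - theta)   # N_acc / d = 1 in the best case
    print(v, "traps -> best possible bound", round(best_bound, 3))
```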
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c7cb338b9df8a1419101d672d685ccb515b8e3 | 10,267 | ipynb | Jupyter Notebook | README.ipynb | Jyothikumar-b/CarND-Behavioral-Cloning-P3 | ce1dcd37b2f93887602156fb5511dadafe631fb6 | [
"MIT"
] | null | null | null | README.ipynb | Jyothikumar-b/CarND-Behavioral-Cloning-P3 | ce1dcd37b2f93887602156fb5511dadafe631fb6 | [
"MIT"
] | null | null | null | README.ipynb | Jyothikumar-b/CarND-Behavioral-Cloning-P3 | ce1dcd37b2f93887602156fb5511dadafe631fb6 | [
"MIT"
] | null | null | null | 42.077869 | 346 | 0.621506 | [
[
[
"# **Behavioral Cloning** \n---",
"_____no_output_____"
],
[
"**Behavioral Cloning Project**\n\nThe goals / steps of this project are the following:\n* Use the simulator to collect data of good driving behavior\n* Build, a convolution neural network in Keras that predicts steering angles from images\n* Train and validate the model with a training and validation set\n* Test that the model successfully drives around track one without leaving the road\n* Summarize the results with a written report\n\n",
"_____no_output_____"
],
[
"## Rubric Points\n### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/432/view) individually and describe how I addressed each point in my implementation. \n\n---",
"_____no_output_____"
],
[
"### Files Submitted & Code Quality\n\n#### 1. Submission includes all required files and can be used to run the simulator in autonomous mode\nMy project includes the following files:\n* `model.ipynb` containing the script to create and train the model\n* `drive.py` for driving the car in autonomous mode\n* `model.h5` containing a trained convolution neural network \n* `writeup_report.md` summarizing the results",
"_____no_output_____"
],
[
"#### 2. Submission includes functional code\nUsing the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing \n```sh\npython drive.py model.h5\n```",
"_____no_output_____"
],
[
"#### 3. Submission code is usable and readable\n\nThe `model.ipynb` file contains the code for training and saving the convolution neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.",
"_____no_output_____"
],
[
"### Model Architecture and Training Strategy",
"_____no_output_____"
],
[
"#### 1. An appropriate model architecture has been employed\n\nMy model consists of a convolution neural network with 5x5 filter sizes and depths between 6 and 120 (`model.ipynb` file, cell 12) \n\nThe model includes LeakyReLU layers to introduce nonlinearity, and the data is normalized in the model using a Keras lambda layer. \n\n#### 2. Attempts to reduce overfitting in the model\n\nOverfitting is controlled by following steps.\n* In each convolution layer, I have used *Max Pooling*. It helps in reducing the dimension as well as makes neurans to perform better. \n* Using data augumentation techniques, I have distributed the training data across all the output class.\n* In addition to that, the model was trained and validated on different data sets to ensure that the model was not overfitting (code line 10-16). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.\n\n#### 3. Model parameter tuning\n\n* `Learning rate` : The model used an adam optimizer, so the learning rate was not tuned manually (`model.ipynb` file, cell 12, line 40).\n\n#### 4. Appropriate training data\n\nI have used Udacity training data. There were **Three** images(Center,Left,Right) for every frame and steering angle for center image. We have *8036* frame details. So, totally there were *24108* images given as input.\n\n\n***Data Distribution Of Given Input Data***\n\n\n\nFrom the above graph, it is observed that we didn't have same amount of data in each output classes. we can achieve equal distribution by two ways.\n1. We can improve the samples for output classes which are lower\n2. Reducing the samples which has large amount of data\n\nI chose the second way. As most of the data has roughly 300 images in an average, increaing these output classes is not the good choice. For the given problem, we don't require these much data also. So, I have selected only maximum of 200 images per output class. Additionaly, I have skipped output classes which has less then 10 images.\n\n\n***Data Distribution Of Selected Data***\n\n\n\nThe above data is comparatively well distributed. Agin, this is not evenly distributed in all output classes. As we don't take large turn everytime. Mostly we drive straightly with slight turn. So, these selected data will work without any issue.\n\nI have used a combination of central, left and right images for training my model. This will help in recovering from the left and right sides of the road. \n\nFor details about how I created the training data, see the next section. ",
"_____no_output_____"
],
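The class-balancing step described above (cap of 200 samples per steering-angle class, classes with fewer than 10 samples dropped) could be implemented along these lines. This is only a sketch: the `samples` argument, the `(image_path, angle)` tuple layout and the rounding used to form the classes are assumptions, not the project's actual code.

```python
import random
from collections import defaultdict

def balance_samples(samples, max_per_class=200, min_per_class=10):
    """Cap each steering-angle class at max_per_class and drop tiny classes.

    `samples` is assumed to be an iterable of (image_path, steering_angle)
    pairs; the angle rounded to 2 decimals is used as the output class.
    """
    by_class = defaultdict(list)
    for image_path, angle in samples:
        by_class[round(angle, 2)].append((image_path, angle))

    balanced = []
    for _, items in by_class.items():
        if len(items) < min_per_class:
            continue                         # skip under-represented classes
        random.shuffle(items)
        balanced.extend(items[:max_per_class])
    random.shuffle(balanced)
    return balanced
```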
[
"### Model Architecture and Training Strategy",
"_____no_output_____"
],
[
"#### 1. Solution Design Approach\nI have divided the problem into Data Augumentation & Building the neural network. For each change in the data set, I will check my model on different model architecture. From each set, minimum one model will be selected for further improvement.\n\n***SET 1 :***\n\n\n***SET 2***\n\n\n***SET 3***\n\n\n***SET 4***\n\n> **Note :** For each SET seperate python notebook is used. These notebooks are also uploaded with results for reference.(For example : `SET 1` uses `1_Mode_Training.ipynb` respectively)",
"_____no_output_____"
],
[
"#### 2. Final Model Architecture\nMy final model consists of **Four** hidden layers.( 3 Convolution layer followed by one dense layer)\n\n| Layer \t\t| Description\t \t\t\t\t\t| \n|:---------------------:|:---------------------------------------------:| \n| Input \t\t| 160x320x3 \t\t\t\t\t\t | \n| Resizing Image | 85x320x3 |\n| Convolution 5x5 \t| 1x1 stride, valid padding, outputs 81x316x6 \t|\n| Leaky ReLU Activation\t\t\t\t\t\t\t\t\t\t\t\t\t|\n| Max pooling\t \t| 2x2 stride, 2x2 filter, outputs 40x158x6\t\t|\n| Convolution 5x5 \t| 1x1 stride, valid padding, outputs 36x154x36 \t|\n| Leaky ReLU Activation\t\t\t\t\t\t\t\t\t\t\t\t\t|\n| Max pooling\t \t| 2x2 stride, 2x2 filter, outputs 18x77x3\t |\n| Convolution 5x5 \t| 1x1 stride, valid padding, outputs 14x73x120 \t|\n| Leaky ReLU Activation\t\t\t\t\t\t\t\t\t\t\t\t\t|\n| Max pooling\t \t| 2x2 stride, 2x2 filter, outputs 7x36x120\t |\n| Fully connected#1\t\t| 30240 input, 256 output\t\t\t\t |\n| Fully connected#2\t\t| 256 input, 1 output \t\t\t\t |\n",
"_____no_output_____"
],
[
"#### 3. Creation of the Training Set & Training Process\n**Training Set Selection :**\n\nAs discussed in the previous section, apart from 24108 training images 10818 images selected. Among them 20 percent of the images are used for validation.\nThe input data is distributed among all output classes to avoid biased output. The whole data set is shuffled to get random classes in each batch.\n\n\n**Data Augumentation :**\n\nThe upper portion of the image not required for detecting the lanes. so, we are slicing the images in the following way. This will reduce the computation cost as well as increase the accuracy.\n> Input Image :\n>> \n\n> Output Image :\n>> \n\nThe cropped image is normalized using the below formulae:\n```python\n>> x=(x/255.0)-0.5\n```\n**Training Process :**\n\n* Among 80% of input data is taken for training and remaining 20% for validation.\n* A batch of 32 augumented image is evaluated by my model\n* The loss will be calculated using `Mean Square Error` function.\n* Depending upon the loss, `Adam optimizer` will update the weights by back propogation algorithm\n* This process is continued for all the batches in our training data. Then, the model is evaluated against the validation data\n\nThe whole training process is repeated for 20 cycle (Epochs). I am plotting the Epochs Vs Loss function to understand the behaviour of my model\n\n>  \n>>Red line : Validation loss\n\n>>Blue line : Training loss",
"_____no_output_____"
]
]
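A minimal sketch of the cropping and normalisation front-end described above. Only the 160x320x3 -> 85x320x3 resizing and the `(x/255.0)-0.5` normalisation are taken from the write-up; the exact split of the 75 cropped rows between the top and bottom of the image is an assumption here.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Cropping2D, Lambda

model = Sequential()
# 160 - (50 + 25) = 85 rows kept; adjust the split if the original used other values
model.add(Cropping2D(cropping=((50, 25), (0, 0)), input_shape=(160, 320, 3)))
model.add(Lambda(lambda x: (x / 255.0) - 0.5))   # normalisation from the write-up
# ... the convolution, pooling and dense layers from the table above would follow here
```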
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c7cc560b7d3636a28515a88baedf58b1dd09d4 | 16,424 | ipynb | Jupyter Notebook | Notebooks/Plots.ipynb | nusretipek/Flanders_Innovation | 36aef7da6a73690a50ff504b511e75bc6c9fb1ee | [
"MIT"
] | null | null | null | Notebooks/Plots.ipynb | nusretipek/Flanders_Innovation | 36aef7da6a73690a50ff504b511e75bc6c9fb1ee | [
"MIT"
] | null | null | null | Notebooks/Plots.ipynb | nusretipek/Flanders_Innovation | 36aef7da6a73690a50ff504b511e75bc6c9fb1ee | [
"MIT"
] | null | null | null | 273.733333 | 14,968 | 0.927484 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ngroup_names = ['Dutch', 'English', 'Other', 'French']\ncounts = pd.Series([1973, 882, 33, 25], \n index=['Dutch (67.7%)', 'English (30.3%)', 'Other (1.1%)', 'French (0.9%)'])\nexplode = (0, 0.05, 0.1, 0.3)\nfig1 = plt.gcf()\ncounts.plot(kind='pie', colors=['#33a02c','#1f78b4','#b2df8a','#a6cee3'], fontsize=11, explode=explode, labeldistance = 1.1, startangle=70)\nplt.axis('equal')\nplt.ylabel('')\n#plt.legend(labels=counts.index, loc=4, bbox_to_anchor=(0.2, 0.7))\nplt.show()\nfig1.savefig('language.png', dpi=200)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c7cf67dcc27b0f6853bae2623930147e52d68d | 598,761 | ipynb | Jupyter Notebook | Lab2.ipynb | ashraj98/rbf-sin-approx | e89e642a2e75cdf6911f018cea99f285a0cd8c5a | [
"Apache-2.0"
] | 1 | 2020-10-27T07:09:26.000Z | 2020-10-27T07:09:26.000Z | Lab2.ipynb | ashraj98/rbf-sin-approx | e89e642a2e75cdf6911f018cea99f285a0cd8c5a | [
"Apache-2.0"
] | 1 | 2020-10-27T08:10:01.000Z | 2020-10-27T08:12:44.000Z | Lab2.ipynb | ashraj98/rbf-sin-approx | e89e642a2e75cdf6911f018cea99f285a0cd8c5a | [
"Apache-2.0"
] | null | null | null | 1,054.15669 | 34,638 | 0.953036 | [
[
[
"<a href=\"https://colab.research.google.com/github/ashraj98/rbf-sin-approx/blob/main/Lab2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Lab 2\n\n### Ashwin Rajgopal",
"_____no_output_____"
],
[
"Start off by importing numpy for matrix math, random for random ordering of samples and pyplot for plotting results.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport random",
"_____no_output_____"
]
],
[
[
"#### Creating the samples\n\nX variables can be generated by using `np.random.rand` to generate a array of random numbers between 0 and 1, which is what is required. The same can be done to generate the noise, but then it needs to be divided by 5 and subtracted by .1 to fit the interval [-0.1, 0.1]. The expected values can then by generated applying the function to the inputs and adding the noise.\n\nFor plotting the original function that will be approximated by the RBF network, `linspace` can be used to generate equally spaced inputs to make a smooth plot of the function.",
"_____no_output_____"
]
],
[
[
"X = np.random.rand(1, 75).flatten()\nnoise = np.random.rand(1, 75).flatten() / 5 - 0.1\nD = 0.5 + 0.4 * np.sin(2 * np.pi * X) + noise\n\nfunc_X = np.linspace(0, 1, 100)\nfunc_Y = 0.5 + 0.4 * np.sin(2 * np.pi * func_X)",
"_____no_output_____"
]
],
[
[
"#### K-means algorithm\n\nThis function finds the centers and variances given uncategorized inputs and number of clusters. It also takes in a flag to determined whether to output an averaged variance for all clusters or use specialized variances for each cluster.\n\nThe algorithm begins by choosing random points from the inputs as the center of the clusters, so that every cluster will have at least point assigned to it. Then the algorithm repetitively assigns points to each cluster using Euclidean distance and averages the assigned points for each cluster to find the new centers. The new centers are compared with the old centers, and if they are the same, the algorithm is stopped.\n\nThen using the last assignment of the points, the variance for each cluster is calculated. If a cluster does not have more than one point assigned to it, it is skipped.\n\nIf `use_same_width=True`, then an normalized variance is used for all clusters. The maximum distance is used by using an outer subtraction between the centers array and itself, and then it is divided by `sqrt(2 * # of clusters)`.\n\nIf `use_same_width=False`, then for all clusters that had only one point assigned to it, the average of all the other variances is used as the variance for these clusters.",
"_____no_output_____"
]
],
[
[
"def kmeans(clusters=2, X=X, use_same_width=False):\n centers = np.random.choice(X, clusters, replace=False)\n diff = 1\n while diff != 0:\n assigned = [[] for i in range(clusters)]\n for x in X:\n assigned_center = np.argmin(np.abs(centers - x))\n assigned[assigned_center].append(x.item())\n new_centers = np.array([np.average(points) for points in assigned])\n diff = np.sum(np.abs(new_centers - centers))\n centers = new_centers\n variances = []\n no_var = []\n for i in range(clusters):\n if len(assigned[i]) < 2:\n no_var.append(i)\n else:\n variances.append(np.var(assigned[i]))\n if use_same_width:\n d_max = np.max(np.abs(np.subtract.outer(centers, centers)))\n avg_var = d_max / np.sqrt(2 * clusters)\n variances = [avg_var for i in range(clusters)]\n else:\n if len(no_var) > 0:\n avg_var = np.average(variances)\n for i in no_var:\n variances.insert(i, avg_var)\n return (centers, np.array(variances))",
"_____no_output_____"
]
],
[
[
"The function below defines the gaussian function. Given the centers and variances for all clusters, it calculates the output for all gaussians at once for a single input.",
"_____no_output_____"
]
],
[
[
"def gaussian(centers, variances, x):\n return np.exp((-1 / (2 * variances)) * ((centers - x) ** 2))",
"_____no_output_____"
]
],
[
[
"#### Training the RBF Network\n\nFor each gaussian, a random weight is generated in the interval [-1, 1]. The same happens for a bias term as well.\n\nThen, for the number of epochs specified, the algorithm calculates the gaussian outputs for each input, and then takes the weighted sum and adds the bias to get the output of the network. Then the LMS algorithm is applied.\n\nAfterwards, the `linspace`d inputs are used to generate the outputs, which allows for plotting the approximating function. Then both the approximated function (red) and the approximating function (blue) are plot, as well as the training data with the noise.",
"_____no_output_____"
]
],
[
[
"def train(centers, variances, lr, epochs=100):\n num_centers = len(centers)\n W = np.random.rand(1, num_centers) * 2 - 1\n b = np.random.rand(1, 1) * 2 - 1\n order = list(range(len(X)))\n for i in range(epochs):\n random.shuffle(order)\n for j in order:\n x = X[j]\n d = D[j]\n G = gaussian(centers, variances, x)\n y = W.dot(G) + b\n e = d - y\n W += lr * e * G.reshape(1, num_centers)\n b += lr * e\n est_Y = []\n for x in func_X:\n G = gaussian(centers, variances, x)\n y = W.dot(G) + b\n est_Y.append(y.item())\n est_Y = np.array(est_Y)\n fig = plt.figure()\n ax = plt.axes()\n ax.scatter(X, D, label='Sampled')\n ax.plot(func_X, est_Y, '-b', label='Approximate')\n ax.plot(func_X, func_Y, '-r', label='Original')\n plt.title(f'Bases = ${num_centers}, Learning Rate = ${lr}')\n plt.xlabel('x')\n plt.ylabel('y')\n plt.legend(loc=\"upper right\")",
"_____no_output_____"
]
],
[
[
"The learning rates and number of bases that needed to be tested are defined, and then K-means is run for each combination of base and learning rate. The output of the K-means is used as the input for the RBF training algorithm, and the results are plotted.",
"_____no_output_____"
]
],
[
[
"bases = [2, 4, 7, 11, 16]\nlearning_rates = [.01, .02]\nfor base in bases:\n for lr in learning_rates:\n centers, variances = kmeans(base, X)\n train(centers=centers, variances=variances, lr=lr)",
"_____no_output_____"
]
],
[
[
"The best function approximates seem to be with 2 bases. As soon as the bases are increased to 4, overfitting starts to occur, with 16 bases having extreme overfitting.\n\nIncreasing the learning rate seems to decrease the training error but in some cases increases the overfitting of the data.",
"_____no_output_____"
],
[
"Run the same combinations or number of bases and learning rate again, but this time using the same Gaussian width for all bases.",
"_____no_output_____"
]
],
[
[
"for base in bases:\n for lr in learning_rates:\n centers, variances = kmeans(base, X, use_same_width=True)\n train(centers=centers, variances=variances, lr=lr, epochs=100)",
"_____no_output_____"
]
],
[
[
"Using the same width for each base seems to drastically decrease overfitting. Even with 16 bases, the approximating function is very smooth. However, after 100 epochs, the training error is still very high, and the original function is not well approximated.\n\nAfter running the training with significantly more epochs (10,000 to 100,000), the function becomes well approximated for large number of bases. But for smaller number of bases like 2, the approximating function is still not close to the approximated function, whereas when using different Gaussian widths, 2 bases was the best approximator of the original function.\n\nSo, using the same widths, the training takes significantly longer and requires many bases to be used to approximate the original function well.",
"_____no_output_____"
]
]
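One way to make statements such as "the training error is still very high" measurable is to compute the mean squared error of an approximation against the noise-free function. The sketch below reuses `gaussian`, `func_X`, `func_Y` and `np` from above, but assumes `train` is modified to also return the learned `W` and `b`; that return value is not part of the original code.

```python
# Sketch: quantify how close an approximating function is to the original one.
# Assumes `train` is changed to `return W, b` at the end (not done above).
def approximation_mse(centers, variances, W, b):
    preds = np.array([(W.dot(gaussian(centers, variances, x)) + b).item()
                      for x in func_X])
    return np.mean((preds - func_Y) ** 2)

# Example usage (hypothetical):
# centers, variances = kmeans(16, X, use_same_width=True)
# W, b = train(centers, variances, lr=0.02, epochs=10000)
# print(approximation_mse(centers, variances, W, b))
```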
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c7d71cb02cdb301e802e4a23df992aaea122ad | 4,992 | ipynb | Jupyter Notebook | Laboratory/Lab_02/Lab_02.ipynb | mriosrivas/HW-course-material- | c03210b53e4b787db3157a2f6c9e7fa80e86daa7 | [
"MIT"
] | 1 | 2022-03-24T22:59:56.000Z | 2022-03-24T22:59:56.000Z | Lab_02/Lab_02.ipynb | mriosrivas/Hardware_Labs_2021 | 138d76702889ec6007fc63dbb318d406e42ca0d7 | [
"MIT"
] | 1 | 2021-11-25T00:39:40.000Z | 2021-11-25T00:39:40.000Z | Lab_02/Lab_02.ipynb | mriosrivas/Hardware_Labs_2021 | 138d76702889ec6007fc63dbb318d406e42ca0d7 | [
"MIT"
] | null | null | null | 22.588235 | 92 | 0.478365 | [
[
[
"from pynq import Overlay\nimport numpy as np\n\n\noverlay = Overlay('/home/xilinx/pynq/overlays/Manuel/Lab_02/lab_02_design_ver_1d.bit')",
"_____no_output_____"
],
[
"#overlay?",
"_____no_output_____"
],
[
"# AXI lite interface object instance\n# Used for data signals\n#\n# Address | Address | Signal\n# (HEX) | (DEC) |\n# 0x0000 | 0000 | input pointer_A\n# 0x0004 | 0004 | input pointer_B\n# 0x0008 | 0008 | input pointer_code\n# 0x000C | 0012 | output pointer_Y\n# 0x0010 | 0016 | output pointer_Cout\n\n# Data Signals\npointer_code = 0x0000\npointer_A = 0x0004\npointer_B = 0x0008\npointer_Y = 0x000C\npointer_Cout =0x0010\n\naxi_interface = overlay.axi_alu_data_0",
"_____no_output_____"
],
[
"# Load A and B into memory\nA = 31\nB = 32\n\naxi_interface.write(pointer_A, A)\naxi_interface.write(pointer_B, B)",
"_____no_output_____"
],
[
"# code | function\n# 000 | A & B\n# 001 | A | B\n# 010 | A + B\n# 011 | Not defined\n# 100 | A & (~B)\n# 101 | A | (~B)\n# 110 | A - B\n# 111 | SLT\n\n# codes\nand_ = 0\nor_ = 1\nadd_ = 2\nand_not_ = 4\nor_not_ = 5\ndif_ = 6\nslt_ = 7\n\n#Load code\ncode = dif_\naxi_interface.write(pointer_code, code) ",
"_____no_output_____"
],
[
"# Load data to register\naxi_interface.write(pointer_Y, 1)",
"_____no_output_____"
],
[
"# Read result\nprint((np.array(axi_interface.read(pointer_Y))).astype('int16'))",
"-1\n"
],
[
"# Load data to register\naxi_interface.write(pointer_Cout,0)",
"_____no_output_____"
],
[
"# Read result\nprint((np.array(axi_interface.read(pointer_Cout))).astype('int16'))",
"0\n"
]
]
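Since each test above repeats the same write/read sequence, a small wrapper around the register map can make further experiments shorter. This is only a sketch: it assumes the `axi_interface`, pointer constants and op-code variables defined earlier in the notebook, and note that the original cells also write a placeholder value to the output registers before reading, so that step may need to be kept if the overlay requires it.

```python
def alu_op(a, b, code):
    """Run one ALU operation and return (Y, Cout) interpreted as signed 16-bit."""
    axi_interface.write(pointer_A, a)
    axi_interface.write(pointer_B, b)
    axi_interface.write(pointer_code, code)
    y = np.array(axi_interface.read(pointer_Y)).astype('int16')
    cout = np.array(axi_interface.read(pointer_Cout)).astype('int16')
    return y, cout

# Example (hypothetical): 31 - 32 with the subtraction op-code should give -1
# print(alu_op(31, 32, dif_))
```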
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c7f20fed957bffbe557d97a0865e4599985aa5 | 18,668 | ipynb | Jupyter Notebook | explore_data.ipynb | nadafalou/CSC413FinalProject | fa19b1a31ff6bba9b10139663d9065f78b61eb2c | [
"MIT"
] | null | null | null | explore_data.ipynb | nadafalou/CSC413FinalProject | fa19b1a31ff6bba9b10139663d9065f78b61eb2c | [
"MIT"
] | null | null | null | explore_data.ipynb | nadafalou/CSC413FinalProject | fa19b1a31ff6bba9b10139663d9065f78b61eb2c | [
"MIT"
] | null | null | null | 69.917603 | 2,717 | 0.513178 | [
[
[
"import csv\nimport numpy as np \n\nposts = []\npath = 'data/Constraint_Train.csv'\nnum_long_posts = 0\nnum_real = 0\n\nwith open(path, newline='', encoding='utf-8') as csvfile:\n spamreader = csv.reader(csvfile, delimiter=',', quotechar='\\\"')\n spamreader.__next__() # Skip header row\n for row in spamreader:\n if len(row[1]) > 280:\n num_long_posts += 1\n else: \n row[1] = row[1].replace(',', ' , ')\\\n .replace(\"'\", \" ' \")\\\n .replace('.', ' . ')\\\n .replace('!', ' ! ')\\\n .replace('?', ' ? ')\\\n .replace(';', ' ; ')\n words = row[1].split()\n num_real = num_real + 1 if row[2] == 'real' else num_real\n sentence = [word.lower() for word in words]\n posts.append(sentence)\n\nvocab = set([w for s in posts for w in s])\n\ntrain = posts[:]\n\nprint(\"First 10 posts in training set: \\n\", train[:10])\n\nfrom collections import Counter\n\nprint(\"- Number of datapoints in training set: \", len(posts))\nreal_percentage = num_real * 100 / len(posts)\nprint(\"- Split of training data between real and fake: \", real_percentage, \\\n \"% real, \", 100 - real_percentage, \"% fake\")\nlengths = [len(post) for post in train]\nprint(\"- Average post length in train:\", np.mean(lengths))\nchars = []\nfor post in train:\n length = 0\n for word in post:\n length += len(word)\n chars.append(length) \nprint(\"- Average num charachters in post in train: \", np.mean(chars))\nprint(\"- Num posts removed because they were longer than 280 charachters: \", num_long_posts)\nwords = [word for post in train for word in post]\ncnt = Counter(words)\nprint(\"- Number of unique words in train: \", len(cnt.keys()))\nprint(\"- 10 most common words in train: \", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)])\ntot = np.sum(list(cnt.values()))\nprint(\"- Total words in train:\", tot)",
"First 10 posts in training set: \n [['the', 'cdc', 'currently', 'reports', '99031', 'deaths', '.', 'in', 'general', 'the', 'discrepancies', 'in', 'death', 'counts', 'between', 'different', 'sources', 'are', 'small', 'and', 'explicable', '.', 'the', 'death', 'toll', 'stands', 'at', 'roughly', '100000', 'people', 'today', '.'], ['states', 'reported', '1121', 'deaths', 'a', 'small', 'rise', 'from', 'last', 'tuesday', '.', 'southern', 'states', 'reported', '640', 'of', 'those', 'deaths', '.', 'https://t', '.', 'co/yasgrtt4ux'], ['politically', 'correct', 'woman', '(almost)', 'uses', 'pandemic', 'as', 'excuse', 'not', 'to', 'reuse', 'plastic', 'bag', 'https://t', '.', 'co/thf8gunfpe', '#coronavirus', '#nashville'], ['#indiafightscorona:', 'we', 'have', '1524', '#covid', 'testing', 'laboratories', 'in', 'india', 'and', 'as', 'on', '25th', 'august', '2020', '36827520', 'tests', 'have', 'been', 'done', ':', '@profbhargava', 'dg', '@icmrdelhi', '#staysafe', '#indiawillwin', 'https://t', '.', 'co/yh3zxknnhz'], ['populous', 'states', 'can', 'generate', 'large', 'case', 'counts', 'but', 'if', 'you', 'look', 'at', 'the', 'new', 'cases', 'per', 'million', 'today', '9', 'smaller', 'states', 'are', 'showing', 'more', 'cases', 'per', 'million', 'than', 'california', 'or', 'texas:', 'al', 'ar', 'id', 'ks', 'ky', 'la', 'ms', 'nv', 'and', 'sc', '.', 'https://t', '.', 'co/1pyw6cwras'], ['covid', 'act', 'now', 'found', '\"on', 'average', 'each', 'person', 'in', 'illinois', 'with', 'covid-19', 'is', 'infecting', '1', '.', '11', 'other', 'people', '.', 'data', 'shows', 'that', 'the', 'infection', 'growth', 'rate', 'has', 'declined', 'over', 'time', 'this', 'factors', 'in', 'the', 'stay-at-home', 'order', 'and', 'other', 'restrictions', 'put', 'in', 'place', '.', '\"', 'https://t', '.', 'co/hhigdd24fe'], ['if', 'you', 'tested', 'positive', 'for', '#covid19', 'and', 'have', 'no', 'symptoms', 'stay', 'home', 'and', 'away', 'from', 'other', 'people', '.', 'learn', 'more', 'about', 'cdc’s', 'recommendations', 'about', 'when', 'you', 'can', 'be', 'around', 'others', 'after', 'covid-19', 'infection:', 'https://t', '.', 'co/z5kkxpqkyb', '.', 'https://t', '.', 'co/9pamy0rxaf'], ['obama', 'calls', 'trump’s', 'coronavirus', 'response', 'a', 'chaotic', 'disaster', 'https://t', '.', 'co/dedqzehasb'], ['?', '?', '?', 'clearly', ',', 'the', 'obama', 'administration', 'did', 'not', 'leave', 'any', 'kind', 'of', 'game', 'plan', 'for', 'something', 'like', 'this', '.', '?', '?', '�'], ['retraction—hydroxychloroquine', 'or', 'chloroquine', 'with', 'or', 'without', 'a', 'macrolide', 'for', 'treatment', 'of', 'covid-19:', 'a', 'multinational', 'registry', 'analysis', '-', 'the', 'lancet', 'https://t', '.', 'co/l5v2x6g9or']]\n- Number of datapoints in training set: 5604\n- Split of training data between real and fake: 49.10778015703069 % real, 50.89221984296931 % fake\n- Average post length in train: 27.63508208422555\n- Average num charachters in post in train: 138.05442541042112\n- Num posts removed because they were longer than 280 charachters: 816\n- Number of unique words in train: 19395\n- 10 most common words in train: [('.', 7.03), ('the', 3.49), ('of', 2.24), ('https://t', 2.18), ('to', 2.07), ('in', 1.94), ('a', 1.57), ('and', 1.41), ('is', 1.07), ('for', 0.92)]\n- Total words in train: 154867\n"
],
[
"import csv\nimport numpy as np \n\nposts = []\npath = 'data/Constraint_Val.csv'\nnum_long_posts = 0\nnum_real = 0\n\nwith open(path, newline='', encoding='utf-8') as csvfile:\n spamreader = csv.reader(csvfile, delimiter=',', quotechar='\\\"')\n spamreader.__next__() # Skip header row\n for row in spamreader:\n if len(row[1]) > 280:\n num_long_posts += 1\n else: \n row[1] = row[1].replace(',', ' , ')\\\n .replace(\"'\", \" ' \")\\\n .replace('.', ' . ')\\\n .replace('!', ' ! ')\\\n .replace('?', ' ? ')\\\n .replace(';', ' ; ')\n words = row[1].split()\n num_real = num_real + 1 if row[2] == 'real' else num_real\n sentence = [word.lower() for word in words]\n posts.append(sentence)\n\nvocab = set([w for s in posts for w in s])\n\nval = posts[:]\n\nprint(\"First 10 posts in validation set: \\n\", val[:10])\n\nfrom collections import Counter\n\nprint(\"- Number of datapoints in validation set: \", len(posts))\nreal_percentage = num_real * 100 / len(posts)\nprint(\"- Split of training data between real and fake: \", real_percentage, \\\n \"% real, \", 100 - real_percentage, \"% fake\")\nlengths = [len(post) for post in val]\nprint(\"- Average post length in val:\", np.mean(lengths))\nchars = []\nfor post in val:\n length = 0\n for word in post:\n length += len(word)\n chars.append(length) \nprint(\"- Average num charachters in post in val: \", np.mean(chars))\nprint(\"- Num posts removed because they were longer than 280 charachters: \", num_long_posts)\nwords = [word for post in val for word in post]\ncnt = Counter(words)\nprint(\"- Number of unique words in val: \", len(cnt.keys()))\nprint(\"- 10 most common words in val: \", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)])\ntot = np.sum(list(cnt.values()))\nprint(\"- Total words in val:\", tot)",
"First 10 posts in validation set: \n [['chinese', 'converting', 'to', 'islam', 'after', 'realising', 'that', 'no', 'muslim', 'was', 'affected', 'by', '#coronavirus', '#covd19', 'in', 'the', 'country'], ['11', 'out', 'of', '13', 'people', '(from', 'the', 'diamond', 'princess', 'cruise', 'ship)', 'who', 'had', 'intially', 'tested', 'negative', 'in', 'tests', 'in', 'japan', 'were', 'later', 'confirmed', 'to', 'be', 'positive', 'in', 'the', 'united', 'states', '.'], ['covid-19', 'is', 'caused', 'by', 'a', 'bacterium', ',', 'not', 'virus', 'and', 'can', 'be', 'treated', 'with', 'aspirin'], ['mike', 'pence', 'in', 'rnc', 'speech', 'praises', 'donald', 'trump’s', 'covid-19', '“seamless”', 'partnership', 'with', 'governors', 'and', 'leaves', 'out', 'the', 'president', \"'\", 's', 'state', 'feuds:', 'https://t', '.', 'co/qj6hsewtgb', '#rnc2020', 'https://t', '.', 'co/ofoerzdfyy'], ['6/10', 'sky', \"'\", 's', '@edconwaysky', 'explains', 'the', 'latest', '#covid19', 'data', 'and', 'government', 'announcement', '.', 'get', 'more', 'on', 'the', '#coronavirus', 'data', 'here👇', 'https://t', '.', 'co/jvgzlsbfjh', 'https://t', '.', 'co/pygskxesbg'], ['no', 'one', 'can', 'leave', 'managed', 'isolation', 'for', 'any', 'reason', 'without', 'returning', 'a', 'negative', 'test', '.', 'if', 'they', 'refuse', 'a', 'test', 'they', 'can', 'then', 'be', 'held', 'for', 'a', 'period', 'of', 'up', 'to', '28', 'days', '.', '\\u2063', '\\u2063', 'on', 'june', 'the', '16th', 'exemptions', 'on', 'compassionate', 'grounds', 'have', 'been', 'suspended', '.', '\\u2063', '\\u2063'], ['#indiafightscorona', 'india', 'has', 'one', 'of', 'the', 'lowest', '#covid19', 'mortality', 'globally', 'with', 'less', 'than', '2%', 'case', 'fatality', 'rate', '.', 'as', 'a', 'result', 'of', 'supervised', 'home', 'isolation', '&', ';', 'effective', 'clinical', 'treatment', 'many', 'states/uts', 'have', 'cfr', 'lower', 'than', 'the', 'national', 'average', '.', 'https://t', '.', 'co/qlik8ypp7e'], ['rt', '@who:', '#covid19', 'transmission', 'occurs', 'primarily', 'through', 'direct', 'indirect', 'or', 'close', 'contact', 'with', 'infected', 'people', 'through', 'their', 'saliva', 'and', 'res…'], ['news', 'and', 'media', 'outlet', 'abp', 'majha', 'on', 'the', 'basis', 'of', 'an', 'internal', 'memo', 'of', 'south', 'central', 'railway', 'reported', 'that', 'a', 'special', 'train', 'has', 'been', 'announced', 'to', 'take', 'the', 'stranded', 'migrant', 'workers', 'home', '.'], ['?', '?', '?', 'church', 'services', 'can', '?', '?', '?', 't', 'resume', 'until', 'we', '?', '?', '?', 're', 'all', 'vaccinated', ',', 'says', 'bill', 'gates', '.', '?', '?', '�']]\n- Number of datapoints in validation set: 1873\n- Split of training data between real and fake: 49.386011745862255 % real, 50.613988254137745 % fake\n- Average post length in val: 27.6903363587827\n- Average num charachters in post in val: 137.95835557928456\n- Num posts removed because they were longer than 280 charachters: 267\n- Number of unique words in val: 9225\n- 10 most common words in val: [('.', 7.04), ('the', 3.5), ('of', 2.24), ('https://t', 2.15), ('to', 2.02), ('in', 2.01), ('a', 1.56), ('and', 1.34), ('is', 1.11), ('for', 0.92)]\n- Total words in val: 51864\n"
],
[
"import csv\nimport numpy as np \n\nposts = []\npath = 'data/english_test_with_labels.csv'\nnum_long_posts = 0\nnum_real = 0\n\nwith open(path, newline='', encoding='utf-8') as csvfile:\n spamreader = csv.reader(csvfile, delimiter=',', quotechar='\\\"')\n spamreader.__next__() # Skip header row\n for row in spamreader:\n if len(row[1]) > 280:\n num_long_posts += 1\n else: \n row[1] = row[1].replace(',', ' , ')\\\n .replace(\"'\", \" ' \")\\\n .replace('.', ' . ')\\\n .replace('!', ' ! ')\\\n .replace('?', ' ? ')\\\n .replace(';', ' ; ')\n words = row[1].split()\n num_real = num_real + 1 if row[2] == 'real' else num_real\n sentence = [word.lower() for word in words]\n posts.append(sentence)\n\nvocab = set([w for s in posts for w in s])\n\ntest = posts[:]\n\nprint(\"First 10 posts in test set: \\n\", test[:10])\n\nfrom collections import Counter\n\nprint(\"- Number of datapoints in test set: \", len(posts))\nreal_percentage = num_real * 100 / len(posts)\nprint(\"- Split of training data between real and fake: \", real_percentage, \\\n \"% real, \", 100 - real_percentage, \"% fake\")\nlengths = [len(post) for post in test]\nprint(\"- Average post length in test:\", np.mean(lengths))\nchars = []\nfor post in test:\n length = 0\n for word in post:\n length += len(word)\n chars.append(length) \nprint(\"- Average num charachters in post in test: \", np.mean(chars))\nprint(\"- Num posts removed because they were longer than 280 charachters: \", num_long_posts)\nwords = [word for post in test for word in post]\ncnt = Counter(words)\nprint(\"- Number of unique words in test: \", len(cnt.keys()))\nprint(\"- 10 most common words in test: \", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)])\ntot = np.sum(list(cnt.values()))\nprint(\"- Total words in test:\", tot)",
"First 10 posts in test set: \n [['our', 'daily', 'update', 'is', 'published', '.', 'states', 'reported', '734k', 'tests', '39k', 'new', 'cases', 'and', '532', 'deaths', '.', 'current', 'hospitalizations', 'fell', 'below', '30k', 'for', 'the', 'first', 'time', 'since', 'june', '22', '.', 'https://t', '.', 'co/wzsyme0sht'], ['alfalfa', 'is', 'the', 'only', 'cure', 'for', 'covid-19', '.'], ['president', 'trump', 'asked', 'what', 'he', 'would', 'do', 'if', 'he', 'were', 'to', 'catch', 'the', 'coronavirus', 'https://t', '.', 'co/3mewhusrzi', '#donaldtrump', '#coronavirus'], ['states', 'reported', '630', 'deaths', '.', 'we', 'are', 'still', 'seeing', 'a', 'solid', 'national', 'decline', '.', 'death', 'reporting', 'lags', 'approximately', '28', 'days', 'from', 'symptom', 'onset', 'according', 'to', 'cdc', 'models', 'that', 'consider', 'lags', 'in', 'symptoms', 'time', 'in', 'hospital', 'and', 'the', 'death', 'reporting', 'process', '.', 'https://t', '.', 'co/lbmcot3h9a'], ['this', 'is', 'the', 'sixth', 'time', 'a', 'global', 'health', 'emergency', 'has', 'been', 'declared', 'under', 'the', 'international', 'health', 'regulations', 'but', 'it', 'is', 'easily', 'the', 'most', 'severe-@drtedros', 'https://t', '.', 'co/jvkc0ptett'], ['low', '#vitamind', 'was', 'an', 'independent', 'predictor', 'of', 'worse', 'prognosis', 'in', 'patients', 'with', 'covid-19', '.', 'https://t', '.', 'co/cgd6kphn31', 'https://t', '.', 'co/chtni8k4jd'], ['a', 'common', 'question:', 'why', 'are', 'the', 'cumulative', 'outcome', 'numbers', 'smaller', 'than', 'the', 'current', 'outcome', 'numbers', '?', 'a:', 'most', 'states', 'report', 'current', 'but', 'a', 'few', 'states', 'report', 'cumulative', '.', 'they', 'are', 'apples', 'and', 'oranges', 'and', 'we', 'don', \"'\", 't', 'feel', 'comfortable', 'filling', 'in', 'state', 'cumulative', 'boxes', 'with', 'current', '#s', '.'], ['the', 'government', 'should', 'consider', 'bringing', 'in', 'any', 'new', 'national', 'lockdown', 'rules', 'over', 'christmas', 'rather', 'than', 'now', 'says', 'an', 'oxford', 'university', 'professor', 'https://t', '.', 'co/pdols6cqon'], ['two', 'interesting', 'correlations:', '1)', 'children', 'tend', 'to', 'weather', 'covid-19', 'pretty', 'well', ';', 'they', 'also', 'get', 'a', 'ton', 'of', 'vitamin', 'd', '.', '2)', 'black', 'people', 'are', 'getting', 'slammed', 'by', 'covid-19', ';', 'black', 'people', 'also', 'have', 'much', 'higher', 'instances', 'of', 'vitamin', 'd', 'deficiency', '(76%', 'vs', '40%', 'in', 'the', 'general', 'population)', '.'], ['a', 'photo', 'shows', 'a', '19-year-old', 'vaccine', 'for', 'canine', 'coronavirus', 'that', 'could', 'be', 'used', 'to', 'prevent', 'the', 'new', 'coronavirus', 'causing', 'covid-19', '.']]\n- Number of datapoints in test set: 1848\n- Split of training data between real and fake: 48.917748917748916 % real, 51.082251082251084 % fake\n- Average post length in test: 27.34469696969697\n- Average num charachters in post in test: 136.79978354978354\n- Num posts removed because they were longer than 280 charachters: 292\n- Number of unique words in test: 9101\n- 10 most common words in test: [('.', 6.97), ('the', 3.53), ('of', 2.21), ('https://t', 2.19), ('to', 1.99), ('in', 1.92), ('a', 1.65), ('and', 1.35), ('?', 1.06), ('for', 1.03)]\n- Total words in test: 50533\n"
]
]
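The three cells above differ only in the CSV path and the split name, so a shared helper avoids the triplication. The sketch below mirrors the per-split logic (280-character cutoff, punctuation splitting, lower-casing, real/fake count); the function name and return layout are my own choices, not part of the original notebook.

```python
def load_split(path):
    """Load one CSV split and return (posts, num_real, num_long_posts)."""
    posts, num_real, num_long_posts = [], 0, 0
    with open(path, newline='', encoding='utf-8') as csvfile:
        reader = csv.reader(csvfile, delimiter=',', quotechar='"')
        next(reader)                               # skip header row
        for row in reader:
            if len(row[1]) > 280:
                num_long_posts += 1
                continue
            text = row[1]
            for ch in [',', "'", '.', '!', '?', ';']:
                text = text.replace(ch, ' ' + ch + ' ')
            posts.append([word.lower() for word in text.split()])
            num_real += row[2] == 'real'
    return posts, num_real, num_long_posts

# train, n_real, n_dropped = load_split('data/Constraint_Train.csv')
```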
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0c7f22e2c35066cfd471402f9f39a1592b57ac7 | 10,732 | ipynb | Jupyter Notebook | notebooks/fit-spectrum.ipynb | adrn/ebak | 20fecda8a99345521b24498559e2b9aecd4cd4cd | [
"MIT"
] | 2 | 2016-06-20T19:39:03.000Z | 2016-08-23T07:06:14.000Z | notebooks/fit-spectrum.ipynb | adrn/ebak | 20fecda8a99345521b24498559e2b9aecd4cd4cd | [
"MIT"
] | 32 | 2016-05-24T14:09:46.000Z | 2016-09-04T18:23:31.000Z | notebooks/fit-spectrum.ipynb | adrn/ebak | 20fecda8a99345521b24498559e2b9aecd4cd4cd | [
"MIT"
] | null | null | null | 24.785219 | 187 | 0.52842 | [
[
[
"To finish, check out: http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992AJ....104.2213L&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf",
"_____no_output_____"
]
],
[
[
"# Third-party\nfrom astropy.io import ascii, fits\nimport astropy.coordinates as coord\nimport astropy.units as u\nfrom astropy.constants import c\nimport matplotlib as mpl\nimport matplotlib.pyplot as pl\nimport numpy as np\nfrom scipy.interpolate import interp1d\n\npl.style.use('apw-notebook')\n%matplotlib inline\n# pl.style.use('classic')\n# %matplotlib notebook",
"_____no_output_____"
],
[
"data_files = [\"../data/apVisit-r5-6994-56770-261.fits\", \"../data/apVisit-r5-6994-56794-177.fits\"]\nmodel_file = \"../data/apStar-r5-2M00004994+1621552.fits\"",
"_____no_output_____"
],
[
"min_wvln = 15329\nmax_wvln = 15359",
"_____no_output_____"
],
[
"def load_file(filename, chip):\n hdulist1 = fits.open(filename)\n wvln = hdulist1[4].data[chip]\n ix = (wvln >= min_wvln) & (wvln <= max_wvln)\n \n wvln = wvln[ix]\n flux = hdulist1[1].data[chip,ix]\n flux_err = hdulist1[2].data[chip,ix]\n \n return {'wvln': wvln, 'flux': flux, 'flux_err': flux_err}\n \ndef load_model_file(filename):\n hdulist1 = fits.open(filename)\n flux = hdulist1[1].data[0]\n flux_err = hdulist1[2].data[0]\n wvln = 10**(hdulist1[0].header['CRVAL1'] + np.arange(flux.size) * hdulist1[0].header['CDELT1'])\n \n# ix = (wvln >= min_wvln) & (wvln <= max_wvln)\n ix = (wvln < 15750) & (wvln > 15150) # HACK: magic numbers\n return {'wvln': wvln[ix], 'flux': flux[ix], 'flux_err': flux_err[ix]}",
"_____no_output_____"
],
[
"d = load_file(fn, chip=2)\nd['wvln'].shape",
"_____no_output_____"
],
[
"chip = 2\n\nfig,ax = pl.subplots(1,1,figsize=(12,6))\n\nfor fn in data_files:\n d = load_file(fn, chip=chip)\n ax.plot(d['wvln'], d['flux'], drawstyle='steps', marker=None)\n \nref_spec = load_model_file(model_file)\nax.plot(ref_spec['wvln'], 3.2*ref_spec['flux'], drawstyle='steps', marker=None, lw=2.) # HACK: scale up\n\n# _d = 175\n# ax.set_xlim(15150.+_d, 15175.+_d)\n# ax.set_ylim(10000, 20000)",
"_____no_output_____"
],
[
"all_spectra = [load_file(f, chip=2) for f in files]",
"_____no_output_____"
],
[
"ref_spec['interp'] = interp1d(ref_spec['wvln'], ref_spec['flux'], kind='cubic', bounds_error=False)",
"_____no_output_____"
],
[
"def get_design_matrix(data, ref_spec, v1, v2):\n \"\"\"\n Note: Positive velocity is a redshift.\n \"\"\"\n X = np.ones((3, data['wvln'].shape[0]))\n X[1] = ref_spec['interp'](data['wvln'] * (1 + v1/c)) # this is only good to first order in (v/c)\n X[2] = ref_spec['interp'](data['wvln'] * (1 + v2/c))\n return X",
"_____no_output_____"
],
[
"def get_optimal_chisq(data, ref_spec, v1, v2):\n X = get_design_matrix(data, ref_spec, v1, v2)\n return np.linalg.solve( X.dot(X.T), X.dot(data['flux']) )",
"_____no_output_____"
],
[
"spec_i = 1\nv1 = 35 * u.km/u.s\nv2 = -5 * u.km/u.s\nX = get_design_matrix(all_spectra[spec_i], ref_spec, v1, v2)\nopt_pars = get_optimal_chisq(all_spectra[spec_i], ref_spec, v1, v2)\nopt_pars",
"_____no_output_____"
],
[
"def make_synthetic_spectrum(X, pars):\n return X.T.dot(pars)\n\ndef compute_chisq(data, X, opt_pars):\n synth_spec = make_synthetic_spectrum(X, opt_pars)\n return -np.sum((synth_spec - data['flux'])**2)",
"_____no_output_____"
],
[
"# opt_pars = np.array([1.1E+4, 0.5, 0.5])\nsynth_spec = make_synthetic_spectrum(X, opt_pars)",
"_____no_output_____"
],
[
"pl.plot(all_spectra[spec_i]['wvln'], all_spectra[spec_i]['flux'], marker=None, drawstyle='steps')\npl.plot(all_spectra[spec_i]['wvln'], synth_spec, marker=None, drawstyle='steps')",
"_____no_output_____"
],
[
"_v1_grid = np.linspace(25, 45, 32)\n_v2_grid = np.linspace(-15, 5, 32)\nshp = (_v1_grid.size, _v2_grid.size)\nv_grid = np.vstack(map(np.ravel, np.meshgrid(_v1_grid, _v2_grid))).T\nv_grid.shape",
"_____no_output_____"
],
[
"chisq = np.zeros(v_grid.shape[0])\nfor i in range(v_grid.shape[0]):\n v1,v2 = v_grid[i]\n opt_pars = get_optimal_chisq(all_spectra[spec_i], ref_spec, \n v1*u.km/u.s, v2*u.km/u.s)\n chisq[i] = compute_chisq(all_spectra[spec_i], X, opt_pars)",
"_____no_output_____"
],
[
"fig,ax = pl.subplots(1,1,figsize=(9,8))\n\ncb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), \n chisq.reshape(shp), cmap='magma')\n\nfig.colorbar(cb)",
"_____no_output_____"
],
[
"fig,ax = pl.subplots(1,1,figsize=(9,8))\n\ncb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), \n np.exp(chisq-chisq.max()).reshape(shp), cmap='magma')\n\nfig.colorbar(cb)",
"_____no_output_____"
],
[
"fig,ax = pl.subplots(1,1,figsize=(9,8))\n\ncb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), \n chisq.reshape(shp), cmap='magma')\n\nfig.colorbar(cb)",
"_____no_output_____"
],
[
"fig,ax = pl.subplots(1,1,figsize=(9,8))\n\ncb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), \n np.exp(chisq-chisq.max()).reshape(shp), cmap='magma')\n\nfig.colorbar(cb)",
"_____no_output_____"
]
],
[
[
"---\n\ntry using levmar to optimize",
"_____no_output_____"
]
],
[
[
"from scipy.optimize import leastsq",
"_____no_output_____"
],
[
"def errfunc(pars, data_spec, ref_spec):\n v1,v2,a,b,c = pars\n X = get_design_matrix(data_spec, ref_spec, v1*u.km/u.s, v2*u.km/u.s)\n synth_spec = make_synthetic_spectrum(X, [a,b,c])\n return (synth_spec - data_spec['flux'])",
"_____no_output_____"
],
[
"levmar_opt_pars,ier = leastsq(errfunc, x0=[35,-5]+opt_pars.tolist(), args=(all_spectra[0], ref_spec))",
"_____no_output_____"
],
[
"levmar_opt_pars",
"_____no_output_____"
],
[
"data_spec = all_spectra[0]\nX = get_design_matrix(data_spec, ref_spec, levmar_opt_pars[0]*u.km/u.s, levmar_opt_pars[1]*u.km/u.s)\nsynth_spec = make_synthetic_spectrum(X, levmar_opt_pars[2:])\npl.plot(data_spec['wvln'], data_spec['flux'], marker=None, drawstyle='steps')\npl.plot(data_spec['wvln'], synth_spec, marker=None, drawstyle='steps')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0c7f332155f9f86661ea7a3e81c4d319064f770 | 30,984 | ipynb | Jupyter Notebook | 4_Applied ML with Python/Course/Week-4/Check_Last_Logreg.ipynb | syedmeesamali/CourseraPlus | 0e729d10938ecb55fde69433c6b02cb02b8e6d10 | [
"MIT"
] | null | null | null | 4_Applied ML with Python/Course/Week-4/Check_Last_Logreg.ipynb | syedmeesamali/CourseraPlus | 0e729d10938ecb55fde69433c6b02cb02b8e6d10 | [
"MIT"
] | null | null | null | 4_Applied ML with Python/Course/Week-4/Check_Last_Logreg.ipynb | syedmeesamali/CourseraPlus | 0e729d10938ecb55fde69433c6b02cb02b8e6d10 | [
"MIT"
] | null | null | null | 84.887671 | 22,946 | 0.851762 | [
[
[
"# Final Checks for model",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_csv(f\"D:/Docs/train_1.csv\", encoding='mac_roman')",
"_____no_output_____"
]
],
[
[
"## 1. Use ONLY compliance available columns",
"_____no_output_____"
]
],
[
[
"df = df[df['compliance'].notna()]\ndf.shape",
"_____no_output_____"
],
[
"df['fine_amount'] = df['fine_amount'].fillna(0)\ndf.shape",
"_____no_output_____"
],
[
"df['compliance'].value_counts()",
"_____no_output_____"
]
],
[
[
"## 2. Build the actual model",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nfeature_names_tickets = ['ticket_id', 'fine_amount']\nX_tickets = df[feature_names_tickets]\ny_tickets = df['compliance']\n\n#Test size is chosen to get X_test value of 61,001 as the same is provided test data\nX_train, X_test, y_train, y_test = train_test_split(X_tickets, y_tickets, test_size = 0.38153900, random_state = 0)\nclf = LogisticRegression(C=100).fit(X_train, y_train)\nprint(X_train.shape)\nprint(X_test.shape)",
"(98879, 2)\n(61001, 2)\n"
]
],
[
[
"## 3. Apply GridSearchCV",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV\nparam_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] }\ngrid_search = GridSearchCV(estimator = clf, param_grid = param_grid,\n scoring = 'accuracy', cv = 5, verbose=0)\ngrid_search.fit(X_train, y_train)\nprint('Best coring:\\n Best C: {}'.format(grid_search.best_score_))",
"Best coring:\n Best C: 0.9269713488926803\n"
],
[
"#Fit based on new model now\nclf_best = LogisticRegression(C = 0.92).fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"## 7. Check ROC / AUC",
"_____no_output_____"
]
],
[
[
"# First we need to load our test dataset\ndf1 = pd.read_csv(f\"D:/Docs/test_1.csv\", encoding='mac_roman')\ndf1['fine_amount'] = df1['fine_amount'].fillna(0)\ndf1.shape",
"_____no_output_____"
],
[
"feature_names_test = ['ticket_id', 'fine_amount']\nX_test_new = df1[feature_names_test]\nprint(X_test.shape)\nprint(X_test_new.shape)",
"(61001, 2)\n(61001, 2)\n"
],
[
"from sklearn.metrics import roc_curve, auc\ny_score_lr = clf_best.decision_function(X_test_new)\nfpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)\nroc_auc_lr = auc(fpr_lr, tpr_lr)\nplt.figure()\nplt.xlim([-0.01, 1.00])\nplt.ylim([-0.01, 1.01])\nplt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))\nplt.xlabel('False Positive Rate', fontsize=16)\nplt.ylabel('True Positive Rate', fontsize=16)\nplt.title('ROC curve', fontsize=16)\nplt.legend(loc='lower right', fontsize=13)\nplt.plot([0, 1], [0, 1], color='red', lw=3, linestyle='--')\nplt.axes().set_aspect('equal')\nplt.show()",
"<ipython-input-11-9dbb75066776>:14: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.\n plt.axes().set_aspect('equal')\n"
],
[
"score = clf.score(X_test_new, y_test)\nprint(score)",
"0.9282634710906379\n"
],
[
"predictions = clf.predict(X_test_new)\npredictions.shape",
"_____no_output_____"
],
[
"print(predictions.sum())",
"0.0\n"
],
[
"pred_values = pd.DataFrame(predictions, columns='Pred') \npred_values.to_csv('result_pred.csv')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c8017320e6b432262b8102c3f85bea9bc16a0c | 23,958 | ipynb | Jupyter Notebook | tutorials/nlp/02_NLP_Tokenizers.ipynb | shahin-trunk/NeMo | a10ac29a6deb05bcfc672ad287f4a8279c1f9289 | [
"Apache-2.0"
] | null | null | null | tutorials/nlp/02_NLP_Tokenizers.ipynb | shahin-trunk/NeMo | a10ac29a6deb05bcfc672ad287f4a8279c1f9289 | [
"Apache-2.0"
] | null | null | null | tutorials/nlp/02_NLP_Tokenizers.ipynb | shahin-trunk/NeMo | a10ac29a6deb05bcfc672ad287f4a8279c1f9289 | [
"Apache-2.0"
] | null | null | null | 40.744898 | 578 | 0.456758 | [
[
[
"BRANCH = 'main'",
"_____no_output_____"
],
[
"\"\"\"\nYou can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n\nInstructions for setting up Colab are as follows:\n1. Open a new Python 3 notebook.\n2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n4. Run this cell to set up dependencies.\n\"\"\"\n# If you're using Google Colab and not running locally, run this cell\n\n# install NeMo\nBRANCH = 'main'\n!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp]",
"_____no_output_____"
],
[
"import os\nimport wget\nfrom nemo.collections import nlp as nemo_nlp\nfrom nemo.collections import common as nemo_common\nfrom omegaconf import OmegaConf",
"_____no_output_____"
]
],
[
[
"# Tokenizers Background\n\nFor Natural Language Processing, tokenization is an essential part of data preprocessing. It is the process of splitting a string into a list of tokens. One can think of token as parts like a word is a token in a sentence.\nDepending on the application, different tokenizers are more suitable than others. \n\n\nFor example, a WordTokenizer that splits the string on any whitespace, would tokenize the following string \n\n\"My first program, Hello World.\" -> [\"My\", \"first\", \"program,\", \"Hello\", \"World.\"]\n\nTo turn the tokens into numerical model input, the standard method is to use a vocabulary and one-hot vectors for [word embeddings](https://en.wikipedia.org/wiki/Word_embedding). If a token appears in the vocabulary, its index is returned, if not the index of the unknown token is returned to mitigate out-of-vocabulary (OOV).\n\n\n",
"_____no_output_____"
],
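As referenced above, the whitespace splitting and the vocabulary lookup can be illustrated in a few lines of plain Python, independent of NeMo.

```python
# Minimal illustration of whitespace tokenization and vocabulary lookup.
sentence = "My first program, Hello World."
tokens = sentence.split()        # ['My', 'first', 'program,', 'Hello', 'World.']

# Toy vocabulary; anything unseen maps to the unknown token's index.
vocab = {"<UNK>": 0, "My": 1, "first": 2, "Hello": 3}
ids = [vocab.get(token, vocab["<UNK>"]) for token in tokens]
print(tokens)
print(ids)                       # [1, 2, 0, 3, 0]
```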
[
"# Tokenizers in NeMo\n\nIn NeMo, we support the most used tokenization algorithms. We offer a wrapper around [Hugging Faces's AutoTokenizer](https://huggingface.co/transformers/model_doc/auto.html#autotokenizer) - a factory class that gives access to all Hugging Face tokenizers. This includes particularly all BERT-like model tokenizers, such as BertTokenizer, AlbertTokenizer, RobertaTokenizer, GPT2Tokenizer. Apart from that, we also support other tokenizers such as WordTokenizer, CharTokenizer, and [Google's SentencePieceTokenizer](https://github.com/google/sentencepiece). \n\n\nWe make sure that all tokenizers are compatible with BERT-like models, e.g. BERT, Roberta, Albert, and Megatron. For that, we provide a high-level user API `get_tokenizer()`, which allows the user to instantiate a tokenizer model with only four input arguments: \n* `tokenizer_name: str`\n* `tokenizer_model: Optional[str] = None`\n* `vocab_file: Optional[str] = None`\n* `special_tokens: Optional[Dict[str, str]] = None`\n\nHugging Face and Megatron tokenizers (which uses Hugging Face underneath) can be automatically instantiated by only `tokenizer_name`, which downloads the corresponding `vocab_file` from the internet. \n\nFor SentencePieceTokenizer, WordTokenizer, and CharTokenizers `tokenizer_model` or/and `vocab_file` can be generated offline in advance using [`scripts/tokenizers/process_asr_text_tokenizer.py`](https://github.com/NVIDIA/NeMo/blob/stable/scripts/tokenizers/process_asr_text_tokenizer.py)\n\nThe tokenizers in NeMo are designed to be used interchangeably, especially when\nused in combination with a BERT-based model.\n\nLet's take a look at the list of available tokenizers:",
"_____no_output_____"
]
],
[
[
"nemo_nlp.modules.get_tokenizer_list()",
"_____no_output_____"
]
],
[
[
"# Hugging Face AutoTokenizer",
"_____no_output_____"
]
],
[
[
"# instantiate tokenizer wrapper using pretrained model name only\ntokenizer1 = nemo_nlp.modules.get_tokenizer(tokenizer_name=\"bert-base-cased\")\n\n# the wrapper has a reference to the original HuggingFace tokenizer\nprint(tokenizer1.tokenizer)",
"_____no_output_____"
],
[
"# check vocabulary (this can be very long)\nprint(tokenizer1.tokenizer.vocab)",
"_____no_output_____"
],
[
"# show all special tokens if it has any\nprint(tokenizer1.tokenizer.all_special_tokens)",
"_____no_output_____"
],
[
"# instantiate tokenizer using custom vocabulary\nvocab_file = \"myvocab.txt\"\nvocab = [\"he\", \"llo\", \"world\"]\nwith open(vocab_file, 'w', encoding='utf-8') as vocab_fp:\n vocab_fp.write(\"\\n\".join(vocab))",
"_____no_output_____"
],
[
"tokenizer2 = nemo_nlp.modules.get_tokenizer(tokenizer_name=\"bert-base-cased\", vocab_file=vocab_file)",
"_____no_output_____"
],
[
"# Since we did not overwrite special tokens they should be the same as before\nprint(tokenizer1.tokenizer.all_special_tokens == tokenizer2.tokenizer.all_special_tokens )",
"_____no_output_____"
]
],
[
[
"## Adding Special tokens\n\nWe do not recommend overwriting special tokens for Hugging Face pretrained models, \nsince these are the commonly used default values. \n\nIf a user still wants to overwrite the special tokens, specify some of the following keys:",
"_____no_output_____"
]
],
[
[
"special_tokens_dict = {\"unk_token\": \"<UNK>\", \n \"sep_token\": \"<SEP>\", \n \"pad_token\": \"<PAD>\", \n \"bos_token\": \"<CLS>\", \n \"mask_token\": \"<MASK>\",\n \"eos_token\": \"<SEP>\",\n \"cls_token\": \"<CLS>\"}\ntokenizer3 = nemo_nlp.modules.get_tokenizer(tokenizer_name=\"bert-base-cased\",\n vocab_file=vocab_file,\n special_tokens=special_tokens_dict)\n\n# print newly set special tokens\nprint(tokenizer3.tokenizer.all_special_tokens)\n# the special tokens should be different from the previous special tokens\nprint(tokenizer3.tokenizer.all_special_tokens != tokenizer1.tokenizer.all_special_tokens )",
"_____no_output_____"
]
],
[
[
"Notice, that if you specify tokens that were not previously included in the tokenizer's vocabulary file, new tokens will be added to the vocabulary file. You will see a message like this:",
"_____no_output_____"
],
[
"`['<MASK>', '<CLS>', '<SEP>', '<PAD>', '<SEP>', '<CLS>', '<UNK>'] \n will be added to the vocabulary.\n Please resize your model accordingly`",
"_____no_output_____"
]
],
[
[
"# A safer way to add special tokens is the following:\n\n# define your model\npretrained_model_name = 'bert-base-uncased'\nconfig = {\"language_model\": {\"pretrained_model_name\": pretrained_model_name}, \"tokenizer\": {}}\nomega_conf = OmegaConf.create(config)\nmodel = nemo_nlp.modules.get_lm_model(cfg=omega_conf)\n\n# define pretrained tokenizer\ntokenizer_default = nemo_nlp.modules.get_tokenizer(tokenizer_name=pretrained_model_name)",
"_____no_output_____"
],
[
"tokenizer_default.text_to_tokens('<MY_NEW_TOKEN> and another word')",
"_____no_output_____"
]
],
[
[
"As you can see in the above, the tokenizer splits `<MY_NEW_TOKEN>` into subtokens. Let's add this to the special tokens to make sure the tokenizer does not split this into subtokens.",
"_____no_output_____"
]
],
[
[
"special_tokens = {'bos_token': '<BOS>',\n 'cls_token': '<CSL>',\n 'additional_special_tokens': ['<MY_NEW_TOKEN>', '<ANOTHER_TOKEN>']}\ntokenizer_default.add_special_tokens(special_tokens_dict=special_tokens)\n\n# resize your model so that the embeddings for newly added tokens are updated during training/finetuning\nmodel.resize_token_embeddings(tokenizer_default.vocab_size)\n\n# let's make sure the tokenizer doesn't split our special tokens into subtokens\ntokenizer_default.text_to_tokens('<MY_NEW_TOKEN> and another word')",
"_____no_output_____"
]
],
[
[
"Now, the model doesn't break down our special token into the subtokens.",
"_____no_output_____"
],
[
"## Megatron model tokenizer",
"_____no_output_____"
]
],
[
[
"# Megatron tokenizers are instances of the Hugging Face BertTokenizer. \ntokenizer4 = nemo_nlp.modules.get_tokenizer(tokenizer_name=\"megatron-bert-cased\")",
"_____no_output_____"
]
],
[
[
"# Train custom tokenizer model and vocabulary from text file ",
"_____no_output_____"
],
[
"We use the [`scripts/tokenizers/process_asr_text_tokenizer.py`](https://github.com/NVIDIA/NeMo/blob/stable/scripts/tokenizers/process_asr_text_tokenizer.py) script to create a custom tokenizer model with its own vocabulary from an input file",
"_____no_output_____"
]
],
[
[
"# download tokenizer script\nscript_file = \"process_asr_text_tokenizer.py\"\n\nif not os.path.exists(script_file):\n print('Downloading script file...')\n wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/scripts/tokenizers/process_asr_text_tokenizer.py')\nelse:\n print ('Script already exists')",
"_____no_output_____"
],
[
"# Let's prepare some small text data for the tokenizer\ndata_text = \"NeMo is a toolkit for creating Conversational AI applications. \\\nNeMo toolkit makes it possible for researchers to easily compose complex neural network architectures \\\nfor conversational AI using reusable components - Neural Modules. \\\nNeural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. \\\nSuch modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations. \\\nThe toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), \\\nnatural language processing (NLP) and text synthesis (TTS). \\\nBuilt for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.\"",
"_____no_output_____"
],
[
"# Write the text data into a file\ndata_file=\"data.txt\"\n\nwith open(data_file, 'w') as data_fp:\n data_fp.write(data_text)",
"_____no_output_____"
],
[
"# Some additional parameters for the tokenizer\n# To tokenize at unigram, char or word boundary instead of using bpe, change --spe_type accordingly. \n# More details see https://github.com/google/sentencepiece#train-sentencepiece-model\n\ntokenizer_spe_type = \"bpe\" # <-- Can be `bpe`, `unigram`, `word` or `char`\nvocab_size = 32",
"_____no_output_____"
],
[
"! python process_asr_text_tokenizer.py --data_file=$data_file --data_root=. --vocab_size=$vocab_size --tokenizer=spe --spe_type=$tokenizer_spe_type",
"_____no_output_____"
],
[
"# See created tokenizer model and vocabulary\nspe_model_dir=f\"tokenizer_spe_{tokenizer_spe_type}_v{vocab_size}\"\n! ls $spe_model_dir",
"_____no_output_____"
]
],
[
[
"# Use custom tokenizer for data preprocessing\n## Example: SentencePiece for BPE",
"_____no_output_____"
]
],
[
[
"# initialize tokenizer with created tokenizer model, which inherently includes the vocabulary and specify optional special tokens\ntokenizer_spe = nemo_nlp.modules.get_tokenizer(tokenizer_name=\"sentencepiece\", tokenizer_model=spe_model_dir+\"/tokenizer.model\", special_tokens=special_tokens_dict)\n\n# specified special tokens are added to the vocabuary\nprint(tokenizer_spe.vocab_size)",
"_____no_output_____"
]
],
[
[
"# Using any tokenizer to tokenize text into BERT compatible input\n",
"_____no_output_____"
]
],
[
[
"text=\"hello world\"\n\n# create tokens\ntokenized = [tokenizer_spe.bos_token] + tokenizer_spe.text_to_tokens(text) + [tokenizer_spe.eos_token]\nprint(tokenized)\n\n# turn token into input_ids for a neural model, such as BERTModule\n\nprint(tokenizer_spe.tokens_to_ids(tokenized))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c80505585a2311d9338a8bc26589ccff26ade3 | 140,035 | ipynb | Jupyter Notebook | docs/tutorials/t2ramsey_tutorial.ipynb | blakejohnson/qiskit-experiments | 2ecffa8f7d6aa6e8e6c1fc0a1c30f7776c49c493 | [
"Apache-2.0"
] | null | null | null | docs/tutorials/t2ramsey_tutorial.ipynb | blakejohnson/qiskit-experiments | 2ecffa8f7d6aa6e8e6c1fc0a1c30f7776c49c493 | [
"Apache-2.0"
] | null | null | null | docs/tutorials/t2ramsey_tutorial.ipynb | blakejohnson/qiskit-experiments | 2ecffa8f7d6aa6e8e6c1fc0a1c30f7776c49c493 | [
"Apache-2.0"
] | null | null | null | 309.128035 | 33,176 | 0.922598 | [
[
[
"# T<sub>2</sub> Ramsey Experiment",
"_____no_output_____"
],
[
"This experiment serves as one of the series of experiments used to characterize a single qubit. Its purpose is to determine two of the qubit's properties: *Ramsey* or *detuning frequency* and $T_2\\ast$. The rough frequency of the qubit was already determined previously. Here, we would like to measure the *detuning*, that is the difference between the qubit's precise frequency and the frequency of the rotation pulses (based on the rough frequency). This part of the experiment is called a *Ramsey Experiment*. $T_2\\ast$ represents the rate of decay toward a mixed state, when the qubit is initialized to the |+⟩ state.",
"_____no_output_____"
]
],
[
[
"import qiskit\nfrom qiskit_experiments.library import T2Ramsey",
"_____no_output_____"
]
],
[
[
"The circuit used for the experiment comprises the following:\n\n 1. Hadamard gate\n 2. delay\n 3. p (phase) gate that rotates the qubit in the x-y plane \n 4. Hadamard gate\n 5. measurement\n\nDuring the delay time, we expect the qubit to precess about the z-axis. If the p gate and the precession offset each other perfectly, then the qubit will arrive at the |0⟩ state (after the second Hadamard gate). By varying the extension of the delays, we get a series of oscillations of the qubit state between the |0⟩ and |1⟩ states. We can draw the graph of the resulting function, and can analytically extract the desired values.",
"_____no_output_____"
]
],
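[
 [
  "The sketch below builds a single one of these circuits by hand (the 1-microsecond delay and the zero phase are arbitrary example values); the `T2Ramsey` experiment class used next constructs the real set of circuits for you:",
  "_____no_output_____"
 ]
],
[
 [
  "# Hand-built sketch of one Ramsey circuit; for illustration only.\n# The T2Ramsey experiment class below constructs one such circuit per delay value.\nfrom qiskit import QuantumCircuit\n\nqc = QuantumCircuit(1, 1)\nqc.h(0)                    # Hadamard: prepare the |+> state\nqc.delay(1, 0, unit='us')  # free evolution (example value: 1 microsecond)\nqc.p(0, 0)                 # phase gate rotating in the x-y plane (example value: 0)\nqc.h(0)                    # second Hadamard\nqc.measure(0, 0)\nprint(qc)",
  "_____no_output_____"
 ]
],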
[
[
"# set the computation units to microseconds\nunit = 'us' #microseconds\nqubit = 0\n# set the desired delays\ndelays = list(range(1, 150, 2))",
"_____no_output_____"
],
[
"# Create a T2Ramsey experiment. Print the first circuit as an example\nexp1 = T2Ramsey(qubit, delays, unit=unit)\nprint(exp1.circuits()[0])",
" ┌───┐┌──────────────┐┌──────┐ ░ ┌───┐ ░ ┌─┐\nq_0: ┤ H ├┤ Delay(1[us]) ├┤ P(0) ├─░─┤ H ├─░─┤M├\n └───┘└──────────────┘└──────┘ ░ └───┘ ░ └╥┘\nc: 1/═════════════════════════════════════════╩═\n 0 \n"
]
],
[
[
"We run the experiment on a simple, simulated backend, created specifically for this experiment's tutorial.",
"_____no_output_____"
]
],
[
[
"from qiskit_experiments.test.t2ramsey_backend import T2RamseyBackend\n# FakeJob is a wrapper for the backend, to give it the form of a job\nfrom qiskit_experiments.test.utils import FakeJob\nimport qiskit_experiments.matplotlib\nfrom qiskit_experiments.matplotlib import pyplot, requires_matplotlib\nfrom qiskit_experiments.matplotlib import HAS_MATPLOTLIB\n\nconversion_factor = 1E-6\n# The behavior of the backend is determined by the following parameters\nbackend = T2RamseyBackend(\n p0={\"a_guess\":[0.5], \"t2ramsey\":[80.0], \"f_guess\":[0.02], \"phi_guess\":[0.0],\n \"b_guess\": [0.5]},\n initial_prob_plus=[0.0],\n readout0to1=[0.02],\n readout1to0=[0.02],\n conversion_factor=conversion_factor,\n )\n",
"_____no_output_____"
]
],
[
[
"The resulting graph will have the form:\n$ f(t) = a^{-t/T_2*} \\cdot cos(2 \\pi f t + \\phi) + b $\nwhere *t* is the delay, $T_2*$ is the decay factor, and *f* is the detuning frequency.\n`conversion_factor` is a scaling factor that depends on the measurement units used. It is 1E-6 here, because the unit is microseconds.",
"_____no_output_____"
]
],
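[
 [
  "To make the shape of this curve concrete, the short sketch below simply evaluates the fit function with the same parameter values that were passed to the simulated backend above (a = 0.5, T2* = 80 microseconds, f = 0.02 cycles per microsecond, phi = 0, b = 0.5). This is only an illustration of the model curve, not part of the analysis.",
  "_____no_output_____"
 ]
],
[
 [
  "import numpy as np\nimport matplotlib.pyplot as plt\n\n# Illustration only: evaluate f(t) = a*exp(-t/T2*)*cos(2*pi*f*t + phi) + b\n# with the same values given to the simulated backend above (t and T2* in microseconds).\na, t2star, f, phi, b = 0.5, 80.0, 0.02, 0.0, 0.5\nt = np.array(delays, dtype=float)  # `delays` was defined above, in microseconds\nmodel_curve = a * np.exp(-t / t2star) * np.cos(2 * np.pi * f * t + phi) + b\nplt.plot(t, model_curve, '.-')\nplt.xlabel('delay [microseconds]')\nplt.ylabel('model value f(t)')\nplt.show()",
  "_____no_output_____"
 ]
],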
[
[
"exp1.set_analysis_options(user_p0=None, plot=True)\nexpdata1 = exp1.run(backend=backend, shots=2000)\nexpdata1.block_for_results() # Wait for job/analysis to finish.\n# Display the figure\ndisplay(expdata1.figure(0))",
"_____no_output_____"
],
[
"# T2* results:\nt2ramsey = expdata1.analysis_results(0).data()\nt2ramsey",
"_____no_output_____"
],
[
"# Frequency result:\nfrequency = expdata1.analysis_results(1).data()\nfrequency",
"_____no_output_____"
]
],
[
[
"### Providing initial user estimates\nThe user can provide initial estimates for the parameters to help the analysis process. Because the curve is expected to decay toward $0.5$, the natural choice for parameters $A$ and $B$ is $0.5$. Varying the value of $\\phi$ will shift the graph along the x-axis. Since this is not of interest to us, we can safely initialize $\\phi$ to 0. In this experiment, `t2ramsey` and `f` are the parameters of interest. Good estimates for them are values computed in previous experiments on this qubit or a similar values computed for other qubits.",
"_____no_output_____"
]
],
[
[
"from qiskit_experiments.library.characterization import T2RamseyAnalysis\nuser_p0={\n \"A\": 0.5,\n \"t2ramsey\": 85.0,\n \"f\": 0.021,\n \"phi\": 0,\n \"B\": 0.5\n }\nexp_with_p0 = T2Ramsey(qubit, delays, unit=unit)\nexp_with_p0.set_analysis_options(user_p0=user_p0, plot=True)\nexpdata_with_p0 = exp_with_p0.run(backend=backend, shots=2000)\nexpdata_with_p0.block_for_results()\ndisplay(expdata_with_p0.figure(0))\nt2ramsey = expdata_with_p0.analysis_results(0).data()[\"value\"]\nfrequency = expdata_with_p0.analysis_results(1).data()[\"value\"]\nprint(\"T2Ramsey:\", t2ramsey)\nprint(\"Fitted frequency:\", frequency)",
"_____no_output_____"
]
],
[
[
"The units can be changed, but the output in the result is always given in seconds. The units in the backend must be adjusted accordingly.",
"_____no_output_____"
]
],
[
[
"from qiskit.utils import apply_prefix\nunit = 'ns'\ndelays = list(range(1000, 150000, 2000))\nconversion_factor = apply_prefix(1, unit)\nprint(conversion_factor)",
"1e-09\n"
],
[
"p0={\"a_guess\":[0.5], \"t2ramsey\":[80000], \"f_guess\":[0.00002], \"phi_guess\":[0.0],\n \"b_guess\": [0.5]}\nbackend_in_ns = T2RamseyBackend(\n p0=p0,\n initial_prob_plus=[0.0],\n readout0to1=[0.02],\n readout1to0=[0.02],\n conversion_factor=conversion_factor\n )\nexp_in_ns = T2Ramsey(qubit, delays, unit=unit)\nexp_in_ns.set_analysis_options(user_p0=None, plot=True)\nexpdata_in_ns = exp_in_ns.run(backend=backend_in_ns, shots=2000)\nexpdata_in_ns.block_for_results()\ndisplay(expdata_in_ns.figure(0))\nt2ramsey = expdata_in_ns.analysis_results(0).data()[\"value\"]\nfrequency = expdata_in_ns.analysis_results(1).data()[\"value\"]\nprint(\"T2Ramsey:\", t2ramsey)\nprint(\"Fitted frequency:\", frequency)",
"_____no_output_____"
]
],
[
[
"### Adding data to an existing experiment\nIt is possible to add data to an experiment, after the analysis of the first set of data. In the next example we add exp2 to `exp_in_ns` that we showed above.",
"_____no_output_____"
]
],
[
[
"more_delays = list(range(2000, 150000, 2000)) \nexp_new = T2Ramsey(qubit, more_delays, unit=unit)\nexp_new.set_analysis_options(user_p0=None, plot=True)\nexpdata_new = exp_new.run(\n backend=backend_in_ns,\n experiment_data=expdata_in_ns,\n shots=2000\n )\nexpdata_new.block_for_results()\ndisplay(expdata_new.figure(1))",
"_____no_output_____"
],
[
"# The results of the second execution are indices 2 and 3 of the analysis result\nt2ramsey = expdata_new.analysis_results(2).data()[\"value\"]\nfrequency = expdata_new.analysis_results(3).data()[\"value\"]\nprint(\"T2Ramsey:\", t2ramsey)\nprint(\"Fitted frequency:\", frequency)",
"T2Ramsey: 8.006074688405924e-05\nFitted frequency: 98980008.19000898\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c80517247613aa97b2a1db8b51b46d6984699e | 49,339 | ipynb | Jupyter Notebook | HeroesOfPymoli/RaulVilla.ipynb | Deathwishx89/PANDAS_CHALLENGE | c8bc9f315aee722e8a363b39937845a93c8690e8 | [
"ADSL"
] | null | null | null | HeroesOfPymoli/RaulVilla.ipynb | Deathwishx89/PANDAS_CHALLENGE | c8bc9f315aee722e8a363b39937845a93c8690e8 | [
"ADSL"
] | null | null | null | HeroesOfPymoli/RaulVilla.ipynb | Deathwishx89/PANDAS_CHALLENGE | c8bc9f315aee722e8a363b39937845a93c8690e8 | [
"ADSL"
] | null | null | null | 33.678498 | 156 | 0.383287 | [
[
[
"### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport pandas as pd\n\n# File to Load (Remember to Change These)\nfile_to_load = \"Resources/purchase_data.csv\"\n\n# Read Purchasing File and store into Pandas data frame\npurchase_data = pd.read_csv(file_to_load)\npurchase_data",
"_____no_output_____"
]
],
[
[
"## Player Count",
"_____no_output_____"
],
[
"* Display the total number of players\n",
"_____no_output_____"
]
],
[
[
"players = purchase_data[\"SN\"].unique()\n\nplayer_count = pd.DataFrame([{\"Unique User Count\":len(players)}])\nplayer_count\n\n",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Total)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain number of unique items, average price, etc.\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame\n",
"_____no_output_____"
]
],
[
[
"unique_items =len(purchase_data[\"Item Name\"].unique())\ntotal_revenue = purchase_data[\"Price\"].sum()\npurchases_count = len(purchase_data[\"Price\"])\naverage_price = total_revenue / purchases_count\n#the data frame for Analysis of the purchases\npurchasing_analysis = pd.DataFrame([{\"Number of Unique Items\":unique_items,\n \"Average Price\":average_price,\n \"Number of Purchases\":purchases_count,\n \"Total Revenue\":total_revenue}])\n\n#change the currency of the prices and the total revenue\npurchasing_analysis[\"Average Price\"] = purchasing_analysis[\"Average Price\"].map(\"${0:,.2f}\".format)\npurchasing_analysis[\"Total Revenue\"] = purchasing_analysis[\"Total Revenue\"].map(\"${0:,.2f}\".format)\n\npurchasing_analysis",
"_____no_output_____"
]
],
[
[
"## Gender Demographics",
"_____no_output_____"
],
[
"* Percentage and Count of Male Players\n\n\n* Percentage and Count of Female Players\n\n\n* Percentage and Count of Other / Non-Disclosed\n\n\n",
"_____no_output_____"
]
],
[
[
"#graphics summary\n\n\n#Lets create a df so we can store the screen names/gender and purchase data from the script\npurchaser_genders = purchase_data[[\"SN\",\"Gender\",\"Price\"]].copy()\n\n#seperate the differet genders for the players\ndif_player_gen = purchaser_genders.drop_duplicates(subset=[\"SN\",\"Gender\"], keep=\"first\")\ndif_player_count = len(dif_player_gen)\n\n#save the values to the summary of the Demographics summary\ngender_counts = dif_player_gen[\"Gender\"].value_counts()\n\ngender_demog = pd.DataFrame({\"Unique Player by Gender\":gender_counts,\n \"% of Total\":gender_counts/dif_player_count})\n \n#change the format of the % to a percentile format\ngender_demog[\"% of Total\"] = gender_demog[\"% of Total\"].map(\"{0:.1%}\".format) ",
"_____no_output_____"
]
],
[
[
"\n## Purchasing Analysis (Gender)",
"_____no_output_____"
],
[
"* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender\n\n\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"# THe Analysis for the purchase by gender\ngroup_purchaser_gen = purchaser_genders.groupby([\"Gender\"])\n\npurchase_cby_gen = group_purchaser_gen[\"Price\"].count()\naverage_purchase_pby_gen = group_purchaser_gen[\"Price\"].mean()\npurchase_sumby_gen = group_purchaser_gen[\"Price\"].sum()\n\n#create the summury for purchase by genders\npurchasing_analysis_gen = pd.DataFrame({\"Purchase Count\":purchase_cby_gen,\n \"Avg Purchase Price\":average_purchase_pby_gen,\n \"Total Purchase Value\":purchase_sumby_gen,\n \"% of Total Revenue\":purchase_sumby_gen/total_revenue,\n \"Avg Purchase / Person\":purchase_cby_gen/gender_counts})\n# the format must be set for the summary\npurchasing_analysis_gen[\"Avg Purchase Price\"] = purchasing_analysis_gen[\"Avg Purchase Price\"].map(\"${0:,.2f}\".format)\npurchasing_analysis_gen[\"Total Purchase Value\"] = purchasing_analysis_gen[\"Total Purchase Value\"].map(\"${0:,.2f}\".format)\npurchasing_analysis_gen[\"% of Total Revenue\"] = purchasing_analysis_gen[\"% of Total Revenue\"].map(\"{0:.1%}\".format)\npurchasing_analysis_gen[\"Avg Purchase / Person\"] = purchasing_analysis_gen[\"Avg Purchase / Person\"].map(\"{0:.1f}\".format)\n\npurchasing_analysis_gen\n",
"_____no_output_____"
]
],
[
[
"## Age Demographics",
"_____no_output_____"
],
[
"* Establish bins for ages\n\n\n* Categorize the existing players using the age bins. Hint: use pd.cut()\n\n\n* Calculate the numbers and percentages by age group\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: round the percentage column to two decimal points\n\n\n* Display Age Demographics Table\n",
"_____no_output_____"
]
],
[
[
"##AGE DEMOGRAPGHICS####\n#Create data frame to stor names, age, and purchase data\npurchaser_ages = purchase_data[[\"SN\",\"Age\",\"Price\"]].copy()\n#eliminate the duplicates and create a for them\ndif_player_ages = purchaser_ages.drop_duplicates(subset=[\"SN\",\"Age\"],keep=\"first\")\n\n#create a bukey for storage for different ages and move them to a new list \nage_stg = (0,10,15,20,25,30,35,100)\nage_lb = (\"<10\",\"10-14\",\"15-19\",\"20-24\",\"25-29\",\"30-34\",\"50+\")\nage_list = pd.cut(dif_player_ages[\"Age\"], bins=age_stg, right=False, labels=age_lb)\n\ndif_age_counts = pd.DataFrame({\"Unique User Count\":dif_player_ages[\"SN\"],\n \"Age Bin\":age_list})\ndif_gage_counts = dif_age_counts.groupby([\"Age Bin\"]).count()\n\ndif_gage_counts[\"% of Total\"] = dif_gage_counts[\"Unique User Count\"] / dif_player_count\n\n#Format the % of the Total\ndif_gage_counts[\"% of Total\"] = dif_gage_counts[\"% of Total\"].map(\"{0:.2%}\".format)\n\n\ndif_gage_counts\n",
"_____no_output_____"
]
],
[
[
"## Purchasing Analysis (Age)",
"_____no_output_____"
],
[
"* Bin the purchase_data data frame by age\n\n\n* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display the summary data frame",
"_____no_output_____"
]
],
[
[
"#Purchase Analysis\n#make rows and colums labeled withthe data pulled and age from the lists\nall_purchase_ab = pd.cut(purchase_data[\"Age\"], bins=age_stg,\n right=False, labels=age_lb)\npurchaser_ages[\"Age Bin\"] = all_purchase_ab\n\ngpurchaser_acounts = purchaser_ages.groupby([\"Age Bin\"]).count()\ngpurchaser_asum = purchaser_ages.groupby([\"Age Bin\"]).sum()\ngpurchaser_amean = purchaser_ages.groupby([\"Age Bin\"]).mean()\n\npurchase_cby_age = gpurchaser_acounts[\"SN\"]\npurchase_sby_age = gpurchaser_asum[\"Price\"]\npurchase_aby_age = gpurchaser_amean[\"Price\"]\navg_purchase_by_person = purchase_sby_age / dif_gage_counts[\"Unique User Count\"]\n\npurchasing_analysis_age = pd.DataFrame({\"Purchase Count\":purchase_cby_age,\n \"Average Purchase Price\":purchase_aby_age,\n \"Total Purchase Value\":purchase_sby_age,\n \"% of Total Revenue\":purchase_sby_age / total_revenue,\n \"Avg Total Purchase / Person\":avg_purchase_by_person})\n\n\n\n\n\n\n#create dataframe for age of the purchasing analysis\npurchasing_analysis_age[\"Average Purchase Price\"] = purchasing_analysis_age[\"Average Purchase Price\"].map(\"${0:,.2f}\".format)\npurchasing_analysis_age[\"Total Purchase Value\"] = purchasing_analysis_age[\"Total Purchase Value\"].map(\"${0:,.2f}\".format)\npurchasing_analysis_age[\"% of Total Revenue\"] = purchasing_analysis_age[\"% of Total Revenue\"].map(\"{0:.1%}\".format)\npurchasing_analysis_age[\"Avg Total Purchase / Person\"] = purchasing_analysis_age[\"Avg Total Purchase / Person\"].map(\"${0:,.2f}\".format)\n\n\npurchasing_analysis_age",
"_____no_output_____"
]
],
[
[
"## Top Spenders",
"_____no_output_____"
],
[
"* Run basic calculations to obtain the results in the table below\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the total purchase value column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"#find the top 5 spenders\n\npurchase_total_sum_SN = purchase_data.groupby([\"SN\"]).sum()\nsort_purchase_total_sum_SN = purchase_total_sum_SN.sort_values(\"Price\", ascending=False)\n\n\npurchase_cby_sn = purchase_data[\"SN\"].value_counts()\n\npurchase_sby_sn = sort_purchase_total_sum_SN[\"Price\"]\n\ntop_spenders = pd.DataFrame({\"Purchase Count\":purchase_cby_sn,\n \"Avg Purchase Price\":purchase_sby_sn / purchase_cby_sn,\n \"Total Purchase Value\":purchase_sby_sn})\n\n\n#create the summary of the spenders from highest to lowest\ntop_spenders = top_spenders.sort_values(\"Total Purchase Value\", ascending=False)\n\n# the Formatting for the data summary\ntop_spenders[\"Avg Purchase Price\"] = top_spenders[\"Avg Purchase Price\"].map(\"${0:,.2f}\".format)\ntop_spenders[\"Total Purchase Value\"] = top_spenders[\"Total Purchase Value\"].map(\"${0:,.2f}\".format)\n\n\n\ntop_spenders.head(5)\n",
"_____no_output_____"
]
],
[
[
"## Most Popular Items",
"_____no_output_____"
],
[
"* Retrieve the Item ID, Item Name, and Item Price columns\n\n\n* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value\n\n\n* Create a summary data frame to hold the results\n\n\n* Sort the purchase count column in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the summary data frame\n\n",
"_____no_output_____"
]
],
[
[
"#Iteams that are the most popular\nitems= purchase_data[[\"Item ID\",\"Item Name\",\"Price\"]]\ngroup_items = items.groupby([\"Item ID\",\"Item Name\"])\n\npurchase_cby_item = group_items[\"Item ID\"].count()\ntotal_purchase_vby_item = group_items[\"Price\"].sum()\nitem_price = total_purchase_vby_item / purchase_cby_item\n\n#create a df for most popular items in the shop\nmost_pop_items = pd.DataFrame({\"Purchase Count\": purchase_cby_item,\n \"Item Price\": item_price,\n \"Total Purchase Value\": total_purchase_vby_item})\n\n\nmost_pop_items = most_pop_items.sort_values(\"Purchase Count\", ascending=False)\n\n\n#Formating for the summary of pupular items\nmost_pop_items[\"Item Price\"] =most_pop_items[\"Item Price\"].map(\"${0:,.2f}\".format)\nmost_pop_items[\"Total Purchase Value\"] =most_pop_items[\"Total Purchase Value\"].map(\"${0:,.2f}\".format)\n\n\n\n\n\nmost_pop_items.head(5)",
"_____no_output_____"
]
],
[
[
"## Most Profitable Items",
"_____no_output_____"
],
[
"* Sort the above table by total purchase value in descending order\n\n\n* Optional: give the displayed data cleaner formatting\n\n\n* Display a preview of the data frame\n\n",
"_____no_output_____"
]
],
[
[
"###Profitable items####\n#make a copy of the most popular items table and sort by the purchase value\nmost_profit_items = pd.DataFrame({\"Purchase Count\": purchase_cby_item,\n \"Item Price\": item_price,\n \"Total Purchase Value\": total_purchase_vby_item})\n\nmost_profit_items = most_profit_items.sort_values(\"Total Purchase Value\", ascending=False)\n\n\nmost_profit_items[\"Item Price\"] =most_profit_items[\"Item Price\"].map(\"${0:,.2f}\".format)\nmost_profit_items[\"Total Purchase Value\"] =most_profit_items[\"Total Purchase Value\"].map(\"${0:,.2f}\".format)\n\n\nmost_profit_items",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0c808323602598601d2d519fa6d1f49fcfbf272 | 23,571 | ipynb | Jupyter Notebook | notebooks/simple.ipynb | m4rz910/NYISOToolk | ec949025109383b8267f756b246af7ae4133c31c | [
"MIT"
] | 21 | 2020-09-14T17:40:45.000Z | 2022-03-10T08:45:18.000Z | notebooks/simple.ipynb | m4rz910/NYISOToolk | ec949025109383b8267f756b246af7ae4133c31c | [
"MIT"
] | 20 | 2020-08-26T13:01:21.000Z | 2021-04-21T00:37:45.000Z | notebooks/simple.ipynb | m4rz910/NYISOToolk | ec949025109383b8267f756b246af7ae4133c31c | [
"MIT"
] | 2 | 2020-10-20T00:30:04.000Z | 2020-10-24T02:13:41.000Z | 277.305882 | 2,146 | 0.714861 | [
[
[
"import pytest\nfrom datetime import datetime\nimport pytz\nimport matplotlib.pyplot as plt\n\nfrom nyisotoolkit import NYISOVis\nfrom nyisotoolkit.nyisovis.nyisovis import basic_plots, statistical_plots\nfrom nyisotoolkit.nyisodata.utils import current_year\n\nbasic_plots({\"redownload\":True, 'year': 2021})",
"Downloading 2021 fuel_mix_5m...Completed!\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c8094988a1c4484914a49d895fd5d4c7f2cdc5 | 45,214 | ipynb | Jupyter Notebook | outputs/mnist/64_sparse_pca.ipynb | cenkbircanoglu/scikit_learn_benchmarking | 09c5324148d2f52c2dc9e9dd803408bb8bd6c488 | [
"MIT"
] | null | null | null | outputs/mnist/64_sparse_pca.ipynb | cenkbircanoglu/scikit_learn_benchmarking | 09c5324148d2f52c2dc9e9dd803408bb8bd6c488 | [
"MIT"
] | null | null | null | outputs/mnist/64_sparse_pca.ipynb | cenkbircanoglu/scikit_learn_benchmarking | 09c5324148d2f52c2dc9e9dd803408bb8bd6c488 | [
"MIT"
] | null | null | null | 33.00292 | 138 | 0.351683 | [
[
[
"import os\nimport pandas as pd\nfrom sklearn.model_selection import StratifiedShuffleSplit\n\nfrom dimensionality_reduction import reduce_dimension\nimport load_database\nfrom algorithms import *",
"_____no_output_____"
],
[
"import warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"database_name = os.environ['DATABASE']\nn_components = int(os.environ['N_COMPONENTS'])\ndimensionality_algorithm = os.environ['DIMENSIONALITY_ALGORITHM']",
"_____no_output_____"
],
[
"result_path = 'results/%s_%s_%s.csv' %(database_name, n_components, dimensionality_algorithm)",
"_____no_output_____"
],
[
"X, y = load_database.load(database_name)\nX = reduce_dimension(dimensionality_algorithm, X, n_components) if n_components else X",
"_____no_output_____"
],
[
"X.shape",
"_____no_output_____"
],
[
"results = {}",
"_____no_output_____"
],
[
"sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nfor train_index, test_index in sss.split(X, y):\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]",
"_____no_output_____"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'ada_boost')\nresults.update(result)",
"42.233148999999685\n{'algorithm': 'SAMME.R', 'learning_rate': 0.7, 'n_estimators': 90}\n0.7327142857142858\n0.736 0.7313571428571428 0.7375798240707979 0.7326600256298489\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'bagging')\nresults.update(result)",
"176.19572799999878\n{'bootstrap_features': 1, 'n_estimators': 45}\n0.941375\n1.0 0.9496428571428571 1.0 0.9495383409311723\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'extra_trees')\nresults.update(result)",
"3.843316999998933\n{'criterion': 'gini', 'n_estimators': 45, 'warm_start': 1}\n0.9479464285714285\n1.0 0.9527142857142857 1.0 0.9526039398542944\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'random_forest')\nresults.update(result)",
"19.513483999995515\n{'criterion': 'gini', 'n_estimators': 45, 'oob_score': 1, 'warm_start': 1}\n0.943125\n1.0 0.9510714285714286 1.0 0.9509864700085621\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'logistic_regression')\nresults.update(result)",
"90.69816300000093\n{'C': 1.4, 'solver': 'newton-cg', 'tol': 0.0001}\n0.8970892857142857\n0.902125 0.8981428571428571 0.9015956335472866 0.897764044362089\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'passive_aggressive')\nresults.update(result)",
"2.882740000000922\n{'early_stopping': False, 'loss': 'squared_hinge', 'tol': 2.5e-05, 'warm_start': 0}\n0.881125\n0.8405 0.8385 0.8438246136342006 0.8409161984961777\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'ridge')\nresults.update(result)",
"0.9156469999943511\n{'alpha': 1.1, 'tol': 0.0001}\n0.8463571428571428\n0.8481428571428572 0.845 0.8460823674045024 0.8429885992486512\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'sgd')\nresults.update(result)",
"26.65538199999719\n{'alpha': 0.0008, 'loss': 'hinge', 'penalty': 'none', 'tol': 1.4285714285714285e-05}\n0.8955714285714286\n0.8986964285714286 0.8958571428571429 0.8984431924595215 0.8956926757062251\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'bernoulli')\nresults.update(result)",
"0.388264999994135\n{'alpha': 0.1}\n0.11253571428571428\n0.11348214285714285 0.1125 0.024675544278649344 0.022752808988764042\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'gaussian')\nresults.update(result)",
"0.3508640000000014\n{'var_smoothing': 1e-10}\n0.8674285714285714\n0.8691607142857143 0.8647142857142858 0.8698778976207744 0.8654317751133429\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'k_neighbors')\nresults.update(result)",
"2.91838600000483\n{'algorithm': 'ball_tree', 'n_neighbors': 4, 'p': 1, 'weights': 'distance'}\n0.9627678571428572\n1.0 0.9689285714285715 1.0 0.968860965105337\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'nearest_centroid')\nresults.update(result)",
"0.23393499999656342\n{'metric': 'euclidean'}\n0.8398214285714286\n0.8411964285714286 0.839 0.8412242565990308 0.8390338703940627\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'mlp')\nresults.update(result)",
"338.5127929999944\n{'activation': 'tanh', 'alpha': 3.3333333333333333e-06, 'early_stopping': True, 'learning_rate': 'constant', 'solver': 'lbfgs'}\n0.94825\n0.9532321428571429 0.9481428571428572 0.9532164172741119 0.9481439841265297\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'linear_svc')\nresults.update(result)",
"27.007058999995934\n{'C': 1.4, 'multi_class': 'crammer_singer', 'penalty': 'l2', 'tol': 0.0001}\n0.9112857142857143\n0.9166071428571428 0.9128571428571428 0.9163419270321945 0.9126324414696202\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'decision_tree')\nresults.update(result)",
"14.737384999993083\n{'criterion': 'entropy', 'splitter': 'best'}\n0.8263928571428572\n1.0 0.8470714285714286 1.0 0.8470532342817909\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'extra_tree')\nresults.update(result)",
"2.188074999998207\n{'criterion': 'entropy', 'splitter': 'best'}\n0.7529107142857143\n1.0 0.7637857142857143 1.0 0.7639222694989904\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'gradient_boosting')\nresults.update(result)",
"331.9900750000015\n{'criterion': 'friedman_mse', 'learning_rate': 0.3, 'loss': 'deviance', 'tol': 1e-05}\n0.9321071428571429\n0.9848571428571429 0.9386428571428571 0.9848517392629834 0.9386072061126688\n"
],
[
"result = train_test(X_train, y_train, X_test, y_test, 'hist_gradient_boosting')\nresults.update(result)",
"62.312362999997276\n{'l2_regularization': 0, 'tol': 1e-08}\n0.9595714285714285\n1.0 0.9657142857142857 1.0 0.9657132879641288\n"
],
[
"df = pd.DataFrame.from_records(results)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.to_csv(result_path)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c81a6c9472ce5b8d2141b6055910475e8fba5f | 20,300 | ipynb | Jupyter Notebook | Foil Open Area/Open Area.ipynb | balarsen/pymc_learning | e4a077d492af6604a433433e64b835ce4ed0333a | [
"BSD-3-Clause"
] | null | null | null | Foil Open Area/Open Area.ipynb | balarsen/pymc_learning | e4a077d492af6604a433433e64b835ce4ed0333a | [
"BSD-3-Clause"
] | null | null | null | Foil Open Area/Open Area.ipynb | balarsen/pymc_learning | e4a077d492af6604a433433e64b835ce4ed0333a | [
"BSD-3-Clause"
] | 1 | 2017-05-23T16:38:55.000Z | 2017-05-23T16:38:55.000Z | 33.115824 | 810 | 0.556059 | [
[
[
"# Experimental data analysis on foil open area\n## Brian Larsen, ISR-1\n## Data provided by Phil Fernandes, ISR-1 2016-9-14",
"_____no_output_____"
],
[
"The setup is a foil in its holder mounted to a foil holder meant to bock incident ions. The foil has a ~0.6mm hole in it to provide a baseline. The goal is to use the relative intensity of the witness hole to determine the intensity of holes in the foil.\n\nA quick summary:\n* Foil is placed 0.66” from front of MCP surface\n* Beam is rastered to cover full foil and “witness” aperture\n* Beam is 1.0 keV Ar+, slightly underfocused\n* Accumulate data for set period of time (either 60s or 180s, identified in spreadsheet)\n* Total_cts is the # of counts through the foil and the witness aperture\n* Witness_cts is the # of counts in the witness aperture only\n* Foil_cts = total_cts – witness_cts\n* Open area OA = (foil_cts/witness_cts) * (witness_area/foil_area)",
"_____no_output_____"
]
],
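[
 [
  "As a quick worked example of the open-area formula above (the counts are the values quoted for file 16090203 later in this notebook, and the witness/foil areas follow the values used in the model cells further down; everything here is for illustration only):",
  "_____no_output_____"
 ]
],
[
 [
  "import numpy as np\n\n# Illustration only: back-of-the-envelope OA = (foil_cts/witness_cts) * (witness_area/foil_area).\n# Counts are the quoted values for 16090203; the areas match those defined later in this notebook.\ntotal_cts = 4570\nwitness_cts = 658\nfoil_cts = total_cts - witness_cts  # 3912\n\nwitness_area = np.pi * (0.2 / 2) ** 2  # mm**2, per the model cells below\nfoil_area = 182.75                     # mm**2\n\nOA = (foil_cts / witness_cts) * (witness_area / foil_area)\nprint('Naive open area estimate: {0:.5f}'.format(OA))  # ~0.00102",
  "_____no_output_____"
 ]
],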
[
[
"import itertools\nfrom pprint import pprint\nfrom operator import getitem\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LogNorm\nimport numpy as np\nimport spacepy.plot as spp\nimport pymc as mc\nimport tqdm\n\nfrom MCA_file_viewer_v001 import GetMCAfile",
"/Users/balarsen/miniconda3/envs/python3/lib/python3.6/site-packages/matplotlib/__init__.py:913: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.\n warnings.warn(self.msg_depr % (key, alt_key))\n"
],
[
"def plot_box(x, y, c='r', lw=0.6, ax=None):\n if ax is None:\n plt.plot((xind[0], xind[0]), (yind[0], yind[1]), lw=lw, c=c)\n plt.plot((xind[1], xind[1]), (yind[0], yind[1]), lw=lw, c=c)\n plt.plot((xind[0], xind[1]), (yind[0], yind[0]), lw=lw, c=c)\n plt.plot((xind[0], xind[1]), (yind[1], yind[1]), lw=lw, c=c)\n else:\n ax.plot((xind[0], xind[0]), (yind[0], yind[1]), lw=lw, c=c)\n ax.plot((xind[1], xind[1]), (yind[0], yind[1]), lw=lw, c=c)\n ax.plot((xind[0], xind[1]), (yind[0], yind[0]), lw=lw, c=c)\n ax.plot((xind[0], xind[1]), (yind[1], yind[1]), lw=lw, c=c)\n ",
"_____no_output_____"
],
[
"ZZ, XX, YY = GetMCAfile('16090203.mca')\n# It is believed as of 2016-09-19 that the MCA records 2 counts for each count. \n# This means all data are even and all the data can be divided by 2 to give the\n# right number of counts. Per emails Larsen-Fernandes 2016-09-17\n# These data are integres and care muct be taken to assure that /2 does not\n# lead to number that are not representable in float\nZZ = ZZ.astype(float)\nZZ /= 2\nXX = XX.astype(np.uint16) # as they all should be integers anyway\n",
"_____no_output_____"
],
[
"xind = (986, 1003)\nyind = (492, 506)\n\nfig = plt.figure(figsize=(20,8))\nax1 = fig.add_subplot(131)\nax2 = fig.add_subplot(132)\nax3 = fig.add_subplot(133)\n\npc = ax1.pcolormesh(XX, YY, ZZ, norm=LogNorm())\nplt.colorbar(pc, ax=ax1)\nplot_box(xind, yind, ax=ax1)\n\nax2.hist(ZZ.flatten(), 20)\nax2.set_yscale('log')\n\nax3.hist(ZZ.flatten(), 20, normed=True)\nax3.set_yscale('log')",
"_____no_output_____"
]
],
[
[
"## Do some calculations to try and match Phil's analysis",
"_____no_output_____"
],
[
"Phil's data:\n\nFile name\tWitness cts\tTotal cts\tFoil cts\tOpen area\n\n16090203\t658\t4570\t3912\t0.00102",
"_____no_output_____"
]
],
[
[
"total_cnts = ZZ.sum()\nprint('Total counts:{0} -- Phil got {1} -- remember /2'.format(total_cnts, 4570/2)) # remember we did a /2",
"_____no_output_____"
],
[
"# Is the whitness hole at x=1000, y=500?\nXX.shape, YY.shape, ZZ.shape",
"_____no_output_____"
],
[
"\nprint(ZZ[yind[0]:yind[1], xind[0]:xind[1]])\nplt.figure()\nplt.pcolormesh(XX[xind[0]:xind[1]], YY[yind[0]:yind[1]], ZZ[yind[0]:yind[1], xind[0]:xind[1]] , norm=LogNorm())\nplt.colorbar()\n\nwitness_counts = ZZ[yind[0]:yind[1], xind[0]:xind[1]].sum()\n\nprint('Witness counts: {0}, Phil got {1}/2={2}'.format(witness_counts, 658, 658/2))\nwit_pixels = 46\nprint('There {0} pixels in the witness peak'.format(wit_pixels))\n\ntotal_counts = ZZ.sum()\nprint(\"There are a total of {0} counts\".format(total_counts))\n",
"_____no_output_____"
]
],
[
[
"## Can we get a noise estimate? \n1) Try all pixels with a value where a neighbor does not. This assumes that real holes are large enough to have a point spread function and therefore cannot be in a single pixel.",
"_____no_output_____"
]
],
[
[
"def neighbor_inds(x, y, xlim=(0,1023), ylim=(0,1023), center=False, mask=False):\n \"\"\"\n given an x and y index return the 8 neighbor indices\n \n if center also return the center index\n if mask return a boolean mask over the whole 2d array\n \"\"\"\n xi = np.clip([x + v for v in [-1, 0, 1]], xlim[0], xlim[1])\n yi = np.clip([y + v for v in [-1, 0, 1]], ylim[0], ylim[1])\n ans = [(i, j) for i, j in itertools.product(xi, yi)]\n if not center:\n ans.remove((x,y))\n if mask:\n out = np.zeros((np.diff(xlim)+1, np.diff(ylim)+1), dtype=np.bool)\n for c in ans:\n out[c] = True\n else:\n out = ans\n return np.asarray(out)\n\nprint(neighbor_inds(2,2))\nprint(neighbor_inds(2,2, mask=True))\nprint(ZZ[neighbor_inds(500, 992, mask=True)])\n\n ",
"_____no_output_____"
],
[
"def get_alone_pixels(dat):\n \"\"\"\n loop over all the data and store the value of all lone pixels\n \"\"\"\n ans = []\n for index, x in tqdm.tqdm_notebook(np.ndenumerate(dat)):\n if (np.sum([ZZ[i, j] for i, j in neighbor_inds(index[0], index[1])]) == 0) and x != 0:\n ans.append((index, x))\n return ans\n# print((neighbor_inds(5, 4)))\nalone = get_alone_pixels(ZZ)\npprint(alone)\n# ZZ[neighbor_inds(5, 4)[0]].shape\n# print((neighbor_inds(5, 4))[0])\n# print(ZZ[(neighbor_inds(5, 4))[0]].shape)\n# ZZ[4,3]",
"_____no_output_____"
],
[
"ZZ[(965, 485)]",
"_____no_output_____"
],
[
"print(neighbor_inds(4,3)[0])\nprint(ZZ[neighbor_inds(4,3)[0]])\nprint(ZZ[3,2])\n\nni = neighbor_inds(4,3)[0]\nprint(ZZ[ni[0], ni[1]])",
"_____no_output_____"
],
[
"(ZZ % 2).any() # not all even any longer",
"_____no_output_____"
]
],
[
[
"### Noise estimates\nNot we assume that all lone counts are noise that can be considered random and uniform over the MCP. \nThis then provides a number of counts per MCA pixel that we can use. ",
"_____no_output_____"
]
],
[
[
"n_noise = np.sum([v[1] for v in alone])\nn_pixels = 1024*1024\nnoise_pixel = n_noise/n_pixels\nprint(\"There were a total of {0} random counts over {1} pixels, {2} cts/pixel\".format(n_noise, n_pixels, noise_pixel))",
"_____no_output_____"
]
],
[
[
"Maybe we should consider just part of the MCP, lets get the min,max X and min,max Y where there are counts and just use that area. This will increase the cts/pixel.",
"_____no_output_____"
]
],
[
[
"minx_tmp = ZZ.sum(axis=0)\nminx_tmp.shape\nprint(minx_tmp)\n\nminy_tmp = ZZ.sum(axis=1)\nminy_tmp.shape\nprint(miny_tmp)\n\n",
"_____no_output_____"
]
],
[
[
"Looks to go all the way to all sides in X-Y.",
"_____no_output_____"
],
[
"## Work to total open area calculations\nNow we can model the total open area of the foil given the noise estimate per pixel and the pixels that are a part of the witness sample and the total area.\n\nWe model the observed background as Poisson with center at the real background:\n\n$obsnbkg \\sim Pois(nbkg)$\n\nWe model the observed witness sample, $obswit$, as Poisson with center of background per pixel times number of pixels in peak plus the number of real counts:\n\n$obswit \\sim Pois(nbkg/C + witc)$, $C = \\frac{A_w}{A_t}$\n\nThis then leaves the number of counts in open areas of the system (excluding witness) as a Poisson with center of background per pixel times number of pixels in the system (less witness) plus the real number of counts.\n\n$obsopen \\sim Pois(nbkg/D + realc)$, $D=\\frac{A_t - A_w}{A_t}$\n\nThen then the open area is given by the ratio number of counts, $realc$, over an unknown area, $A_o$, as related to witness counts, $witc$, to the witness area, $A_w$, which is assumed perfect as as 0.6mm hole.\n\n$\\frac{A_o}{realc}=\\frac{A_w}{witc} => A_o = \\frac{A_w}{witc}realc $\n",
"_____no_output_____"
]
],
[
[
"Aw = np.pi*(0.2/2)**2 # mm**2\nAf = 182.75 # mm**2 this is the area of the foil\nW_F_ratio = Aw/Af\n\nprint(Aw, Af, W_F_ratio)\n\nC = wit_pixels/n_pixels\nD = (n_pixels-wit_pixels)/n_pixels\nprint('C', C, 'D', D)\n\n\nnbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number\nobsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)\n\nwitc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number\nobswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)\n\nrealc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number\nobsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)\n\[email protected](plot=True)\ndef open_area(realc=realc, witc=witc):\n return realc*Aw/witc/Af\n\nmodel = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area])",
"_____no_output_____"
],
[
"model.sample(200000, burn=100, thin=30, burn_till_tuned=True)\nmc.Matplot.plot(model)\n\n# 1000, burn=100, thin=30 0.000985 +/- 0.000058\n# 10000, burn=100, thin=30 0.000982 +/- 0.000061\n# 100000, burn=100, thin=30 0.000984 +/- 0.000059\n# 200000, burn=100, thin=30 0.000986 +/- 0.000059\n# 1000000, burn=100, thin=30 0.000985 +/- 0.000059",
"_____no_output_____"
],
[
"print(\"Foil 1 \\n\")\n\nwitc_mean = np.mean(witc.trace()[...])\nwitc_std = np.std(witc.trace()[...])\n\nprint(\"Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))\n\nrealc_mean = np.mean(realc.trace()[...])\nrealc_std = np.std(realc.trace()[...])\n\nprint(\"Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))\n\nnbkg_mean = np.mean(nbkg.trace()[...])\nnbkg_std = np.std(nbkg.trace()[...])\n\nprint(\"Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))\n\nOA_median = np.median(open_area.trace()[...])\nOA_mean = np.mean(open_area.trace()[...])\nOA_std = np.std(open_area.trace()[...])\nprint(\"The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\\n\".format(OA_mean, OA_std,OA_std/OA_mean*100 ))\nprint(\"Phil got {0} for 1 measurement\\n\".format(0.00139))\nprint(\"The ratio Brian/Phil is: {0:.6f} or {1:.6f}\".format(OA_mean/0.00139, 0.00139/OA_mean))\n",
"_____no_output_____"
]
],
[
[
"## Run again allowing some uncertainity on witness and foil areas",
"_____no_output_____"
]
],
[
[
"_Aw = np.pi*(0.2/2)**2 # mm**2\n_Af = 182.75 # mm**2 this is the area of the foil\n\nAw = mc.Normal('Aw', _Aw, (_Aw*0.2)**-2) # 20%\nAf = mc.Normal('Af', _Af, (_Af*0.1)**-2) # 10%\n\nprint(_Aw, _Af)\n\nC = wit_pixels/n_pixels\nD = (n_pixels-wit_pixels)/n_pixels\nprint('C', C, 'D', D)\n\n\nnbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number\nobsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise)\n\nwitc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number\nobswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts)\n\nrealc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number\nobsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts)\n\[email protected](plot=True)\ndef open_area(realc=realc, witc=witc, Aw=Aw, Af=Af):\n return realc*Aw/witc/Af\n\nmodel = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area, Af, Aw])",
"_____no_output_____"
],
[
"model.sample(200000, burn=100, thin=30, burn_till_tuned=True)\n",
"_____no_output_____"
],
[
"mc.Matplot.plot(nbkg)\nmc.Matplot.plot(witc)\nmc.Matplot.plot(realc)\n# mc.Matplot.plot(open_area)\nmc.Matplot.plot(Aw)\n\n_ = spp.plt.hist(open_area.trace(), 20)\n\n",
"_____no_output_____"
],
[
"print(\"Foil 1 \\n\")\n\nwitc_mean = np.mean(witc.trace()[...])\nwitc_std = np.std(witc.trace()[...])\n\nprint(\"Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100))\n\nrealc_mean = np.mean(realc.trace()[...])\nrealc_std = np.std(realc.trace()[...])\n\nprint(\"Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100))\n\nnbkg_mean = np.mean(nbkg.trace()[...])\nnbkg_std = np.std(nbkg.trace()[...])\n\nprint(\"Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\\n\".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100))\n\nOA_median = np.median(open_area.trace()[...])\nOA_mean = np.mean(open_area.trace()[...])\nOA_std = np.std(open_area.trace()[...])\nprint(\"The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\\n\".format(OA_mean, OA_std,OA_std/OA_mean*100 ))\nprint(\"Phil got {0} for 1 measurement\\n\".format(0.00139))\nprint(\"The ratio Brian/Phil is: {0:.6f} or {1:.6f}\".format(OA_mean/0.00139, 0.00139/OA_mean))\n",
"_____no_output_____"
],
[
"mc.Matplot.plot(Aw)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0c81d269f0f8ce81673a907f8cba446f6ad1174 | 7,370 | ipynb | Jupyter Notebook | colabs/sdf_to_bigquery.ipynb | Gregorfran/starthinker | 4c9031f3001d380dbfc213a83b11ec61dfcffe47 | [
"Apache-2.0"
] | null | null | null | colabs/sdf_to_bigquery.ipynb | Gregorfran/starthinker | 4c9031f3001d380dbfc213a83b11ec61dfcffe47 | [
"Apache-2.0"
] | 6 | 2021-03-19T12:00:18.000Z | 2022-02-10T09:43:42.000Z | colabs/sdf_to_bigquery.ipynb | Gregorfran/starthinker-gregor | 4c9031f3001d380dbfc213a83b11ec61dfcffe47 | [
"Apache-2.0"
] | null | null | null | 42.356322 | 322 | 0.549661 | [
[
[
"#1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"!pip install git+https://github.com/google/starthinker\n",
"_____no_output_____"
]
],
[
[
"#2. Get Cloud Project ID\nTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"CLOUD_PROJECT = 'PASTE PROJECT ID HERE'\n\nprint(\"Cloud Project Set To: %s\" % CLOUD_PROJECT)\n",
"_____no_output_____"
]
],
[
[
"#3. Get Client Credentials\nTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.\n",
"_____no_output_____"
]
],
[
[
"CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'\n\nprint(\"Client Credentials Set To: %s\" % CLIENT_CREDENTIALS)\n",
"_____no_output_____"
]
],
[
[
"#4. Enter SDF Download Parameters\nDownload SDF reports into a BigQuery table.\n 1. Select your filter types and the filter ideas.\n 1. Enter the <a href='https://developers.google.com/bid-manager/v1.1/sdf/download' target='_blank'>file types</a> using commas.\n 1. SDF_ will be prefixed to all tables and date appended to daily tables.\n 1. File types take the following format: FILE_TYPE_CAMPAIGN, FILE_TYPE_AD_GROUP,...\nModify the values below for your use case, can be done multiple times, then click play.\n",
"_____no_output_____"
]
],
[
[
"FIELDS = {\n 'auth_write': 'service', # Credentials used for writing data.\n 'partner_id': '', # The sdf file types.\n 'file_types': [], # The sdf file types.\n 'filter_type': '', # The filter type for the filter ids.\n 'filter_ids': [], # Comma separated list of filter ids for the request.\n 'dataset': '', # Dataset to be written to in BigQuery.\n 'version': '5', # The sdf version to be returned.\n 'table_suffix': '', # Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores)\n 'time_partitioned_table': False, # Is the end table a time partitioned\n 'create_single_day_table': False, # Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"_____no_output_____"
]
],
[
[
"#5. Execute SDF Download\nThis does NOT need to be modified unles you are changing the recipe, click play.\n",
"_____no_output_____"
]
],
[
[
"from starthinker.util.project import project\nfrom starthinker.script.parse import json_set_fields, json_expand_includes\n\nUSER_CREDENTIALS = '/content/user.json'\n\nTASKS = [\n {\n 'dataset': {\n 'auth': 'user',\n 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 6,'default': '','description': 'Dataset to be written to in BigQuery.'}}\n }\n },\n {\n 'sdf': {\n 'auth': 'user',\n 'version': {'field': {'name': 'version','kind': 'choice','order': 6,'default': '5','description': 'The sdf version to be returned.','choices': ['SDF_VERSION_5','SDF_VERSION_5_1']}},\n 'partner_id': {'field': {'name': 'partner_id','kind': 'integer','order': 1,'description': 'The sdf file types.'}},\n 'file_types': {'field': {'name': 'file_types','kind': 'string_list','order': 2,'default': [],'description': 'The sdf file types.'}},\n 'filter_type': {'field': {'name': 'filter_type','kind': 'choice','order': 3,'default': '','description': 'The filter type for the filter ids.','choices': ['FILTER_TYPE_ADVERTISER_ID','FILTER_TYPE_CAMPAIGN_ID','FILTER_TYPE_INSERTION_ORDER_ID','FILTER_TYPE_MEDIA_PRODUCT_ID','FILTER_TYPE_LINE_ITEM_ID']}},\n 'read': {\n 'filter_ids': {\n 'single_cell': True,\n 'values': {'field': {'name': 'filter_ids','kind': 'integer_list','order': 4,'default': [],'description': 'Comma separated list of filter ids for the request.'}}\n }\n },\n 'time_partitioned_table': {'field': {'name': 'time_partitioned_table','kind': 'boolean','order': 7,'default': False,'description': 'Is the end table a time partitioned'}},\n 'create_single_day_table': {'field': {'name': 'create_single_day_table','kind': 'boolean','order': 8,'default': False,'description': 'Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF.'}},\n 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 6,'default': '','description': 'Dataset to be written to in BigQuery.'}},\n 'table_suffix': {'field': {'name': 'table_suffix','kind': 'string','order': 6,'default': '','description': 'Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores)'}}\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\njson_expand_includes(TASKS)\n\nproject.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)\nproject.execute()\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c8252c5de62f8e4a05b85e098f2c214569858a | 52,837 | ipynb | Jupyter Notebook | HMM Tagger.ipynb | bijanh/NLP-Proj-1 | 230b056103d6b891ce9f5c3db675e109ce9554b5 | [
"MIT"
] | null | null | null | HMM Tagger.ipynb | bijanh/NLP-Proj-1 | 230b056103d6b891ce9f5c3db675e109ce9554b5 | [
"MIT"
] | null | null | null | HMM Tagger.ipynb | bijanh/NLP-Proj-1 | 230b056103d6b891ce9f5c3db675e109ce9554b5 | [
"MIT"
] | null | null | null | 42.168396 | 660 | 0.583928 | [
[
[
"# Project: Part of Speech Tagging with Hidden Markov Models \n---\n### Introduction\n\nPart of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.\n\nIn this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a \"universal\" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. \n\n\n\nThe notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\">\n**Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files.\n</div>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\">\n**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.\n</div>",
"_____no_output_____"
],
[
"### The Road Ahead\nYou must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers.\n\n- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus\n- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline\n- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline\n- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-warning\">\n**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.\n</div>",
"_____no_output_____"
]
],
[
[
"# Jupyter \"magic methods\" -- only need to be run once per kernel restart\n%load_ext autoreload\n%aimport helpers, tests\n%autoreload 1",
"_____no_output_____"
],
[
"# import python modules -- this cell needs to be run again if you make changes to any of the files\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom IPython.core.display import HTML\nfrom itertools import chain\nfrom collections import Counter, defaultdict\nfrom helpers import show_model, Dataset\nfrom pomegranate import State, HiddenMarkovModel, DiscreteDistribution",
"_____no_output_____"
]
],
[
[
"## Step 1: Read and preprocess the dataset\n---\nWe'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same.\n\nThe `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.\n\nExample from the Brown corpus. \n```\nb100-38532\nPerhaps\tADV\nit\tPRON\nwas\tVERB\nright\tADJ\n;\t.\n;\t.\n\nb100-35577\n...\n```",
"_____no_output_____"
]
],
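The provided `Dataset` class in helpers.py already performs all of this parsing; purely as an illustration of the file format described above (and not the actual helper implementation), a minimal reader might look like the sketch below. The names `read_tagged_corpus` and `TaggedSentence` are hypothetical and introduced only for this example.

```python
from collections import namedtuple

TaggedSentence = namedtuple("TaggedSentence", "words tags")

def read_tagged_corpus(path):
    """Parse blank-line-separated blocks: an id line, then one word<TAB>tag pair per line."""
    sentences = {}
    with open(path) as f:
        blocks = f.read().strip().split("\n\n")
    for block in blocks:
        lines = block.strip().split("\n")
        key = lines[0]                                   # unique sentence identifier
        pairs = [line.split("\t") for line in lines[1:]]  # (word, tag) pairs
        sentences[key] = TaggedSentence(words=tuple(w for w, t in pairs),
                                        tags=tuple(t for w, t in pairs))
    return sentences

# Hypothetical usage (the graded code should keep using helpers.Dataset instead):
# corpus = read_tagged_corpus("brown-universal.txt")
# corpus["b100-38532"].words   # -> ('Perhaps', 'it', 'was', 'right', ';', ';')
```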
[
[
"data = Dataset(\"tags-universal.txt\", \"brown-universal.txt\", train_test_split=0.8)\n\nprint(\"There are {} sentences in the corpus.\".format(len(data)))\nprint(\"There are {} sentences in the training set.\".format(len(data.training_set)))\nprint(\"There are {} sentences in the testing set.\".format(len(data.testing_set)))\n\nassert len(data) == len(data.training_set) + len(data.testing_set), \\\n \"The number of sentences in the training set + testing set should sum to the number of sentences in the corpus\"",
"There are 57340 sentences in the corpus.\nThere are 45872 sentences in the training set.\nThere are 11468 sentences in the testing set.\n"
]
],
[
[
"### The Dataset Interface\n\nYou can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step.\n\n```\nDataset-only Attributes:\n training_set - reference to a Subset object containing the samples for training\n testing_set - reference to a Subset object containing the samples for testing\n\nDataset & Subset Attributes:\n sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus\n keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus\n vocab - an immutable collection of the unique words in the corpus\n tagset - an immutable collection of the unique tags in the corpus\n X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)\n Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)\n N - returns the number of distinct samples (individual words or tags) in the dataset\n\nMethods:\n stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus\n __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs\n __len__() - returns the nubmer of sentences in the dataset\n```\n\nFor example, consider a Subset, `subset`, of the sentences `{\"s0\": Sentence((\"See\", \"Spot\", \"run\"), (\"VERB\", \"NOUN\", \"VERB\")), \"s1\": Sentence((\"Spot\", \"ran\"), (\"NOUN\", \"VERB\"))}`. The subset will have these attributes:\n\n```\nsubset.keys == {\"s1\", \"s0\"} # unordered\nsubset.vocab == {\"See\", \"run\", \"ran\", \"Spot\"} # unordered\nsubset.tagset == {\"VERB\", \"NOUN\"} # unordered\nsubset.X == ((\"Spot\", \"ran\"), (\"See\", \"Spot\", \"run\")) # order matches .keys\nsubset.Y == ((\"NOUN\", \"VERB\"), (\"VERB\", \"NOUN\", \"VERB\")) # order matches .keys\nsubset.N == 7 # there are a total of seven observations over all sentences\nlen(subset) == 2 # because there are two sentences\n```\n\n<div class=\"alert alert-block alert-info\">\n**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.\n</div>",
"_____no_output_____"
],
[
"#### Sentences\n\n`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.",
"_____no_output_____"
]
],
[
[
"key = 'b100-38532'\nprint(\"Sentence: {}\".format(key))\nprint(\"words:\\n\\t{!s}\".format(data.sentences[key].words))\nprint(\"tags:\\n\\t{!s}\".format(data.sentences[key].tags))",
"Sentence: b100-38532\nwords:\n\t('Perhaps', 'it', 'was', 'right', ';', ';')\ntags:\n\t('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')\n"
]
],
[
[
"<div class=\"alert alert-block alert-info\">\n**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.\n</div>\n\n#### Counting Unique Elements\n\nYou can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.",
"_____no_output_____"
]
],
[
[
"print(\"There are a total of {} samples of {} unique words in the corpus.\"\n .format(data.N, len(data.vocab)))\nprint(\"There are {} samples of {} unique words in the training set.\"\n .format(data.training_set.N, len(data.training_set.vocab)))\nprint(\"There are {} samples of {} unique words in the testing set.\"\n .format(data.testing_set.N, len(data.testing_set.vocab)))\nprint(\"There are {} words in the test set that are missing in the training set.\"\n .format(len(data.testing_set.vocab - data.training_set.vocab)))\n\nassert data.N == data.training_set.N + data.testing_set.N, \\\n \"The number of training + test samples should sum to the total number of samples\"",
"There are a total of 1161192 samples of 56057 unique words in the corpus.\nThere are 928458 samples of 50536 unique words in the training set.\nThere are 232734 samples of 25112 unique words in the testing set.\nThere are 5521 words in the test set that are missing in the training set.\n"
]
],
[
[
"#### Accessing word and tag Sequences\nThe `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.",
"_____no_output_____"
]
],
[
[
"# accessing words with Dataset.X and tags with Dataset.Y \nfor i in range(2): \n print(\"Sentence {}:\".format(i + 1), data.X[i])\n print()\n print(\"Labels {}:\".format(i + 1), data.Y[i])\n print()",
"Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.')\n\nLabels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.')\n\nSentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.')\n\nLabels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')\n\n"
]
],
[
[
"#### Accessing (word, tag) Samples\nThe `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.",
"_____no_output_____"
]
],
[
[
"# use Dataset.stream() (word, tag) samples for the entire corpus\nprint(\"\\nStream (word, tag) pairs:\\n\")\nfor i, pair in enumerate(data.stream()):\n print(\"\\t\", pair)\n if i > 5: break",
"\nStream (word, tag) pairs:\n\n\t ('Mr.', 'NOUN')\n\t ('Podger', 'NOUN')\n\t ('had', 'VERB')\n\t ('thanked', 'VERB')\n\t ('him', 'PRON')\n\t ('gravely', 'ADV')\n\t (',', '.')\n"
]
],
[
[
"\nFor both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute the counts of several sets of counts. ",
"_____no_output_____"
],
[
"## Step 2: Build a Most Frequent Class tagger\n---\n\nPerhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This \"most frequent class\" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus.",
"_____no_output_____"
],
[
"### IMPLEMENTATION: Pair Counts\n\nComplete the function below that computes the joint frequency counts for two input sequences.",
"_____no_output_____"
]
],
[
[
"def pair_counts(sequences_A, sequences_B):\n \"\"\"Return a dictionary keyed to each unique value in the first sequence list\n that counts the number of occurrences of the corresponding value from the\n second sequences list.\n \n For example, if sequences_A is tags and sequences_B is the corresponding\n words, then if 1244 sequences contain the word \"time\" tagged as a NOUN, then\n you should return a dictionary such that pair_counts[NOUN][time] == 1244\n \"\"\"\n # TODO: Finish this function!\n \n dict = {}\n \n for i in range(len(sequences_A)):\n seq_A = sequences_A[i]\n seq_B = sequences_B[i]\n \n for j in range(len(seq_A)):\n element_A = seq_A[j]\n element_B = seq_B[j]\n \n if element_A in dict:\n if element_B in dict[element_A]:\n dict[element_A][element_B] += 1\n else:\n dict[element_A][element_B] = 1\n else:\n dict[element_A] = {}\n dict[element_A][element_B] = 1\n \n return dict\n\n\n# Calculate C(t_i, w_i)\n \nemission_counts = pair_counts(data.Y, data.X)\n\nassert len(emission_counts) == 12, \\\n \"Uh oh. There should be 12 tags in your dictionary.\"\nassert max(emission_counts[\"NOUN\"], key=emission_counts[\"NOUN\"].get) == 'time', \\\n \"Hmmm...'time' is expected to be the most common NOUN.\"\n\nHTML('<div class=\"alert alert-block alert-success\">Your emission counts look good!</div>')",
"_____no_output_____"
]
],
[
[
"### IMPLEMENTATION: Most Frequent Class Tagger\n\nUse the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.\n\nThe `MFCTagger` class is provided to mock the interface of Pomegranite HMM models so that they can be used interchangeably.",
"_____no_output_____"
]
],
[
[
"# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word\nfrom collections import namedtuple\n\nFakeState = namedtuple(\"FakeState\", \"name\")\n\nclass MFCTagger:\n # NOTE: You should not need to modify this class or any of its methods\n missing = FakeState(name=\"<MISSING>\")\n \n def __init__(self, table):\n self.table = defaultdict(lambda: MFCTagger.missing)\n self.table.update({word: FakeState(name=tag) for word, tag in table.items()})\n \n def viterbi(self, seq):\n \"\"\"This method simplifies predictions by matching the Pomegranate viterbi() interface\"\"\"\n return 0., list(enumerate([\"<start>\"] + [self.table[w] for w in seq] + [\"<end>\"]))\n\n\n# TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not\n# the same as the emission probabilities) and use it to fill the mfc_table\n\nword_counts = pair_counts(data.X, data.Y)\n\nmfc_table = {}\n\nfor word in data.training_set.vocab:\n mfc_table[word] = max(word_counts[word], key=word_counts[word].get)\n\n\n# DO NOT MODIFY BELOW THIS LINE\nmfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance\n\nassert len(mfc_table) == len(data.training_set.vocab), \"\"\nassert all(k in data.training_set.vocab for k in mfc_table.keys()), \"\"\nassert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, \"\"\nHTML('<div class=\"alert alert-block alert-success\">Your MFC tagger has all the correct words!</div>')",
"_____no_output_____"
]
],
[
[
"### Making Predictions with a Model\nThe helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.",
"_____no_output_____"
]
],
[
[
"def replace_unknown(sequence):\n \"\"\"Return a copy of the input sequence where each unknown word is replaced\n by the literal string value 'nan'. Pomegranate will ignore these values\n during computation.\n \"\"\"\n return [w if w in data.training_set.vocab else 'nan' for w in sequence]\n\ndef simplify_decoding(X, model):\n \"\"\"X should be a 1-D sequence of observations for the model to predict\"\"\"\n _, state_path = model.viterbi(replace_unknown(X))\n return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions",
"_____no_output_____"
]
],
[
[
"### Example Decoding Sequences with MFC Tagger",
"_____no_output_____"
]
],
[
[
"for key in data.testing_set.keys[:3]:\n print(\"Sentence Key: {}\\n\".format(key))\n print(\"Predicted labels:\\n-----------------\")\n print(simplify_decoding(data.sentences[key].words, mfc_model))\n print()\n print(\"Actual labels:\\n--------------\")\n print(data.sentences[key].tags)\n\n print(\"\\n\")",
"Sentence Key: b100-28144\n\nPredicted labels:\n-----------------\n['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']\n\nActual labels:\n--------------\n('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')\n\n\nSentence Key: b100-23146\n\nPredicted labels:\n-----------------\n['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']\n\nActual labels:\n--------------\n('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')\n\n\nSentence Key: b100-35462\n\nPredicted labels:\n-----------------\n['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.']\n\nActual labels:\n--------------\n('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')\n\n\n"
]
],
[
[
"### Evaluating Model Accuracy\n\nThe function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. ",
"_____no_output_____"
]
],
[
[
"def accuracy(X, Y, model):\n \"\"\"Calculate the prediction accuracy by using the model to decode each sequence\n in the input X and comparing the prediction with the true labels in Y.\n \n The X should be an array whose first dimension is the number of sentences to test,\n and each element of the array should be an iterable of the words in the sequence.\n The arrays X and Y should have the exact same shape.\n \n X = [(\"See\", \"Spot\", \"run\"), (\"Run\", \"Spot\", \"run\", \"fast\"), ...]\n Y = [(), (), ...]\n \"\"\"\n correct = total_predictions = 0\n for observations, actual_tags in zip(X, Y):\n \n # The model.viterbi call in simplify_decoding will return None if the HMM\n # raises an error (for example, if a test sentence contains a word that\n # is out of vocabulary for the training set). Any exception counts the\n # full sentence as an error (which makes this a conservative estimate).\n try:\n most_likely_tags = simplify_decoding(observations, model)\n correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))\n except:\n pass\n total_predictions += len(observations)\n return correct / total_predictions",
"_____no_output_____"
]
],
[
[
"#### Evaluate the accuracy of the MFC tagger\nRun the next cell to evaluate the accuracy of the tagger on the training and test corpus.",
"_____no_output_____"
]
],
[
[
"mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)\nprint(\"training accuracy mfc_model: {:.2f}%\".format(100 * mfc_training_acc))\n\nmfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)\nprint(\"testing accuracy mfc_model: {:.2f}%\".format(100 * mfc_testing_acc))\n\nassert mfc_training_acc >= 0.955, \"Uh oh. Your MFC accuracy on the training set doesn't look right.\"\nassert mfc_testing_acc >= 0.925, \"Uh oh. Your MFC accuracy on the testing set doesn't look right.\"\nHTML('<div class=\"alert alert-block alert-success\">Your MFC tagger accuracy looks correct!</div>')",
"training accuracy mfc_model: 95.71%\ntesting accuracy mfc_model: 93.13%\n"
]
],
[
[
"## Step 3: Build an HMM tagger\n---\nThe HMM tagger has one hidden state for each possible tag, and parameterized by two distributions: the emission probabilties giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.\n\nWe will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).\n\nThe maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula:\n\n$$t_i^n = \\underset{t_i^n}{\\mathrm{argmax}} \\prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$\n\nRefer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.",
"_____no_output_____"
],
[
"### IMPLEMENTATION: Unigram Counts\n\nComplete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)\n\n$$P(tag_1) = \\frac{C(tag_1)}{N}$$",
"_____no_output_____"
]
],
[
[
"def unigram_counts(sequences):\n \"\"\"Return a dictionary keyed to each unique value in the input sequence list that\n counts the number of occurrences of the value in the sequences list. The sequences\n collection should be a 2-dimensional array.\n \n For example, if the tag NOUN appears 275558 times over all the input sequences,\n then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.\n \"\"\"\n # TODO: Finish this function!\n my_unigram_counts = {}\n \n for tag in sequences: \n if tag in my_unigram_counts:\n my_unigram_counts[tag] += 1\n else:\n my_unigram_counts[tag] = 1\n \n # Easier method: return Counter(sequences)\n return my_unigram_counts\n\n# TODO: call unigram_counts with a list of tag sequences from the training set\ntags = [tag for word, tag in data.stream()]\ntag_unigrams = unigram_counts(tags) # TODO: YOUR CODE HERE\n\nassert set(tag_unigrams.keys()) == data.training_set.tagset, \\\n \"Uh oh. It looks like your tag counts doesn't include all the tags!\"\nassert min(tag_unigrams, key=tag_unigrams.get) == 'X', \\\n \"Hmmm...'X' is expected to be the least common class\"\nassert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \\\n \"Hmmm...'NOUN' is expected to be the most common class\"\nHTML('<div class=\"alert alert-block alert-success\">Your tag unigrams look good!</div>')",
"_____no_output_____"
]
],
[
[
"### IMPLEMENTATION: Bigram Counts\n\nComplete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \\frac{C(tag_2|tag_1)}{C(tag_2)}$$\n",
"_____no_output_____"
]
],
[
[
"import itertools\ndef pairwise(iterable):\n t, t_1 = itertools.tee(iterable)\n next(t_1, 'end')\n return zip(t, t_1)\n\ndef bigram_counts(sequences):\n \"\"\"Return a dictionary keyed to each unique PAIR of values in the input sequences\n list that counts the number of occurrences of pair in the sequences list. The input\n should be a 2-dimensional array.\n \n For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should\n return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582\n \"\"\"\n\n # TODO: Finish this function!\n\n prior = ''\n my_bigram_counts = {}\n \n for tag in sequences:\n if prior != '':\n if (prior, tag) in my_bigram_counts:\n my_bigram_counts[prior, tag] += 1\n else:\n my_bigram_counts[prior, tag] = 1\n \n prior = tag\n \n # Easier method: return dict(Counter(pairwise(sequences)))\n return my_bigram_counts\n\n\n# TODO: call bigram_counts with a list of tag sequences from the training set\ntags = [tag for word, tag in data.stream()]\ntag_bigrams = bigram_counts(tags) \n\nassert len(tag_bigrams) == 144, \\\n \"Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)\"\nassert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \\\n \"Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X').\"\nassert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \\\n \"Hmmm...('DET', 'NOUN') is expected to be the most common bigram.\"\nHTML('<div class=\"alert alert-block alert-success\">Your tag bigrams look good!</div>')",
"_____no_output_____"
]
],
[
[
"### IMPLEMENTATION: Sequence Starting Counts\nComplete the code below to estimate the bigram probabilities of a sequence starting with each tag.",
"_____no_output_____"
]
],
[
[
"def starting_counts(sequences):\n \"\"\"Return a dictionary keyed to each unique value in the input sequences list\n that counts the number of occurrences where that value is at the beginning of\n a sequence.\n \n For example, if 8093 sequences start with NOUN, then you should return a\n dictionary such that your_starting_counts[NOUN] == 8093\n \"\"\"\n # TODO: Finish this function!\n my_start_counts = {}\n \n for start, end in sequences:\n count = sequences[start, end]\n\n if start in my_start_counts:\n my_start_counts[start] += count\n else:\n my_start_counts[start] = count\n \n return my_start_counts\n\n# TODO: Calculate the count of each tag starting a sequence\ntag_starts = starting_counts(tag_bigrams)\n\nassert len(tag_starts) == 12, \"Uh oh. There should be 12 tags in your dictionary.\"\nassert min(tag_starts, key=tag_starts.get) == 'X', \"Hmmm...'X' is expected to be the least common starting bigram.\"\nassert max(tag_starts, key=tag_starts.get) != 'DET', \"Hmmm...'DET' is expected to be the most common starting bigram.\"\nHTML('<div class=\"alert alert-block alert-success\">Your starting tag counts look good!</div>')",
"_____no_output_____"
]
],
[
[
"### IMPLEMENTATION: Sequence Ending Counts\nComplete the function below to estimate the bigram probabilities of a sequence ending with each tag.",
"_____no_output_____"
]
],
[
[
"def ending_counts(sequences):\n \"\"\"Return a dictionary keyed to each unique value in the input sequences list\n that counts the number of occurrences where that value is at the end of\n a sequence.\n \n For example, if 18 sequences end with DET, then you should return a\n dictionary such that your_starting_counts[DET] == 18\n \"\"\"\n # TODO: Finish this function!\n my_end_counts = {}\n \n for start, end in sequences:\n count = sequences[start, end]\n\n if end in my_end_counts:\n my_end_counts[end] += count\n else:\n my_end_counts[end] = count\n \n return my_end_counts\n# TODO: Calculate the count of each tag ending a sequence\ntag_ends = ending_counts(tag_bigrams)\n\nassert len(tag_ends) == 12, \"Uh oh. There should be 12 tags in your dictionary.\"\nassert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], \"Hmmm...'X' or 'CONJ' should be the least common ending bigram.\"\nassert max(tag_ends, key=tag_ends.get) != '.', \"Hmmm...'.' is expected to be the most common ending bigram.\"\nHTML('<div class=\"alert alert-block alert-success\">Your ending tag counts look good!</div>')",
"_____no_output_____"
]
],
[
[
"### IMPLEMENTATION: Basic HMM Tagger\nUse the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.\n\n- Add one state per tag\n - The emission distribution at each state should be estimated with the formula: $P(w|t) = \\frac{C(t, w)}{C(t)}$\n- Add an edge from the starting state `basic_model.start` to each tag\n - The transition probability should be estimated with the formula: $P(t|start) = \\frac{C(start, t)}{C(start)}$\n- Add an edge from each tag to the end state `basic_model.end`\n - The transition probability should be estimated with the formula: $P(end|t) = \\frac{C(t, end)}{C(t)}$\n- Add an edge between _every_ pair of tags\n - The transition probability should be estimated with the formula: $P(t_2|t_1) = \\frac{C(t_1, t_2)}{C(t_1)}$",
"_____no_output_____"
]
],
[
[
"basic_model = HiddenMarkovModel(name=\"base-hmm-tagger\")\n\n# TODO: create states with emission probability distributions P(word | tag) and add to the model\n# (Hint: you may need to loop & create/add new states)\n\ntags = [tag for word, tag in data.stream()]\nwords = [word for word, tag in data.stream()]\ntags_count = unigram_counts(tags)\ntag_words_count = pair_counts([tags], [words])\n \nstates = []\nfor tag, words_dict in tag_words_count.items():\n total = float(sum(words_dict.values()))\n distribution = {word: count/total for word, count in words_dict.items()}\n tag_emissions = DiscreteDistribution(distribution)\n tag_state = State(tag_emissions, name=tag)\n states.append(tag_state)\n \nbasic_model.add_states(states)\n\n\n# TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1)\n# (Hint: you may need to loop & add transitions\n\ntransition_prob_pair = {}\nfor key in tag_bigrams.keys():\n transition_prob_pair[key] = tag_bigrams.get(key)/tags_count[key[0]]\nfor tag_state in states :\n for next_tag_state in states :\n basic_model.add_transition(tag_state,next_tag_state,transition_prob_pair[(tag_state.name,next_tag_state.name)])\n\nstarting_tag_count = starting_counts(tag_bigrams) #the number of times a tag occured at the start\nending_tag_count = ending_counts(tag_bigrams) #the number of times a tag occured at the end\n\nstart_prob = {}\nfor tag in tags:\n start_prob[tag] = starting_tag_count[tag]/tags_count[tag] \n\nfor tag_state in states :\n basic_model.add_transition(basic_model.start, tag_state, start_prob[tag_state.name])\n \nend_prob = {}\nfor tag in tags:\n end_prob[tag] = ending_tag_count[tag]/tags_count[tag]\n \nfor tag_state in states:\n basic_model.add_transition(tag_state, basic_model.end, end_prob[tag_state.name])\n\n\n# NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE\n# finalize the model\nbasic_model.bake()\n\nassert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \\\n \"Every state in your network should use the name of the associated tag, which must be one of the training set tags.\"\nassert basic_model.edge_count() == 168, \\\n (\"Your network should have an edge from the start node to each state, one edge between every \" +\n \"pair of tags (states), and an edge from each state to the end node.\")\nHTML('<div class=\"alert alert-block alert-success\">Your HMM network topology looks good!</div>')",
"_____no_output_____"
],
[
"hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)\nprint(\"training accuracy basic hmm model: {:.2f}%\".format(100 * hmm_training_acc))\n\nhmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)\nprint(\"testing accuracy basic hmm model: {:.2f}%\".format(100 * hmm_testing_acc))\n\nassert hmm_training_acc > 0.97, \"Uh oh. Your HMM accuracy on the training set doesn't look right.\"\nassert hmm_testing_acc > 0.955, \"Uh oh. Your HMM accuracy on the testing set doesn't look right.\"\nHTML('<div class=\"alert alert-block alert-success\">Your HMM tagger accuracy looks correct! Congratulations, you\\'ve finished the project.</div>')",
"training accuracy basic hmm model: 97.51%\ntesting accuracy basic hmm model: 96.14%\n"
]
],
[
[
"### Example Decoding Sequences with the HMM Tagger",
"_____no_output_____"
]
],
[
[
"for key in data.testing_set.keys[:3]:\n print(\"Sentence Key: {}\\n\".format(key))\n print(\"Predicted labels:\\n-----------------\")\n print(simplify_decoding(data.sentences[key].words, basic_model))\n print()\n print(\"Actual labels:\\n--------------\")\n print(data.sentences[key].tags)\n print(\"\\n\")",
"Sentence Key: b100-28144\n\nPredicted labels:\n-----------------\n['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.']\n\nActual labels:\n--------------\n('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.')\n\n\nSentence Key: b100-23146\n\nPredicted labels:\n-----------------\n['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.']\n\nActual labels:\n--------------\n('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.')\n\n\nSentence Key: b100-35462\n\nPredicted labels:\n-----------------\n['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.']\n\nActual labels:\n--------------\n('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')\n\n\n"
]
],
[
[
"\n## Finishing the project\n---\n\n<div class=\"alert alert-block alert-info\">\n**Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review.\n</div>",
"_____no_output_____"
]
],
[
[
"!!jupyter nbconvert *.ipynb",
"_____no_output_____"
]
],
[
[
"## Step 4: [Optional] Improving model performance\n---\nThere are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional.\n\n- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)\n Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values.\n\n- Backoff Smoothing\n Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.\n\n- Extending to Trigrams\n HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.\n\n### Obtain the Brown Corpus with a Larger Tagset\nRun the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the following the format specified in Step 1, then you can reload the data using all of the code above for comparison.\n\nRefer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.",
"_____no_output_____"
]
],
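None of these enhancements are implemented in this notebook. Purely as a hedged illustration of the first bullet, the sketch below shows one way add-k (Laplace) smoothing could be applied to the emission counts computed earlier; `emission_counts`, `tag_unigrams`, and `data.vocab` are the objects defined above, while the function name and the pseudocount `k = 0.01` are arbitrary choices made here.

```python
def laplace_emission_probs(emission_counts, tag_unigrams, vocab, k=0.01):
    """Hedged sketch of add-k (Laplace) smoothing for P(word | tag).

    Each (tag, word) pair receives a pseudocount k, so words never observed
    with a tag keep a small non-zero probability instead of zero.
    """
    n_types = len(vocab) + 1  # +1 reserves one extra "unknown word" type
    smoothed = {}
    for tag, word_counts in emission_counts.items():
        denom = tag_unigrams[tag] + k * n_types
        probs = {word: (count + k) / denom for word, count in word_counts.items()}
        probs['<UNK>'] = k / denom  # probability assigned to any word unseen with this tag
        smoothed[tag] = probs
    return smoothed

# Hypothetical usage with the count dictionaries built earlier in this notebook:
# smoothed_emissions = laplace_emission_probs(emission_counts, tag_unigrams, data.vocab)
```

A smoothed transition table could be built in the same way from `tag_bigrams` and `tag_unigrams`.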
[
[
"import nltk\nfrom nltk import pos_tag, word_tokenize\nfrom nltk.corpus import brown\n\nnltk.download('brown')\ntraining_corpus = nltk.corpus.brown\ntraining_corpus.tagged_sents()[0]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c828956484777788f802ba4b9ec5bcdd020bd3 | 93,649 | ipynb | Jupyter Notebook | 02-Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/week3/answer-Tensorflow+Tutorial.ipynb | Ulysses-WJL/Coursera-Deep-Learning-deeplearning.ai | 11294b28b65d4edacebb1ca8e6cf5868787a84b1 | [
"MIT"
] | 447 | 2018-03-10T13:19:36.000Z | 2022-03-25T09:48:55.000Z | 02-Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/week3/answer-Tensorflow+Tutorial.ipynb | Ulysses-WJL/Coursera-Deep-Learning-deeplearning.ai | 11294b28b65d4edacebb1ca8e6cf5868787a84b1 | [
"MIT"
] | 5 | 2018-03-22T14:23:56.000Z | 2018-08-03T01:27:26.000Z | 02-Improving Deep Neural Networks Hyperparameter tuning, Regularization and Optimization/week3/answer-Tensorflow+Tutorial.ipynb | Ulysses-WJL/Coursera-Deep-Learning-deeplearning.ai | 11294b28b65d4edacebb1ca8e6cf5868787a84b1 | [
"MIT"
] | 287 | 2018-04-30T03:02:05.000Z | 2022-01-24T14:45:07.000Z | 57.103049 | 18,780 | 0.692063 | [
[
[
"# TensorFlow Tutorial\n\nWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: \n\n- Initialize variables\n- Start your own session\n- Train algorithms \n- Implement a Neural Network\n\nPrograming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. \n\n## 1 - Exploring the Tensorflow Library\n\nTo start, you will import the library:\n",
"_____no_output_____"
]
],
[
[
"import math\nimport numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nfrom tensorflow.python.framework import ops\nfrom tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict\n\n%matplotlib inline\nnp.random.seed(1)",
"_____no_output_____"
]
],
[
[
"Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. \n$$loss = \\mathcal{L}(\\hat{y}, y) = (\\hat y^{(i)} - y^{(i)})^2 \\tag{1}$$",
"_____no_output_____"
]
],
[
[
"y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.\ny = tf.constant(39, name='y') # Define y. Set to 39\n\nloss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss\n\ninit = tf.global_variables_initializer() # When init is run later (session.run(init)),\n # the loss variable will be initialized and ready to be computed\nwith tf.Session() as session: # Create a session and print the output\n session.run(init) # Initializes the variables\n print(session.run(loss)) # Prints the loss",
"9\n"
]
],
[
[
"Writing and running programs in TensorFlow has the following steps:\n\n1. Create Tensors (variables) that are not yet executed/evaluated. \n2. Write operations between those Tensors.\n3. Initialize your Tensors. \n4. Create a Session. \n5. Run the Session. This will run the operations you'd written above. \n\nTherefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.\n\nNow let us look at an easy example. Run the cell below:",
"_____no_output_____"
]
],
[
[
"a = tf.constant(2)\nb = tf.constant(10)\nc = tf.multiply(a,b)\nprint(c)",
"Tensor(\"Mul:0\", shape=(), dtype=int32)\n"
]
],
[
[
"As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type \"int32\". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.",
"_____no_output_____"
]
],
[
[
"sess = tf.Session()\nprint(sess.run(c))",
"20\n"
]
],
[
[
"Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. \n\nNext, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. \nTo specify values for a placeholder, you can pass in values by using a \"feed dictionary\" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. ",
"_____no_output_____"
]
],
[
[
"# Change the value of x in the feed_dict\n\nx = tf.placeholder(tf.int64, name = 'x')\nprint(sess.run(2 * x, feed_dict = {x: 3}))\nsess.close()",
"6\n"
]
],
[
[
"When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. \n\nHere's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.",
"_____no_output_____"
],
[
"### 1.1 - Linear function\n\nLets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. \n\n**Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):\n```python\nX = tf.constant(np.random.randn(3,1), name = \"X\")\n\n```\nYou might find the following functions helpful: \n- tf.matmul(..., ...) to do a matrix multiplication\n- tf.add(..., ...) to do an addition\n- np.random.randn(...) to initialize randomly\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: linear_function\n\ndef linear_function():\n \"\"\"\n Implements a linear function: \n Initializes W to be a random tensor of shape (4,3)\n Initializes X to be a random tensor of shape (3,1)\n Initializes b to be a random tensor of shape (4,1)\n Returns: \n result -- runs the session for Y = WX + b \n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (4 lines of code)\n X = tf.constant(np.random.randn(3,1),name='X')\n W = tf.constant(np.random.randn(4,3),name='W')\n b = tf.constant(np.random.randn(4,1),name='b')\n Y = tf.add(tf.matmul(W,X),b)\n ### END CODE HERE ### \n \n # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate\n \n ### START CODE HERE ###\n sess = tf.Session()\n result = sess.run(Y)\n ### END CODE HERE ### \n \n # close the session \n sess.close()\n\n return result",
"_____no_output_____"
],
[
"print( \"result = \" + str(linear_function()))",
"result = [[-2.15657382]\n [ 2.95891446]\n [-1.08926781]\n [-0.84538042]]\n"
]
],
[
[
"*** Expected Output ***: \n\n<table> \n<tr> \n<td>\n**result**\n</td>\n<td>\n[[-2.15657382]\n [ 2.95891446]\n [-1.08926781]\n [-0.84538042]]\n</td>\n</tr> \n\n</table> ",
"_____no_output_____"
],
[
"### 1.2 - Computing the sigmoid \nGreat! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input. \n\nYou will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. \n\n** Exercise **: Implement the sigmoid function below. You should use the following: \n\n- `tf.placeholder(tf.float32, name = \"...\")`\n- `tf.sigmoid(...)`\n- `sess.run(..., feed_dict = {x: z})`\n\n\nNote that there are two typical ways to create and use sessions in tensorflow: \n\n**Method 1:**\n```python\nsess = tf.Session()\n# Run the variables initialization (if needed), run the operations\nresult = sess.run(..., feed_dict = {...})\nsess.close() # Close the session\n```\n**Method 2:**\n```python\nwith tf.Session() as sess: \n # run the variables initialization (if needed), run the operations\n result = sess.run(..., feed_dict = {...})\n # This takes care of closing the session for you :)\n```\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: sigmoid\n\ndef sigmoid(z):\n \"\"\"\n Computes the sigmoid of z\n \n Arguments:\n z -- input value, scalar or vector\n \n Returns: \n results -- the sigmoid of z\n \"\"\"\n \n ### START CODE HERE ### ( approx. 4 lines of code)\n # Create a placeholder for x. Name it 'x'.\n x = tf.placeholder(tf.float32,name='x')\n\n # compute sigmoid(x)\n sigmoid = tf.sigmoid(x)\n\n # Create a session, and run it. Please use the method 2 explained above. \n # You should use a feed_dict to pass z's value to x. \n with tf.Session() as sess:\n # Run session and call the output \"result\"\n result = sess.run(sigmoid,feed_dict={x:z})\n \n ### END CODE HERE ###\n \n return result",
"_____no_output_____"
],
[
"print (\"sigmoid(0) = \" + str(sigmoid(0)))\nprint (\"sigmoid(12) = \" + str(sigmoid(12)))",
"sigmoid(0) = 0.5\nsigmoid(12) = 0.999994\n"
]
],
[
[
"*** Expected Output ***: \n\n<table> \n<tr> \n<td>\n**sigmoid(0)**\n</td>\n<td>\n0.5\n</td>\n</tr>\n<tr> \n<td>\n**sigmoid(12)**\n</td>\n<td>\n0.999994\n</td>\n</tr> \n\n</table> ",
"_____no_output_____"
],
[
"<font color='blue'>\n**To summarize, you how know how to**:\n1. Create placeholders\n2. Specify the computation graph corresponding to operations you want to compute\n3. Create the session\n4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. ",
"_____no_output_____"
],
[
"### 1.3 - Computing the Cost\n\nYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: \n$$ J = - \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log a^{ [2] (i)} + (1-y^{(i)})\\log (1-a^{ [2] (i)} )\\large )\\small\\tag{2}$$\n\nyou can do it in one line of code in tensorflow!\n\n**Exercise**: Implement the cross entropy loss. The function you will use is: \n\n\n- `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`\n\nYour code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes\n\n$$- \\frac{1}{m} \\sum_{i = 1}^m \\large ( \\small y^{(i)} \\log \\sigma(z^{[2](i)}) + (1-y^{(i)})\\log (1-\\sigma(z^{[2](i)})\\large )\\small\\tag{2}$$\n\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: cost\n\ndef cost(logits, labels):\n \"\"\"\n Computes the cost using the sigmoid cross entropy\n \n Arguments:\n logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)\n labels -- vector of labels y (1 or 0) \n \n Note: What we've been calling \"z\" and \"y\" in this class are respectively called \"logits\" and \"labels\" \n in the TensorFlow documentation. So logits will feed into z, and labels into y. \n \n Returns:\n cost -- runs the session of the cost (formula (2))\n \"\"\"\n \n ### START CODE HERE ### \n \n # Create the placeholders for \"logits\" (z) and \"labels\" (y) (approx. 2 lines)\n z = tf.placeholder(tf.float32,name='z')\n y = tf.placeholder(tf.float32,name='y')\n \n # Use the loss function (approx. 1 line)\n cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z,labels=y)\n \n # Create a session (approx. 1 line). See method 1 above.\n sess = tf.Session()\n \n # Run the session (approx. 1 line).\n cost = sess.run(cost,feed_dict={z:logits,y:labels})\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return cost",
"_____no_output_____"
],
[
"logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))\ncost = cost(logits, np.array([0,0,1,1]))\nprint (\"cost = \" + str(cost))",
"cost = [ 1.00538719 1.03664076 0.41385433 0.39956617]\n"
]
],
[
[
"** Expected Output** : \n\n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n [ 1.00538719 1.03664088 0.41385433 0.39956614]\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 1.4 - Using One Hot encodings\n\nMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:\n\n\n<img src=\"images/onehot.png\" style=\"width:600px;height:150px;\">\n\nThis is called a \"one hot\" encoding, because in the converted representation exactly one element of each column is \"hot\" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: \n\n- tf.one_hot(labels, depth, axis) \n\n**Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ",
"_____no_output_____"
]
],
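For comparison only (this is not part of the graded exercise, which should use `tf.one_hot`), here is roughly what the "few lines of code" in plain numpy mentioned above might look like; the helper name `one_hot_numpy` is introduced here purely for illustration.

```python
import numpy as np

def one_hot_numpy(labels, C):
    """Return a (C, m) matrix whose column j is all zeros except a 1 in row labels[j]."""
    labels = np.asarray(labels, dtype=int).reshape(-1)
    one_hot = np.zeros((C, labels.size))
    one_hot[labels, np.arange(labels.size)] = 1.0  # fancy indexing sets one entry per column
    return one_hot

# one_hot_numpy(np.array([1, 2, 3, 0, 2, 1]), C=4) produces the same matrix that
# tf.one_hot is expected to return in the graded cell below.
```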
[
[
"# GRADED FUNCTION: one_hot_matrix\n\ndef one_hot_matrix(labels, C):\n \"\"\"\n Creates a matrix where the i-th row corresponds to the ith class number and the jth column\n corresponds to the jth training example. So if example j had a label i. Then entry (i,j) \n will be 1. \n \n Arguments:\n labels -- vector containing the labels \n C -- number of classes, the depth of the one hot dimension\n \n Returns: \n one_hot -- one hot matrix\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)\n C = tf.constant(C,name='C')\n \n # Use tf.one_hot, be careful with the axis (approx. 1 line)\n one_hot_matrix = tf.one_hot(labels,C,axis=0)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session (approx. 1 line)\n one_hot = sess.run(one_hot_matrix)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n \n return one_hot",
"_____no_output_____"
],
[
"labels = np.array([1,2,3,0,2,1])\none_hot = one_hot_matrix(labels, C = 4)\nprint (\"one_hot = \" + str(one_hot))",
"one_hot = [[ 0. 0. 0. 1. 0. 0.]\n [ 1. 0. 0. 0. 0. 1.]\n [ 0. 1. 0. 0. 1. 0.]\n [ 0. 0. 1. 0. 0. 0.]]\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr> \n <td>\n **one_hot**\n </td>\n <td>\n [[ 0. 0. 0. 1. 0. 0.]\n [ 1. 0. 0. 0. 0. 1.]\n [ 0. 1. 0. 0. 1. 0.]\n [ 0. 0. 1. 0. 0. 0.]]\n </td>\n </tr>\n\n</table>\n",
"_____no_output_____"
],
[
"### 1.5 - Initialize with zeros and ones\n\nNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. \n\n**Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). \n\n - tf.ones(shape)\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: ones\n\ndef ones(shape):\n \"\"\"\n Creates an array of ones of dimension shape\n \n Arguments:\n shape -- shape of the array you want to create\n \n Returns: \n ones -- array containing only ones\n \"\"\"\n \n ### START CODE HERE ###\n \n # Create \"ones\" tensor using tf.ones(...). (approx. 1 line)\n ones = tf.ones(shape)\n \n # Create the session (approx. 1 line)\n sess = tf.Session()\n \n # Run the session to compute 'ones' (approx. 1 line)\n ones = sess.run(ones)\n \n # Close the session (approx. 1 line). See method 1 above.\n sess.close()\n \n ### END CODE HERE ###\n return ones",
"_____no_output_____"
],
[
"print (\"ones = \" + str(ones([3])))",
"ones = [ 1. 1. 1.]\n"
]
],
[
[
"**Expected Output:**\n\n<table> \n <tr> \n <td>\n **ones**\n </td>\n <td>\n [ 1. 1. 1.]\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"# 2 - Building your first neural network in tensorflow\n\nIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:\n\n- Create the computation graph\n- Run the graph\n\nLet's delve into the problem you'd like to solve!\n\n### 2.0 - Problem statement: SIGNS Dataset\n\nOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.\n\n- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).\n- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).\n\nNote that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.\n\nHere are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels.\n<img src=\"images/hands.png\" style=\"width:800px;height:350px;\"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>\n\n\nRun the following code to load the dataset.",
"_____no_output_____"
]
],
[
[
"# Loading the dataset\nX_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()",
"_____no_output_____"
]
],
[
[
"Change the index below and run the cell to visualize some examples in the dataset.",
"_____no_output_____"
]
],
[
[
"# Example of a picture\nindex = 0\nplt.imshow(X_train_orig[index])\nprint (\"y = \" + str(np.squeeze(Y_train_orig[:, index])))",
"y = 5\n"
]
],
[
[
"As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.",
"_____no_output_____"
]
],
[
[
"# Flatten the training and test images\nX_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T\nX_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T\n# Normalize image vectors\nX_train = X_train_flatten/255.\nX_test = X_test_flatten/255.\n# Convert training and test labels to one hot matrices\nY_train = convert_to_one_hot(Y_train_orig, 6)\nY_test = convert_to_one_hot(Y_test_orig, 6)\n\nprint (\"number of training examples = \" + str(X_train.shape[1]))\nprint (\"number of test examples = \" + str(X_test.shape[1]))\nprint (\"X_train shape: \" + str(X_train.shape))\nprint (\"Y_train shape: \" + str(Y_train.shape))\nprint (\"X_test shape: \" + str(X_test.shape))\nprint (\"Y_test shape: \" + str(Y_test.shape))",
"number of training examples = 1080\nnumber of test examples = 120\nX_train shape: (12288, 1080)\nY_train shape: (6, 1080)\nX_test shape: (12288, 120)\nY_test shape: (6, 120)\n"
]
],
[
[
"**Note** that 12288 comes from $64 \\times 64 \\times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.",
"_____no_output_____"
],
[
"**Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. \n\n**The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. ",
"_____no_output_____"
],
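[
"As a quick aside (this sketch is not part of the graded assignment), here is a small NumPy illustration of how softmax generalizes sigmoid: with two classes, softmax over the logits `[z, 0]` gives exactly `sigmoid(z)` for the first class.\n\n```python\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - np.max(z))  # subtract the max for numerical stability\n    return e / e.sum()\n\ndef sigmoid(z):\n    return 1 / (1 + np.exp(-z))\n\nz = 0.7\nprint(softmax(np.array([z, 0.0]))[0])  # probability of the first class\nprint(sigmoid(z))                      # same value\n```",
"_____no_output_____"
],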
[
"### 2.1 - Create placeholders\n\nYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. \n\n**Exercise:** Implement the function below to create the placeholders in tensorflow.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: create_placeholders\n\ndef create_placeholders(n_x, n_y):\n \"\"\"\n Creates the placeholders for the tensorflow session.\n \n Arguments:\n n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)\n n_y -- scalar, number of classes (from 0 to 5, so -> 6)\n \n Returns:\n X -- placeholder for the data input, of shape [n_x, None] and dtype \"float\"\n Y -- placeholder for the input labels, of shape [n_y, None] and dtype \"float\"\n \n Tips:\n - You will use None because it let's us be flexible on the number of examples you will for the placeholders.\n In fact, the number of examples during test/train is different.\n \"\"\"\n\n ### START CODE HERE ### (approx. 2 lines)\n X = tf.placeholder(shape=[n_x,None],dtype='float')\n Y = tf.placeholder(shape=[n_y,None],dtype='float')\n ### END CODE HERE ###\n \n return X, Y",
"_____no_output_____"
],
[
"X, Y = create_placeholders(12288, 6)\nprint (\"X = \" + str(X))\nprint (\"Y = \" + str(Y))",
"X = Tensor(\"Placeholder:0\", shape=(12288, ?), dtype=float32)\nY = Tensor(\"Placeholder_1:0\", shape=(6, ?), dtype=float32)\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr> \n <td>\n **X**\n </td>\n <td>\n Tensor(\"Placeholder_1:0\", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)\n </td>\n </tr>\n <tr> \n <td>\n **Y**\n </td>\n <td>\n Tensor(\"Placeholder_2:0\", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2)\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 2.2 - Initializing the parameters\n\nYour second task is to initialize the parameters in tensorflow.\n\n**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: \n\n```python\nW1 = tf.get_variable(\"W1\", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))\nb1 = tf.get_variable(\"b1\", [25,1], initializer = tf.zeros_initializer())\n```\nPlease use `seed = 1` to make sure your results match ours.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: initialize_parameters\n\ndef initialize_parameters():\n \"\"\"\n Initializes parameters to build a neural network with tensorflow. The shapes are:\n W1 : [25, 12288]\n b1 : [25, 1]\n W2 : [12, 25]\n b2 : [12, 1]\n W3 : [6, 12]\n b3 : [6, 1]\n \n Returns:\n parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3\n \"\"\"\n \n tf.set_random_seed(1) # so that your \"random\" numbers match ours\n \n ### START CODE HERE ### (approx. 6 lines of code)\n W1 = tf.get_variable('W1',[25,12288],initializer=tf.contrib.layers.xavier_initializer(seed=1))\n b1 = tf.get_variable('b1',[25,1],initializer=tf.zeros_initializer())\n W2 = tf.get_variable('W2',[12,25],initializer=tf.contrib.layers.xavier_initializer(seed=1))\n b2 = tf.get_variable('b2',[12,1],initializer=tf.zeros_initializer())\n W3 = tf.get_variable('W3',[6,12],initializer=tf.contrib.layers.xavier_initializer(seed=1))\n b3 = tf.get_variable('b3',[6,1],initializer=tf.zeros_initializer())\n ### END CODE HERE ###\n\n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2,\n \"W3\": W3,\n \"b3\": b3}\n \n return parameters",
"_____no_output_____"
],
[
"tf.reset_default_graph()\nwith tf.Session() as sess:\n parameters = initialize_parameters()\n print(\"W1 = \" + str(parameters[\"W1\"]))\n print(\"b1 = \" + str(parameters[\"b1\"]))\n print(\"W2 = \" + str(parameters[\"W2\"]))\n print(\"b2 = \" + str(parameters[\"b2\"]))",
"W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>\nb1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>\nW2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>\nb2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr> \n <td>\n **W1**\n </td>\n <td>\n < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b1**\n </td>\n <td>\n < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **W2**\n </td>\n <td>\n < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >\n </td>\n </tr>\n <tr> \n <td>\n **b2**\n </td>\n <td>\n < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"As expected, the parameters haven't been evaluated yet.",
"_____no_output_____"
],
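[
"If you are curious, a quick ungraded way to actually see their values is to run a global initializer inside a session; this sketch assumes the `initialize_parameters()` function defined above.\n\n```python\ntf.reset_default_graph()\nwith tf.Session() as sess:\n    parameters = initialize_parameters()\n    sess.run(tf.global_variables_initializer())  # assigns the initial values\n    print(sess.run(parameters['b1']).shape)      # now a concrete (25, 1) array of zeros\n```",
"_____no_output_____"
],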
[
"### 2.3 - Forward propagation in tensorflow \n\nYou will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: \n\n- `tf.add(...,...)` to do an addition\n- `tf.matmul(...,...)` to do a matrix multiplication\n- `tf.nn.relu(...)` to apply the ReLU activation\n\n**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!\n\n",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: forward_propagation\n\ndef forward_propagation(X, parameters):\n \"\"\"\n Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX\n \n Arguments:\n X -- input dataset placeholder, of shape (input size, number of examples)\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", \"W2\", \"b2\", \"W3\", \"b3\"\n the shapes are given in initialize_parameters\n\n Returns:\n Z3 -- the output of the last LINEAR unit\n \"\"\"\n \n # Retrieve the parameters from the dictionary \"parameters\" \n W1 = parameters['W1']\n b1 = parameters['b1']\n W2 = parameters['W2']\n b2 = parameters['b2']\n W3 = parameters['W3']\n b3 = parameters['b3']\n \n ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:\n Z1 = tf.add(tf.matmul(W1,X),b1) # Z1 = np.dot(W1, X) + b1\n A1 = tf.nn.relu(Z1) # A1 = relu(Z1)\n Z2 = tf.add(tf.matmul(W2,A1),b2) # Z2 = np.dot(W2, a1) + b2\n A2 = tf.nn.relu(Z2) # A2 = relu(Z2)\n Z3 = tf.add(tf.matmul(W3,Z2),b3) # Z3 = np.dot(W3,Z2) + b3\n ### END CODE HERE ###\n \n return Z3",
"_____no_output_____"
],
[
"tf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n print(\"Z3 = \" + str(Z3))",
"Z3 = Tensor(\"Add_2:0\", shape=(6, ?), dtype=float32)\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr> \n <td>\n **Z3**\n </td>\n <td>\n Tensor(\"Add_2:0\", shape=(6, ?), dtype=float32)\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to brackpropagation.",
"_____no_output_____"
],
[
"### 2.4 Compute cost\n\nAs seen before, it is very easy to compute the cost using:\n```python\ntf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))\n```\n**Question**: Implement the cost function below. \n- It is important to know that the \"`logits`\" and \"`labels`\" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.\n- Besides, `tf.reduce_mean` basically does the summation over the examples.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: compute_cost \n\ndef compute_cost(Z3, Y):\n \"\"\"\n Computes the cost\n \n Arguments:\n Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)\n Y -- \"true\" labels vector placeholder, same shape as Z3\n \n Returns:\n cost - Tensor of the cost function\n \"\"\"\n \n # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)\n logits = tf.transpose(Z3)\n labels = tf.transpose(Y)\n \n ### START CODE HERE ### (1 line of code)\n cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))\n ### END CODE HERE ###\n \n return cost",
"_____no_output_____"
],
[
"tf.reset_default_graph()\n\nwith tf.Session() as sess:\n X, Y = create_placeholders(12288, 6)\n parameters = initialize_parameters()\n Z3 = forward_propagation(X, parameters)\n cost = compute_cost(Z3, Y)\n print(\"cost = \" + str(cost))",
"cost = Tensor(\"Mean:0\", shape=(), dtype=float32)\n"
]
],
[
[
"**Expected Output**: \n\n<table> \n <tr> \n <td>\n **cost**\n </td>\n <td>\n Tensor(\"Mean:0\", shape=(), dtype=float32)\n </td>\n </tr>\n\n</table>",
"_____no_output_____"
],
[
"### 2.5 - Backward propagation & parameter updates\n\nThis is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.\n\nAfter you compute the cost function. You will create an \"`optimizer`\" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.\n\nFor instance, for gradient descent the optimizer would be:\n```python\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)\n```\n\nTo make the optimization you would do:\n```python\n_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\n```\n\nThis computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.\n\n**Note** When coding, we often use `_` as a \"throwaway\" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). ",
"_____no_output_____"
],
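[
"Here is the minimal, self-contained sketch mentioned above (not part of the assignment; the variable names and numbers are purely illustrative). It shows the full pattern: define a cost, create an optimizer, then repeatedly run the optimizer node in a session.\n\n```python\nimport tensorflow as tf\n\ntf.reset_default_graph()\nw = tf.Variable(5.0, name='w')            # the parameter to learn\ncost = (w - 2.0) ** 2                     # minimized at w = 2\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.1).minimize(cost)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    for _ in range(100):\n        _ , c = sess.run([optimizer, cost])  # one update step + the current cost\n    print(sess.run(w))                       # close to 2.0\n```",
"_____no_output_____"
],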
[
"### 2.6 - Building the model\n\nNow, you will bring it all together! \n\n**Exercise:** Implement the model. You will be calling the functions you had previously implemented.",
"_____no_output_____"
]
],
[
[
"def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,\n num_epochs = 1500, minibatch_size = 32, print_cost = True):\n \"\"\"\n Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.\n \n Arguments:\n X_train -- training set, of shape (input size = 12288, number of training examples = 1080)\n Y_train -- test set, of shape (output size = 6, number of training examples = 1080)\n X_test -- training set, of shape (input size = 12288, number of training examples = 120)\n Y_test -- test set, of shape (output size = 6, number of test examples = 120)\n learning_rate -- learning rate of the optimization\n num_epochs -- number of epochs of the optimization loop\n minibatch_size -- size of a minibatch\n print_cost -- True to print the cost every 100 epochs\n \n Returns:\n parameters -- parameters learnt by the model. They can then be used to predict.\n \"\"\"\n \n ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables\n tf.set_random_seed(1) # to keep consistent results\n seed = 3 # to keep consistent results\n (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)\n n_y = Y_train.shape[0] # n_y : output size\n costs = [] # To keep track of the cost\n \n # Create Placeholders of shape (n_x, n_y)\n ### START CODE HERE ### (1 line)\n X, Y = create_placeholders(n_x, n_y)\n ### END CODE HERE ###\n\n # Initialize parameters\n ### START CODE HERE ### (1 line)\n parameters = initialize_parameters()\n ### END CODE HERE ###\n \n # Forward propagation: Build the forward propagation in the tensorflow graph\n ### START CODE HERE ### (1 line)\n Z3 = forward_propagation(X, parameters)\n ### END CODE HERE ###\n \n # Cost function: Add cost function to tensorflow graph\n ### START CODE HERE ### (1 line)\n cost = compute_cost(Z3, Y)\n ### END CODE HERE ###\n \n # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.\n ### START CODE HERE ### (1 line)\n optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)\n ### END CODE HERE ###\n \n # Initialize all the variables\n init = tf.global_variables_initializer()\n\n # Start the session to compute the tensorflow graph\n with tf.Session() as sess:\n \n # Run the initialization\n sess.run(init)\n \n # Do the training loop\n for epoch in range(num_epochs):\n\n epoch_cost = 0. 
# Defines a cost related to an epoch\n num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set\n seed = seed + 1\n minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)\n\n for minibatch in minibatches:\n\n # Select a minibatch\n (minibatch_X, minibatch_Y) = minibatch\n \n # IMPORTANT: The line that runs the graph on a minibatch.\n # Run the session to execute the \"optimizer\" and the \"cost\", the feedict should contain a minibatch for (X,Y).\n ### START CODE HERE ### (1 line)\n _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})\n ### END CODE HERE ###\n \n epoch_cost += minibatch_cost / num_minibatches\n\n # Print the cost every epoch\n if print_cost == True and epoch % 100 == 0:\n print (\"Cost after epoch %i: %f\" % (epoch, epoch_cost))\n if print_cost == True and epoch % 5 == 0:\n costs.append(epoch_cost)\n \n # plot the cost\n plt.plot(np.squeeze(costs))\n plt.ylabel('cost')\n plt.xlabel('iterations (per tens)')\n plt.title(\"Learning rate =\" + str(learning_rate))\n plt.show()\n\n # lets save the parameters in a variable\n parameters = sess.run(parameters)\n print (\"Parameters have been trained!\")\n\n # Calculate the correct predictions\n correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))\n\n # Calculate accuracy on the test set\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, \"float\"))\n\n print (\"Train Accuracy:\", accuracy.eval({X: X_train, Y: Y_train}))\n print (\"Test Accuracy:\", accuracy.eval({X: X_test, Y: Y_test}))\n \n return parameters",
"_____no_output_____"
]
],
[
[
"Run the following cell to train your model! On our machine it takes about 5 minutes. Your \"Cost after epoch 100\" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!",
"_____no_output_____"
]
],
[
[
"parameters = model(X_train, Y_train, X_test, Y_test)",
"Cost after epoch 0: 1.877091\nCost after epoch 100: 1.469517\nCost after epoch 200: 1.290366\nCost after epoch 300: 1.157710\nCost after epoch 400: 1.049604\nCost after epoch 500: 0.955978\nCost after epoch 600: 0.873848\nCost after epoch 700: 0.802411\nCost after epoch 800: 0.737561\nCost after epoch 900: 0.679775\nCost after epoch 1000: 0.629291\nCost after epoch 1100: 0.578398\nCost after epoch 1200: 0.536484\nCost after epoch 1300: 0.497435\nCost after epoch 1400: 0.462647\n"
]
],
[
[
"**Expected Output**:\n\n<table> \n <tr> \n <td>\n **Train Accuracy**\n </td>\n <td>\n 0.999074\n </td>\n </tr>\n <tr> \n <td>\n **Test Accuracy**\n </td>\n <td>\n 0.716667\n </td>\n </tr>\n\n</table>\n\nAmazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.\n\n**Insights**:\n- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. \n- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.",
"_____no_output_____"
],
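[
"As a hedged sketch of the first insight above (this is not part of the graded assignment, and the regularization strength `lambd` is an arbitrary illustrative value), L2 regularization could be added by penalizing the squared norms of the weight matrices on top of the cross-entropy cost, and passing the result to the optimizer in place of the original `cost`.\n\n```python\ndef compute_cost_with_l2(Z3, Y, parameters, lambd = 0.01):\n    logits = tf.transpose(Z3)\n    labels = tf.transpose(Y)\n    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))\n    # penalize large weights; biases are usually left unregularized\n    l2 = tf.nn.l2_loss(parameters['W1']) + tf.nn.l2_loss(parameters['W2']) + tf.nn.l2_loss(parameters['W3'])\n    return cross_entropy + lambd * l2\n```",
"_____no_output_____"
],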
[
"### 2.7 - Test with your own image (optional / ungraded exercise)\n\nCongratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:\n 1. Click on \"File\" in the upper bar of this notebook, then click \"Open\" to go on your Coursera Hub.\n 2. Add your image to this Jupyter Notebook's directory, in the \"images\" folder\n 3. Write your image's name in the following code\n 4. Run the code and check if the algorithm is right!",
"_____no_output_____"
]
],
[
[
"import scipy\nfrom PIL import Image\nfrom scipy import ndimage\n\n## START CODE HERE ## (PUT YOUR IMAGE NAME) \nmy_image = \"thumbs_up.jpg\"\n## END CODE HERE ##\n\n# We preprocess your image to fit your algorithm.\nfname = \"images/\" + my_image\nimage = np.array(ndimage.imread(fname, flatten=False))\nmy_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T\nmy_image_prediction = predict(my_image, parameters)\n\nplt.imshow(image)\nprint(\"Your algorithm predicts: y = \" + str(np.squeeze(my_image_prediction)))",
"_____no_output_____"
]
],
[
[
"You indeed deserved a \"thumbs-up\" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any \"thumbs-up\", so the model doesn't know how to deal with it! We call that a \"mismatched data distribution\" and it is one of the various of the next course on \"Structuring Machine Learning Projects\".",
"_____no_output_____"
],
[
"<font color='blue'>\n**What you should remember**:\n- Tensorflow is a programming framework used in deep learning\n- The two main object classes in tensorflow are Tensors and Operators. \n- When you code in tensorflow you have to take the following steps:\n - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)\n - Create a session\n - Initialize the session\n - Run the session to execute the graph\n- You can execute the graph multiple times as you've seen in model()\n- The backpropagation and optimization is automatically done when running the session on the \"optimizer\" object.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c83cc900b32359da68d8f1a319e1b2f220477a | 62,541 | ipynb | Jupyter Notebook | create_dataset.ipynb | oonid/growth-hacking-with-nlp-sentiment-analysis | 030d5d3fccc08a75f3adcd03b8cddf1ac0dbf1c3 | [
"MIT"
] | null | null | null | create_dataset.ipynb | oonid/growth-hacking-with-nlp-sentiment-analysis | 030d5d3fccc08a75f3adcd03b8cddf1ac0dbf1c3 | [
"MIT"
] | null | null | null | create_dataset.ipynb | oonid/growth-hacking-with-nlp-sentiment-analysis | 030d5d3fccc08a75f3adcd03b8cddf1ac0dbf1c3 | [
"MIT"
] | 1 | 2020-10-13T12:48:15.000Z | 2020-10-13T12:48:15.000Z | 46.292376 | 7,686 | 0.475848 | [
[
[
"<a href=\"https://colab.research.google.com/github/oonid/growth-hacking-with-nlp-sentiment-analysis/blob/master/create_dataset.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Evaluate Amazon Video Games Review Dataset",
"_____no_output_____"
]
],
[
[
"# ndjson to handle newline delimited json\n!pip install ndjson\n# update imbalanced-learn lib on colab\n!pip install --upgrade imbalanced-learn",
"Requirement already satisfied: ndjson in /usr/local/lib/python3.6/dist-packages (0.3.1)\nRequirement already up-to-date: imbalanced-learn in /usr/local/lib/python3.6/dist-packages (0.6.2)\nRequirement already satisfied, skipping upgrade: scipy>=0.17 in /usr/local/lib/python3.6/dist-packages (from imbalanced-learn) (1.4.1)\nRequirement already satisfied, skipping upgrade: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from imbalanced-learn) (0.15.1)\nRequirement already satisfied, skipping upgrade: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from imbalanced-learn) (1.18.4)\nRequirement already satisfied, skipping upgrade: scikit-learn>=0.22 in /usr/local/lib/python3.6/dist-packages (from imbalanced-learn) (0.22.2.post1)\n"
],
[
"# all imports and related\n\n%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nimport altair as alt\nimport ndjson\n\nfrom collections import Counter\nfrom imblearn.under_sampling import RandomUnderSampler\n",
"Using TensorFlow backend.\n"
],
[
"# get dataset, extract from gzip (overwrite), and preview data on file\n!wget http://deepyeti.ucsd.edu/jianmo/amazon/categoryFilesSmall/Video_Games_5.json.gz\n!yes y | gunzip Video_Games_5.json.gz\n!head Video_Games_5.json",
"--2020-05-29 14:17:40-- http://deepyeti.ucsd.edu/jianmo/amazon/categoryFilesSmall/Video_Games_5.json.gz\nResolving deepyeti.ucsd.edu (deepyeti.ucsd.edu)... 169.228.63.50\nConnecting to deepyeti.ucsd.edu (deepyeti.ucsd.edu)|169.228.63.50|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 154050105 (147M) [application/octet-stream]\nSaving to: ‘Video_Games_5.json.gz’\n\nVideo_Games_5.json. 100%[===================>] 146.91M 43.6MB/s in 3.7s \n\n2020-05-29 14:17:44 (39.2 MB/s) - ‘Video_Games_5.json.gz’ saved [154050105/154050105]\n\n{\"overall\": 5.0, \"verified\": true, \"reviewTime\": \"10 17, 2015\", \"reviewerID\": \"A1HP7NVNPFMA4N\", \"asin\": \"0700026657\", \"reviewerName\": \"Ambrosia075\", \"reviewText\": \"This game is a bit hard to get the hang of, but when you do it's great.\", \"summary\": \"but when you do it's great.\", \"unixReviewTime\": 1445040000}\n{\"overall\": 4.0, \"verified\": false, \"reviewTime\": \"07 27, 2015\", \"reviewerID\": \"A1JGAP0185YJI6\", \"asin\": \"0700026657\", \"reviewerName\": \"travis\", \"reviewText\": \"I played it a while but it was alright. The steam was a bit of trouble. The more they move these game to steam the more of a hard time I have activating and playing a game. But in spite of that it was fun, I liked it. Now I am looking forward to anno 2205 I really want to play my way to the moon.\", \"summary\": \"But in spite of that it was fun, I liked it\", \"unixReviewTime\": 1437955200}\n{\"overall\": 3.0, \"verified\": true, \"reviewTime\": \"02 23, 2015\", \"reviewerID\": \"A1YJWEXHQBWK2B\", \"asin\": \"0700026657\", \"reviewerName\": \"Vincent G. Mezera\", \"reviewText\": \"ok game.\", \"summary\": \"Three Stars\", \"unixReviewTime\": 1424649600}\n{\"overall\": 2.0, \"verified\": true, \"reviewTime\": \"02 20, 2015\", \"reviewerID\": \"A2204E1TH211HT\", \"asin\": \"0700026657\", \"reviewerName\": \"Grandma KR\", \"reviewText\": \"found the game a bit too complicated, not what I expected after having played 1602, 1503, and 1701\", \"summary\": \"Two Stars\", \"unixReviewTime\": 1424390400}\n{\"overall\": 5.0, \"verified\": true, \"reviewTime\": \"12 25, 2014\", \"reviewerID\": \"A2RF5B5H74JLPE\", \"asin\": \"0700026657\", \"reviewerName\": \"jon\", \"reviewText\": \"great game, I love it and have played it since its arrived\", \"summary\": \"love this game\", \"unixReviewTime\": 1419465600}\n{\"overall\": 4.0, \"verified\": true, \"reviewTime\": \"11 13, 2014\", \"reviewerID\": \"A11V6ZJ2FVQY1D\", \"asin\": \"0700026657\", \"reviewerName\": \"IBRAHIM ALBADI\", \"reviewText\": \"i liked a lot some time that i haven't play a wonderfull game very simply and funny game verry good game.\", \"summary\": \"Anno 2070\", \"unixReviewTime\": 1415836800}\n{\"overall\": 1.0, \"verified\": false, \"reviewTime\": \"08 2, 2014\", \"reviewerID\": \"A1KXJ1ELZIU05C\", \"asin\": \"0700026657\", \"reviewerName\": \"Creation27\", \"reviewText\": \"I'm an avid gamer, but Anno 2070 is an INSULT to gaming. 
It is so buggy and half-finished that the first campaign doesn't even work properly and the DRM is INCREDIBLY frustrating to deal with.\\n\\nOnce you manage to work your way past the massive amounts of bugs and get through the DRM, HOURS later you finally figure out that the game has no real tutorial, so you stuck just clicking around randomly.\\n\\nSad, sad, sad, example of a game that could have been great but FTW.\", \"summary\": \"Avoid This Game - Filled with Bugs\", \"unixReviewTime\": 1406937600}\n{\"overall\": 5.0, \"verified\": true, \"reviewTime\": \"03 3, 2014\", \"reviewerID\": \"A1WK5I4874S3O2\", \"asin\": \"0700026657\", \"reviewerName\": \"WhiteSkull\", \"reviewText\": \"I bought this game thinking it would be pretty cool and that i might play it for a week or two and be done. Boy was I wrong! From the moment I finally got the gamed Fired up (the other commentors on this are right, it takes forever and u are forced to create an account) I watched as it booted up I could tell right off the bat that ALOT of thought went into making this game. If you have ever played Sim city, then this game is a must try as you will easily navigate thru it and its multi layers. I have been playing htis now for a month straight, and I am STILL discovering layers of complexity in the game. There are a few things in the game that could used tweaked, but all in all this is a 5 star game.\", \"summary\": \"A very good game balance of skill with depth of choices\", \"unixReviewTime\": 1393804800}\n{\"overall\": 5.0, \"verified\": true, \"reviewTime\": \"02 21, 2014\", \"reviewerID\": \"AV969NA4CBP10\", \"asin\": \"0700026657\", \"reviewerName\": \"Travis B. Moore\", \"reviewText\": \"I have played the old anno 1701 AND 1503. this game looks great but is more complex than the previous versions of the game. I found a lot of things lacking such as the sources of power and an inability to store energy with batteries or regenertive fuel cells as buildings in the game need power. Trade is about the same. My main beef with this it requires an internet connection. Other than that it has wonderful artistry and graphics. It is the same as anno 1701 but set in a future world where global warmming as flood the land and resource scarcity has sent human kind to look to the deep ocean for valuable minerals. I recoment the deep ocean expansion or complete if you get this. I found the ai instructor a little corny but other than that the game has some real polish. I wrote my 2 cents worth on suggestions on anno 2070 wiki and you can read 3 pages on that for game ideas I had.\", \"summary\": \"Anno 2070 more like anno 1701\", \"unixReviewTime\": 1392940800}\n{\"overall\": 4.0, \"verified\": true, \"reviewTime\": \"06 27, 2013\", \"reviewerID\": \"A1EO9BFUHTGWKZ\", \"asin\": \"0700026657\", \"reviewerName\": \"johnnyz3\", \"reviewText\": \"I liked it and had fun with it, played for a while and got my money's worth. You can certainly go further than I did but I got frustrated with the fact that here we are in this new start and still taking from the earth rather than living with it. Better than simcity in that respect and maybe the best we could hope for.\", \"summary\": \"Pretty fun\", \"unixReviewTime\": 1372291200}\n"
],
[
"# load from file-like objects\nwith open('Video_Games_5.json') as f:\n vg5 = ndjson.load(f)\n\nprint('data loaded as {} with len {}'.format(type(vg5), len(vg5)))\n# sample out 2 data\nvg5[:2]",
"data loaded as <class 'list'> with len 497577\n"
],
[
"# load list of dict as panda DataFrame\ndf = pd.DataFrame(vg5)\ndf.head()",
"_____no_output_____"
],
[
"# describe to understand values of column overall (next as ratings)\ndf.describe()",
"_____no_output_____"
],
[
"# create copy of DataFrame with overall as index, to prepare plotting\ndfo = df.set_index('overall')\ndfo.head()",
"_____no_output_____"
],
[
"# group data by column overall (currently as index) and count the variants\ndfo.groupby(dfo.index).count()",
"_____no_output_____"
],
[
"# plot grouped data by overall related to column reviewText (next as reviews)\ndfo.groupby(dfo.index)['reviewText'].count().plot(kind='bar')",
"_____no_output_____"
],
[
"# add altair chart based on sample solutions\nrating_counts = Counter(df.overall.tolist())\nchart_data = pd.DataFrame(\n {'ratings': [str(e) for e in list(rating_counts.keys())],\n 'counts': list(rating_counts.values())})\nchart = alt.Chart(chart_data).mark_bar().encode(x=\"ratings\", y=\"counts\")\nchart",
"_____no_output_____"
],
[
"# dataset with only two columns (overall, reviewText) as numpy array\nX = df[['overall', 'reviewText']].to_numpy()\nprint('dataset X shape: {} type: {}'.format(X.shape, type(X)))\n# using column overall as label\ny = df['overall'].to_numpy()\nprint('label y shape: {} type: {}'.format(y.shape, type(y)))\n",
"dataset X shape: (497577, 2) type: <class 'numpy.ndarray'>\nlabel y shape: (497577,) type: <class 'numpy.ndarray'>\n"
]
],
[
[
"# Generating small_corpus",
"_____no_output_____"
]
],
[
[
"# predefined sampling strategy\nsampling_strategy = {1.0: 1500, 2.0: 500, 3.0: 500, 4.0: 500, 5.0: 1500}\n\nrandom_state = 42 # to get identical results with sample solution\n\nrus = RandomUnderSampler(random_state=random_state,\n sampling_strategy=sampling_strategy)\nX_res, y_res = rus.fit_resample(X, y)\n\nprint('initial label: {}'.format(Counter(y)))\nprint('result label: {}'.format(Counter(y_res)))",
"initial label: Counter({5.0: 299759, 4.0: 93654, 3.0: 49146, 1.0: 30883, 2.0: 24135})\nresult label: Counter({1.0: 1500, 5.0: 1500, 2.0: 500, 3.0: 500, 4.0: 500})\n"
],
[
"# convert from numpy array back to pandas DataFrame\nsmall_corpus = pd.DataFrame({'ratings': X_res[:, 0], 'reviews': X_res[:, 1]})\n# set ratings column type as int32\nsmall_corpus['ratings'] = small_corpus['ratings'].astype('int32')\n# get info of small_corpus DataFrame with total 1500+500+500+500+1500 entries\nsmall_corpus.info()\nsmall_corpus.head()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 4500 entries, 0 to 4499\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ratings 4500 non-null int32 \n 1 reviews 4496 non-null object\ndtypes: int32(1), object(1)\nmemory usage: 52.9+ KB\n"
],
[
"# export small_corpus to csv (1500+500+500+500+1500), without index\nsmall_corpus.to_csv('small_corpus.csv', index=False)",
"_____no_output_____"
]
],
[
[
"# Generating big_corpus",
"_____no_output_____"
]
],
[
[
"random_state = 42 # to get identical results with sample solution\nnp.random.seed(random_state)\n\n# get 100.000 on random ratings (1-5) as numpy array\nrandom_ratings = np.random.randint(low=1, high=6, size=100000)",
"_____no_output_____"
],
[
"# create sampling strategy by count total ratings on random_ratings (dataframe)\nunique, counts = np.unique(random_ratings, return_counts=True)\nsampling_strategy = {}\nfor k, v in zip(unique, counts):\n sampling_strategy[k] = v\nprint('sampling_strategy: {}'.format(sampling_strategy))",
"sampling_strategy: {1: 20018, 2: 20082, 3: 19732, 4: 19981, 5: 20187}\n"
],
[
"rus = RandomUnderSampler(random_state=random_state,\n sampling_strategy=sampling_strategy)\nX_res, y_res = rus.fit_resample(X, y)\n\nprint('initial label: {}'.format(Counter(y)))\nprint('result label: {}'.format(Counter(y_res)))",
"initial label: Counter({5.0: 299759, 4.0: 93654, 3.0: 49146, 1.0: 30883, 2.0: 24135})\nresult label: Counter({5.0: 20187, 2.0: 20082, 1.0: 20018, 4.0: 19981, 3.0: 19732})\n"
],
[
"# convert from numpy array back to pandas DataFrame\nbig_corpus = pd.DataFrame({'ratings': X_res[:, 0], 'reviews': X_res[:, 1]})\n# set ratings column type as int32\nbig_corpus['ratings'] = big_corpus['ratings'].astype('int32')\nbig_corpus.info()\nbig_corpus.head()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 100000 entries, 0 to 99999\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 ratings 100000 non-null int32 \n 1 reviews 99985 non-null object\ndtypes: int32(1), object(1)\nmemory usage: 1.1+ MB\n"
],
[
"# export big_corpus to csv (100000)\nbig_corpus.to_csv('big_corpus.csv')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c84048e9634a1c8362067bda439c123a33850b | 7,099 | ipynb | Jupyter Notebook | VideotoImages/Video with LIFX Tiles.ipynb | netmanchris/pylifxtiles | f9a77fe0beaabff4c792032d7778a8ad2815e2bd | [
"Apache-2.0"
] | 6 | 2020-04-27T00:55:47.000Z | 2020-10-11T19:16:38.000Z | VideotoImages/Video with LIFX Tiles.ipynb | netmanchris/pylifxtiles | f9a77fe0beaabff4c792032d7778a8ad2815e2bd | [
"Apache-2.0"
] | null | null | null | VideotoImages/Video with LIFX Tiles.ipynb | netmanchris/pylifxtiles | f9a77fe0beaabff4c792032d7778a8ad2815e2bd | [
"Apache-2.0"
] | null | null | null | 26.390335 | 90 | 0.505987 | [
[
[
"# Project Description\n\nAnother CV2 tutorial \n\nthis one from https://pythonprogramming.net/loading-images-python-opencv-tutorial/\n",
"_____no_output_____"
]
],
[
[
"#http://tsaith.github.io/record-video-with-python-3-opencv-3-on-osx.html\n\nimport numpy as np\nimport cv2\n\ncap = cv2.VideoCapture(0) # Capture video from camera\n\n# Get the width and height of frame\nwidth = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) + 0.5)\nheight = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) + 0.5)\n\n# Define the codec and create VideoWriter object\nfourcc = cv2.VideoWriter_fourcc(*'mp4v') # Be sure to use the lower case\nout = cv2.VideoWriter('output.mp4', fourcc, 20.0, (width, height))\n\nwhile(cap.isOpened()):\n ret, frame = cap.read()\n if ret == True:\n frame = cv2.flip(frame,0)\n\n # write the flipped frame\n out.write(frame)\n\n cv2.imshow('frame',frame)\n if (cv2.waitKey(1) & 0xFF) == ord('q'): # Hit `q` to exit\n break\n else:\n break\n\n# Release everything if job is finished\nout.release()\ncap.release()\ncv2.destroyAllWindows()",
"_____no_output_____"
]
],
[
[
"# Writting stuff on an image",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport cv2\n\nimg = cv2.imread('watch.jpg',cv2.IMREAD_COLOR)\ncv2.line(img,(0,0),(200,300),(255,255,255),50)\ncv2.rectangle(img,(500,250),(1000,500),(0,0,255),15)\ncv2.circle(img,(447,63), 63, (0,255,0), -1)\npts = np.array([[100,50],[200,300],[700,200],[500,100]], np.int32)\npts = pts.reshape((-1,1,2))\ncv2.polylines(img, [pts], True, (0,255,255), 3)\nfont = cv2.FONT_HERSHEY_SIMPLEX\ncv2.putText(img,'OpenCV Tuts!',(0,130), font, 1, (200,255,155), 2, cv2.LINE_AA)\nfont = cv2.FONT_HERSHEY_SIMPLEX\ncv2.putText(img,'OpenCV Tuts!',(10,500), font, 6, (200,255,155), 13, cv2.LINE_AA)\ncv2.imshow('image',img)\ncv2.waitKey(0)\ncv2.destroyAllWindows()",
"_____no_output_____"
],
[
"import cv2\nimport numpy as np\n\ncap = cv2.VideoCapture(0)\n\nwhile(1):\n\n # Take each frame\n _, frame = cap.read()\n hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)\n \n lower_red = np.array([30,150,50])\n upper_red = np.array([255,255,180])\n \n mask = cv2.inRange(hsv, lower_red, upper_red)\n res = cv2.bitwise_and(frame,frame, mask= mask)\n\n laplacian = cv2.Laplacian(frame,cv2.CV_64F)\n sobelx = cv2.Sobel(frame,cv2.CV_64F,1,0,ksize=5)\n sobely = cv2.Sobel(frame,cv2.CV_64F,0,1,ksize=5)\n\n cv2.imshow('Original',frame)\n cv2.imshow('Mask',mask)\n cv2.imshow('laplacian',laplacian)\n cv2.imshow('sobelx',sobelx)\n cv2.imshow('sobely',sobely)\n\n k = cv2.waitKey(5) & 0xFF\n if k == 27:\n break\n\ncv2.destroyAllWindows()\ncap.release()",
"_____no_output_____"
],
[
"import cv2\nimport numpy as np\n\ncap = cv2.VideoCapture(0)\n\nwhile(1):\n\n _, frame = cap.read()\n hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)\n \n lower_red = np.array([30,150,50])\n upper_red = np.array([255,255,180])\n \n mask = cv2.inRange(hsv, lower_red, upper_red)\n res = cv2.bitwise_and(frame,frame, mask= mask)\n\n cv2.imshow('Original',frame)\n edges = cv2.Canny(frame,100,200)\n cv2.imshow('Edges',edges)\n\n k = cv2.waitKey(5) & 0xFF\n if k == 27:\n break\n\ncv2.destroyAllWindows()\ncap.release()",
"_____no_output_____"
],
[
"import cv2\nimport numpy as np\n\nimg_rgb = cv2.imread('opencv-template-matching-python-tutorial.jpg')\nimg_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)\n\ntemplate = cv2.imread('opencv-template-for-matching.jpg',0)\nw, h = template.shape[::-1]\n\nres = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)\nthreshold = 0.8\nloc = np.where( res >= threshold)\n\nfor pt in zip(*loc[::-1]):\n cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,255,255), 2)\n\ncv2.imshow('Detected',img_rgb)",
"_____no_output_____"
],
[
"import cv2\n \n# Opens the Video file\ncap= cv2.VideoCapture('IMG_2128.MOV')\ni=0\nwhile(cap.isOpened()):\n ret, frame = cap.read()\n if ret == False:\n break\n cv2.imwrite('kang'+str(i)+'.jpg',frame)\n i+=1\n \ncap.release()\ncv2.destroyAllWindows()",
"_____no_output_____"
],
[
"import cv2\n \n# Opens the Video file\ncap= cv2.VideoCapture('IMG_2128.MOV')\ni=1\nwhile(cap.isOpened()):\n ret, frame = cap.read()\n if ret == False:\n break\n if i%10 == 0:\n cv2.imwrite('kang'+str(i)+'.jpg',frame)\n i+=1\n \ncap.release()\ncv2.destroyAllWindows()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c842ff7dc7f4fc492ce093568a0fe215083760 | 91,808 | ipynb | Jupyter Notebook | Clustering/Hierarchical Clustering Lab.ipynb | sjmiller8182/ML_Class | 17f6d0ae184a113265a3e1a97c667d7b798c6f8f | [
"MIT"
] | null | null | null | Clustering/Hierarchical Clustering Lab.ipynb | sjmiller8182/ML_Class | 17f6d0ae184a113265a3e1a97c667d7b798c6f8f | [
"MIT"
] | null | null | null | Clustering/Hierarchical Clustering Lab.ipynb | sjmiller8182/ML_Class | 17f6d0ae184a113265a3e1a97c667d7b798c6f8f | [
"MIT"
] | null | null | null | 181.080868 | 51,384 | 0.890119 | [
[
[
"# Hierarchical Clustering Lab\nIn this notebook, we will be using sklearn to conduct hierarchical clustering on the [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris) which contains 4 dimensions/attributes and 150 samples. Each sample is labeled as one of the three type of Iris flowers.\n\nIn this exercise, we'll ignore the labeling and cluster based on the attributes, then we'll compare the results of different hierarchical clustering techniques with the original labels to see which one does a better job in this scenario. We'll then proceed to visualize the resulting cluster hierarchies.\n\n## 1. Importing the Iris dataset\n",
"_____no_output_____"
]
],
[
[
"from sklearn import datasets\n\niris = datasets.load_iris()",
"_____no_output_____"
]
],
[
[
"A look at the first 10 samples in the dataset",
"_____no_output_____"
]
],
[
[
"iris.data[:10]",
"_____no_output_____"
]
],
[
[
"```iris.target``` contains the labels that indicate which type of Iris flower each sample is",
"_____no_output_____"
]
],
[
[
"iris.target",
"_____no_output_____"
]
],
[
[
"## 2. Clustering\nLet's now use sklearn's [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to conduct the heirarchical clustering",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import AgglomerativeClustering\n\n# Hierarchical clustering\n# Ward is the default linkage algorithm, so we'll start with that\nward = AgglomerativeClustering(n_clusters=3)\nward_pred = ward.fit_predict(iris.data)",
"_____no_output_____"
]
],
[
[
"Let's also try complete and average linkages\n\n**Exercise**:\n* Conduct hierarchical clustering with complete linkage, store the predicted labels in the variable ```complete_pred```\n* Conduct hierarchical clustering with average linkage, store the predicted labels in the variable ```avg_pred```\n\nNote: look at the documentation of [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to find the appropriate value to pass as the ```linkage``` value",
"_____no_output_____"
]
],
[
[
"# Hierarchical clustering using complete linkage\n# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters\ncomplete = AgglomerativeClustering(n_clusters=3, linkage = 'complete')\n# Fit & predict\n# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels\ncomplete_pred = complete.fit_predict(iris.data)\n\n# Hierarchical clustering using average linkage\n# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters\navg = AgglomerativeClustering(n_clusters = 3, linkage = 'average')\n# Fit & predict\n# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels\navg_pred = avg.fit_predict(iris.data)",
"_____no_output_____"
]
],
[
[
"To determine which clustering result better matches the original labels of the samples, we can use ```adjusted_rand_score``` which is an *external cluster validation index* which results in a score between -1 and 1, where 1 means two clusterings are identical of how they grouped the samples in a dataset (regardless of what label is assigned to each cluster).\n\nCluster validation indices are discussed later in the course.",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import adjusted_rand_score\n\nward_ar_score = adjusted_rand_score(iris.target, ward_pred)",
"_____no_output_____"
]
],
[
[
"**Exercise**:\n* Calculate the Adjusted Rand score of the clusters resulting from complete linkage and average linkage",
"_____no_output_____"
]
],
[
[
"# TODO: Calculated the adjusted Rand score for the complete linkage clustering labels\ncomplete_ar_score = adjusted_rand_score(iris.target, complete_pred)\n\n# TODO: Calculated the adjusted Rand score for the average linkage clustering labels\navg_ar_score = adjusted_rand_score(iris.target, avg_pred)",
"_____no_output_____"
]
],
[
[
"Which algorithm results in the higher Adjusted Rand Score?",
"_____no_output_____"
]
],
[
[
"print( \"Scores: \\nWard:\", ward_ar_score,\"\\nComplete: \", complete_ar_score, \"\\nAverage: \", avg_ar_score)",
"Scores: \nWard: 0.731198556771 \nComplete: 0.642251251836 \nAverage: 0.759198707107\n"
]
],
[
[
"## 3. The Effect of Normalization on Clustering\n\nCan we improve on this clustering result?\n\nLet's take another look at the dataset",
"_____no_output_____"
]
],
[
[
"iris.data[:15]",
"_____no_output_____"
]
],
[
[
"Looking at this, we can see that the forth column has smaller values than the rest of the columns, and so its variance counts for less in the clustering process (since clustering is based on distance). Let us [normalize](https://en.wikipedia.org/wiki/Feature_scaling) the dataset so that each dimension lies between 0 and 1, so they have equal weight in the clustering process.\n\nThis is done by subtracting the minimum from each column then dividing the difference by the range.\n\nsklearn provides us with a useful utility called ```preprocessing.normalize()``` that can do that for us",
"_____no_output_____"
]
],
[
[
"from sklearn import preprocessing\n\nnormalized_X = preprocessing.normalize(iris.data)\nnormalized_X[:10]",
"_____no_output_____"
]
],
[
[
"Now all the columns are in the range between 0 and 1. Would clustering the dataset after this transformation lead to a better clustering? (one that better matches the original labels of the samples)",
"_____no_output_____"
]
],
[
[
"ward = AgglomerativeClustering(n_clusters=3)\nward_pred = ward.fit_predict(normalized_X)\n\ncomplete = AgglomerativeClustering(n_clusters=3, linkage=\"complete\")\ncomplete_pred = complete.fit_predict(normalized_X)\n\navg = AgglomerativeClustering(n_clusters=3, linkage=\"average\")\navg_pred = avg.fit_predict(normalized_X)\n\n\nward_ar_score = adjusted_rand_score(iris.target, ward_pred)\ncomplete_ar_score = adjusted_rand_score(iris.target, complete_pred)\navg_ar_score = adjusted_rand_score(iris.target, avg_pred)\n\nprint( \"Scores: \\nWard:\", ward_ar_score,\"\\nComplete: \", complete_ar_score, \"\\nAverage: \", avg_ar_score)",
"Scores: \nWard: 0.885697031028 \nComplete: 0.644447235392 \nAverage: 0.558371443754\n"
]
],
[
[
"## 4. Dendrogram visualization with scipy\n\nLet's visualize the highest scoring clustering result. \n\nTo do that, we'll need to use Scipy's [```linkage```](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html) function to perform the clusteirng again so we can obtain the linkage matrix it will later use to visualize the hierarchy",
"_____no_output_____"
]
],
[
[
"# Import scipy's linkage function to conduct the clustering\nfrom scipy.cluster.hierarchy import linkage\n\n# Specify the linkage type. Scipy accepts 'ward', 'complete', 'average', as well as other values\n# Pick the one that resulted in the highest Adjusted Rand Score\nlinkage_type = 'ward'\n\nlinkage_matrix = linkage(normalized_X, linkage_type)",
"_____no_output_____"
]
],
[
[
"Plot using scipy's [dendrogram](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.cluster.hierarchy.dendrogram.html) function",
"_____no_output_____"
]
],
[
[
"from scipy.cluster.hierarchy import dendrogram\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(22,18))\n\n# plot using 'dendrogram()'\ndendrogram(linkage_matrix)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 5. Visualization with Seaborn's ```clustermap``` \n\nThe [seaborn](http://seaborn.pydata.org/index.html) plotting library for python can plot a [clustermap](http://seaborn.pydata.org/generated/seaborn.clustermap.html), which is a detailed dendrogram which also visualizes the dataset in more detail. It conducts the clustering as well, so we only need to pass it the dataset and the linkage type we want, and it will use scipy internally to conduct the clustering",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\n\nsns.clustermap(normalized_X, figsize=(12,18), method=linkage_type, cmap='viridis')\n\n# Expand figsize to a value like (18, 50) if you want the sample labels to be readable\n# Draw back is that you'll need more scrolling to observe the dendrogram\n\nplt.show()",
"C:\\Program Files\\Anaconda3\\lib\\site-packages\\matplotlib\\cbook.py:136: MatplotlibDeprecationWarning: The axisbg attribute was deprecated in version 2.0. Use facecolor instead.\n warnings.warn(message, mplDeprecation, stacklevel=1)\n"
]
],
[
[
"Looking at the colors of the dimensions can you observe how they differ between the three type of flowers? You should at least be able to notice how one is vastly different from the two others (in the top third of the image).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c84e7ddf33dadddd816d5fb33e34f3d9871a6e | 38,628 | ipynb | Jupyter Notebook | Exercise01/Exercise01.ipynb | Develop-Packt/Exploring-Absenteeism-at-Work | 9523f320ced42fa2fec6b49655f1278c0aba5a09 | [
"MIT"
] | 1 | 2020-12-29T11:17:55.000Z | 2020-12-29T11:17:55.000Z | Exercise01/Exercise01.ipynb | Develop-Packt/Exploring-Absenteeism-at-Work | 9523f320ced42fa2fec6b49655f1278c0aba5a09 | [
"MIT"
] | null | null | null | Exercise01/Exercise01.ipynb | Develop-Packt/Exploring-Absenteeism-at-Work | 9523f320ced42fa2fec6b49655f1278c0aba5a09 | [
"MIT"
] | 1 | 2021-02-25T16:41:39.000Z | 2021-02-25T16:41:39.000Z | 50.826316 | 8,076 | 0.443797 | [
[
[
"import pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n# import data from the github page of the book\ndata = pd.read_csv('https://raw.githubusercontent.com/Develop-Packt/Exploring-Absenteeism-at-Work/master/data/Absenteeism_at_work.csv', sep=\";\")",
"_____no_output_____"
],
[
"# print dimensionality of the data, columns, types and missing values\nprint(f\"Data dimension: {data.shape}\")\nfor col in data.columns:\n print(f\"Column: {col:35} | type: {str(data[col].dtype):7} | missing values: {data[col].isna().sum():3d}\")",
"Data dimension: (740, 21)\nColumn: ID | type: int64 | missing values: 0\nColumn: Reason for absence | type: int64 | missing values: 0\nColumn: Month of absence | type: int64 | missing values: 0\nColumn: Day of the week | type: int64 | missing values: 0\nColumn: Seasons | type: int64 | missing values: 0\nColumn: Transportation expense | type: int64 | missing values: 0\nColumn: Distance from Residence to Work | type: int64 | missing values: 0\nColumn: Service time | type: int64 | missing values: 0\nColumn: Age | type: int64 | missing values: 0\nColumn: Work load Average/day | type: float64 | missing values: 0\nColumn: Hit target | type: int64 | missing values: 0\nColumn: Disciplinary failure | type: int64 | missing values: 0\nColumn: Education | type: int64 | missing values: 0\nColumn: Son | type: int64 | missing values: 0\nColumn: Social drinker | type: int64 | missing values: 0\nColumn: Social smoker | type: int64 | missing values: 0\nColumn: Pet | type: int64 | missing values: 0\nColumn: Weight | type: int64 | missing values: 0\nColumn: Height | type: int64 | missing values: 0\nColumn: Body mass index | type: int64 | missing values: 0\nColumn: Absenteeism time in hours | type: int64 | missing values: 0\n"
],
[
"# compute statistics on numerical features\ndata.describe().T",
"_____no_output_____"
],
[
"# define encoding dictionaries\nmonth_encoding = {1: \"January\", 2: \"February\", 3: \"March\", 4: \"April\", \n 5: \"May\", 6: \"June\", 7: \"July\", 8: \"August\", \n 9: \"September\", 10: \"October\", 11: \"November\", 12: \"December\", 0: \"Unknown\"}\ndow_encoding = {2: \"Monday\", 3: \"Tuesday\", 4: \"Wednesday\", 5: \"Thursday\", 6: \"Friday\"}\nseason_encoding = {1: \"Spring\", 2: \"Summer\", 3: \"Fall\", 4: \"Winter\"}\neducation_encoding = {1: \"high_school\", 2: \"graduate\", 3: \"postgraduate\", 4: \"master_phd\"}\nyes_no_encoding = {0: \"No\", 1: \"Yes\"}\n\n# backtransform numerical variables to categorical\npreprocessed_data = data.copy()\npreprocessed_data[\"Month of absence\"] = preprocessed_data[\"Month of absence\"]\\\n .apply(lambda x: month_encoding[x]) \npreprocessed_data[\"Day of the week\"] = preprocessed_data[\"Day of the week\"]\\\n .apply(lambda x: dow_encoding[x]) \npreprocessed_data[\"Seasons\"] = preprocessed_data[\"Seasons\"]\\\n .apply(lambda x: season_encoding[x]) \npreprocessed_data[\"Education\"] = preprocessed_data[\"Education\"]\\\n .apply(lambda x: education_encoding[x]) \npreprocessed_data[\"Disciplinary failure\"] = preprocessed_data[\"Disciplinary failure\"]\\\n .apply(lambda x: yes_no_encoding[x]) \npreprocessed_data[\"Social drinker\"] = preprocessed_data[\"Social drinker\"]\\\n .apply(lambda x: yes_no_encoding[x]) \npreprocessed_data[\"Social smoker\"] = preprocessed_data[\"Social smoker\"]\\\n .apply(lambda x: yes_no_encoding[x]) ",
"_____no_output_____"
],
[
"# transform columns\npreprocessed_data.head().T",
"_____no_output_____"
]
],
[
[
"**Exercise 01: Identifying Disease Reasons for Absence**",
"_____no_output_____"
]
],
[
[
"# define function, which checks if the provided integer value \n# is contained in the ICD or not\ndef in_icd(val):\n return \"Yes\" if val >= 1 and val <= 21 else \"No\"\n\n# add Disease column\npreprocessed_data[\"Disease\"] = preprocessed_data[\"Reason for absence\"]\\\n .apply(in_icd)\n\n# plot value counts\nplt.figure(figsize=(10, 8))\nsns.countplot(data=preprocessed_data, x='Disease')\nplt.savefig('figs/disease_plot.png', format='png', dpi=300)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c8538ef7b0c05f0a92cd6cf95e2a61641ae131 | 16,796 | ipynb | Jupyter Notebook | How to Win a Data Science Competition Learn from Top Kagglers/Reading materials/Metrics_video3_weighted_median.ipynb | ghali007/Advanced-Machine-Learning-Specialization | 0e0cfb0e4a621821ce960ed6e3437c71d728b614 | [
"MIT"
] | 252 | 2019-02-06T04:15:18.000Z | 2022-03-23T17:38:29.000Z | How to Win a Data Science Competition Learn from Top Kagglers/Reading materials/Metrics_video3_weighted_median.ipynb | aaronsmoss3/Advanced-Machine-Learning-Specialization | 99694b44003d264d586d7c36aac76a5559d7236c | [
"MIT"
] | 1 | 2020-11-19T16:21:07.000Z | 2020-11-19T16:21:07.000Z | How to Win a Data Science Competition Learn from Top Kagglers/Reading materials/Metrics_video3_weighted_median.ipynb | aaronsmoss3/Advanced-Machine-Learning-Specialization | 99694b44003d264d586d7c36aac76a5559d7236c | [
"MIT"
] | 308 | 2019-03-16T18:18:02.000Z | 2022-01-29T10:04:08.000Z | 58.522648 | 11,038 | 0.798285 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Weighted median",
"_____no_output_____"
],
[
"In the video we have discussed that for MAPE metric the best constant prediction is [weighted median](https://en.wikipedia.org/wiki/Weighted_median) with weights\n\n$$w_i = \\frac{\\sum_{j=1}^N \\frac{1}{x_j}}{x_i}$$\n\nfor each object $x_i$.\n\nThis notebook exlpains how to compute weighted median. Let's generate some data first, and then find it's weighted median.",
"_____no_output_____"
]
],
[
[
"N = 5\nx = np.random.randint(low=1, high=100, size=N)\nx",
"_____no_output_____"
]
],
[
[
"1) Compute *normalized* weights:",
"_____no_output_____"
]
],
[
[
"inv_x = 1.0/x\ninv_x",
"_____no_output_____"
],
[
"w = inv_x/sum(inv_x)\nw",
"_____no_output_____"
]
],
[
[
"2) Now sort the normalized weights. We will use `argsort` (and not just `sort`) since we will need indices later.",
"_____no_output_____"
]
],
[
[
"idxs = np.argsort(w)\nsorted_w = w[idxs]\nsorted_w",
"_____no_output_____"
]
],
[
[
"3) Compute [cumulitive sum](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.cumsum.html) of sorted weights",
"_____no_output_____"
]
],
[
[
"sorted_w_cumsum = np.cumsum(sorted_w)\nplt.plot(sorted_w_cumsum); plt.show()\nprint ('sorted_w_cumsum: ', sorted_w_cumsum)",
"_____no_output_____"
]
],
[
[
"4) Now find the index when cumsum hits 0.5:",
"_____no_output_____"
]
],
[
[
"idx = np.where(sorted_w_cumsum>0.5)[0][0]\nidx",
"_____no_output_____"
]
],
[
[
"5) Finally, your answer is sample at that position:",
"_____no_output_____"
]
],
[
[
"pos = idxs[idx]\nx[pos]",
"_____no_output_____"
],
[
"print('Data: ', x)\nprint('Sorted data: ', np.sort(x))\nprint('Weighted median: %d, Median: %d' %(x[pos], np.median(x)))",
"Data: [37 52 21 25 46]\nSorted data: [21 25 37 46 52]\nWeighted median: 25, Median: 37\n"
]
],
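For reuse, steps 2–5 can be collected into a single helper. This is a small sketch that is not part of the original assignment; it assumes `x` is a 1-D NumPy array and `w` holds normalized weights computed as above:

```python
import numpy as np

def weighted_median(x, w):
    # sort the weights, remembering how the data points are permuted
    idxs = np.argsort(w)
    sorted_w_cumsum = np.cumsum(w[idxs])
    # first position where the cumulative weight reaches 0.5
    idx = np.where(sorted_w_cumsum > 0.5)[0][0]
    return x[idxs[idx]]

# usage sketch with the same kind of weights as above
x = np.array([37, 52, 21, 25, 46])
w = (1.0 / x) / np.sum(1.0 / x)
print(weighted_median(x, w))  # prints 25 for this example
```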
[
[
"Thats it! ",
"_____no_output_____"
],
[
"If the procedure looks surprising for you, try to do steps 2--5 assuming the weights are $w_i=\\frac{1}{N}$. That way you will find a simple median (not weighted) of the data. ",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c8636f81803e9e8b7cf0e83992e2ad3e53ed7a | 10,310 | ipynb | Jupyter Notebook | session-three/session_three_blank_template.ipynb | Precel2000/beginners-python | 6296e5d8bcd782caee267bd334abef14f2521fa9 | [
"MIT"
] | 4 | 2020-12-17T11:01:52.000Z | 2021-03-11T01:11:31.000Z | session-three/session_three_blank_template.ipynb | Precel2000/beginners-python | 6296e5d8bcd782caee267bd334abef14f2521fa9 | [
"MIT"
] | 1 | 2020-07-13T15:29:48.000Z | 2020-07-13T15:29:48.000Z | session-three/session_three_blank_template.ipynb | Precel2000/beginners-python | 6296e5d8bcd782caee267bd334abef14f2521fa9 | [
"MIT"
] | 7 | 2020-12-11T18:11:13.000Z | 2021-11-25T21:31:48.000Z | 18.476703 | 272 | 0.517168 | [
[
[
"<a href=\"https://colab.research.google.com/github/warwickdatascience/beginners-python/blob/master/session_three/session_three_blank_template.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"<center>Spotted a mistake? Report it <a href=\"https://github.com/warwickdatascience/beginners-python/issues/new\">here</a></center>",
"_____no_output_____"
],
[
"# Beginner's Python—Session Three Template",
"_____no_output_____"
],
[
"## Comparison Operators",
"_____no_output_____"
],
[
"### Introduction\n",
"_____no_output_____"
],
[
"Use comparison operators to create two Boolean variables, on `True` and one `False`",
"_____no_output_____"
],
[
"Check for equality of two numbers using comparisons",
"_____no_output_____"
],
[
"### Standard Puzzles",
"_____no_output_____"
],
[
"Answer the three comparison questions from the presentation",
"_____no_output_____"
],
[
"Check whether a user inputted number is positive",
"_____no_output_____"
],
[
"Create a list and check if the smallest element is equal to two",
"_____no_output_____"
],
[
"### Bonus Puzzles",
"_____no_output_____"
],
[
"Experiment with using comparison operators on strings. What are you findings?",
"_____no_output_____"
],
[
"## Boolean Operators",
"_____no_output_____"
],
[
"### Introduction",
"_____no_output_____"
],
[
"Use the `and` operator to combine two Boolean values",
"_____no_output_____"
],
[
"Likewise, use `or`",
"_____no_output_____"
],
[
"Negate a Boolean value using `not`",
"_____no_output_____"
],
[
"### Standard Puzzles",
"_____no_output_____"
],
[
"What do you think the Boolean expressions will evaluate to. Make your guess then check",
"_____no_output_____"
],
[
"Create two Boolean expressions, one `True` and one `False`",
"_____no_output_____"
],
[
"### Bonus Puzzles",
"_____no_output_____"
],
[
"Guess and check the value of the complex Boolean expression",
"_____no_output_____"
],
[
"## Control Flow",
"_____no_output_____"
],
[
"### Introduction",
"_____no_output_____"
],
[
"Create an `if` statement to check if a number is positive",
"_____no_output_____"
],
[
"Create an `if` statement with an `elif` statement following it",
"_____no_output_____"
],
[
"Create an `if-elif-else` statement to check the sign of a number",
"_____no_output_____"
],
[
"### Standard Puzzles",
"_____no_output_____"
],
[
"Ask for a users age and use a control flow sequence with two `elif` statements to print an appropriate response",
"_____no_output_____"
],
[
"Create a random number between one and ten",
"_____no_output_____"
],
[
"Ask the user to guess the number and print an appropriate response",
"_____no_output_____"
],
[
"### Bonus Puzzles",
"_____no_output_____"
],
[
"Ask the user for a decimal number and round it. Print in which direction it was rounded in using control flow",
"_____no_output_____"
],
[
"## While Loops",
"_____no_output_____"
],
[
"### Introduction",
"_____no_output_____"
],
[
"Use a loop to print numbers starting at $4$ and increasing by $5$ until this surpasses $20$",
"_____no_output_____"
],
[
"Use a `while` loop to enforce that a user inputs a positive number",
"_____no_output_____"
],
[
"### Standard Puzzles",
"_____no_output_____"
],
[
"Count down from 10 then shout \"Blast off!\"",
"_____no_output_____"
],
[
"### Bonus Puzzles",
"_____no_output_____"
],
[
"Repeat the above puzzle but allow the user to specify a starting point (to avoid an infinite loop, you likely want to validate the input first)",
"_____no_output_____"
],
[
"Repeat your above code using assignment operators",
"_____no_output_____"
],
[
"Change your code so only even numbers in the countdown are printed",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c86cd702317d5b62b1c4daf5e56d319f44f2ec | 7,079 | ipynb | Jupyter Notebook | chapter-8/section2.ipynb | NaiveXu/latex-cookbook | ff5596d7c76de8fc39f008e29fc6dc97805c6336 | [
"MIT"
] | 283 | 2021-03-29T14:49:24.000Z | 2022-03-31T14:34:38.000Z | chapter-8/section2.ipynb | Hi1993Ryan/latex-cookbook | ba6b84e245982254958afa39b788cdf272009a3f | [
"MIT"
] | null | null | null | chapter-8/section2.ipynb | Hi1993Ryan/latex-cookbook | ba6b84e245982254958afa39b788cdf272009a3f | [
"MIT"
] | 57 | 2021-04-20T15:55:45.000Z | 2022-03-31T06:58:12.000Z | 29.619247 | 220 | 0.541178 | [
[
[
"## 8.2 创建超链接\n\n超链接指按内容链接,可以从一个文本内容指向文本其他内容或其他文件、网址等。超链接可以分为文本内链接、网页链接以及本地文件链接。LaTeX提供了`hyperref`宏包,可用于生成超链接。在使用时,只需在前导代码中申明宏包即可,即`\\usepackage{hyperref}`。\n\n### 8.2.1 超链接类型\n\n#### 文本内链接\n\n在篇幅较大的文档中,查阅内容会比较繁琐,因此,往往会在目录中使用超链接来进行文本内容的快速高效浏览。可以使用`hyperref`宏包创建文本内超链接。\n\n【**例8-4**】使用`\\usepackage{hyperref}`创建一个简单的目录链接文本内容的例子。\n\n```tex\n\\documentclass{book}\n\\usepackage{blindtext}\n\\usepackage{hyperref} %超链接包\n\n\\begin{document}\n\n\\frontmatter\n\\tableofcontents\n\\clearpage\n\n\\addcontentsline{toc}{chapter}{Foreword}\n{\\huge {\\bf Foreword}}\n\nThis is foreword.\n\\clearpage\n\n\\mainmatter\n\n\\chapter{First Chapter}\n\nThis is chapter 1.\n\n\n\\clearpage\n\n\\section{First section} \\label{second}\n\nThis is section 1.1.\n\\end{document}\n```\n\n编译后文档如图8.2.1所示。\n\n<p align=\"center\">\n<table>\n<tr>\n<td><img align=\"middle\" src=\"graphics/example8_2_1_1.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_1_2.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_1_3.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_1_4.png\" width=\"300\"></td>\n</tr>\n</table>\n</p>\n\n<center><b>图8.2.4</b> 编译后的文档</center>\n\n在导入 `hyperref` 时必须非常小心,一般而言,它必须是最后一个要导入的包。\n\n#### 网址链接\n\n众所周知,在文档中插入网址之类的文本同样需要用到超链接,同样的,使用`hyperref`宏包可以创建网页超链接。有时我们需要将超链接命名并隐藏网址,这时我们可以使用`href`命令进行插入;有时,我们插入的网址链接太长,但LaTeX不会自动换行,往往会造成格式混乱的问题,这时,我们可以使用`url`工具包,并在该工具包中声明一个参数即可解决这个问题,相关命令为`\\usepackage[hyphens]{url}`。\n\n> 参考[Line breaks in URLs](https://latex.org/forum/viewtopic.php?f=44&t=4022)。\n\n【**例8-5**】在LaTeX中使用`hyperref`及`url`工具包插入网页链接并设置自动换行。\n\n```tex\n\\documentclass[12pt]{article}\n\\usepackage[hyphens]{url}\n\\usepackage{hyperref}\n\n\\begin{document}\n\nThis is the website of open-source latex-cookbook repository: \\href{https://github.com/xinychen/latex-cookbook}{LaTeX-cookbook} or go to the next url: \\url{https://github.com/xinychen/latex-cookbook}.\n\n\\end{document}\n```\n\n编译后文档如图8.2.3所示。\n\n<p align=\"center\">\n<table>\n<tr>\n<td><img align=\"middle\" src=\"graphics/example8_2_2.png\" width=\"300\"></td>\n\n</tr>\n</table>\n</p>\n\n<center><b>图8.2.2</b> 编译后的文档</center>\n\n#### 本地文件链接\n\n有时,需要将文本与本地文件进行链接,`href`命令也可用于打开本地文件。\n\n【**例8-6**】在LaTeX中使用`href`命令打开本地文件。\n\n```tex\n\\documentclass[12pt]{article}\n\\usepackage[hyphens]{url}\n\\usepackage{hyperref}\n\n\\begin{document}\n\nThis is the text of open-source latex-cookbook repository: \\href{run:./LaTeX-cookbook.dox}{LaTeX-cookbook}.\n\n\\end{document}\n```\n\n编译后文档如图8.2.3所示。\n\n<p align=\"center\">\n<table>\n<tr>\n<td><img align=\"middle\" src=\"graphics/example8_2_3.png\" width=\"300\"></td>\n\n</tr>\n</table>\n</p>\n\n<center><b>图8.2.3</b> 编译后的文档</center>\n\n### 8.2.2 超链接格式\n\n当然,有时候为了突出超链接,也可以在工具包`hyperref`中设置特定的颜色,设置的命令为`\\hypersetup`,一般放在前导代码中,例如`colorlinks = true, linkcolor=blue, urlcolor = blue, filecolor=magenta`。默认设置以单色样式的空间字体打印链接,`\\urlstyle{same}`命令将改变这个样式,并以与文本其余部分相同的样式显示链接。\n\n> 参考[Website address](https://latex.org/forum/viewtopic.php?f=44&t=5115)。\n\n【**例8-7**】在LaTeX中使用`hyperref`工具包插入超链接并设置超链接颜色为蓝色。\n\n```tex\n\\documentclass{book}\n\\usepackage{blindtext}\n\\usepackage{hyperref} %超链接包\n\\hypersetup{colorlinks = true, %链接将被着色,默认颜色是红色\n linkcolor=blue, % 内部链接显示为蓝色\n urlcolor = cyan, % 网址链接为青色\n filecolor=magenta} % 本地文件链接为洋红色\n\\urlstyle{same}\n\n\\begin{document}\n\n\\frontmatter\n\\tableofcontents\n\\clearpage\n\n\\addcontentsline{toc}{chapter}{Foreword}\n{\\huge {\\bf Foreword}}\n\nThis is foreword.\n\\clearpage\n\n\\mainmatter\n\n\\chapter{First 
Chapter}\n\nThis is chapter 1.\n\\clearpage\n\n\\section{First section} \\label{second}\n\nThis is section 1.1.\n\nThis is the website of open-source latex-cookbook repository: \\href{https://github.com/xinychen/latex-cookbook}{LaTeX-cookbook} or go to the next url: \\url{https://github.com/xinychen/latex-cookbook}.\n\nThis is the text of open-source latex-cookbook repository: \\href{run:./LaTeX-cookbook.dox}{LaTeX-cookbook} \n\n\\end{document}\n```\n\n编译后文档如图8.2.4所示。\n\n<p align=\"center\">\n<table>\n<tr>\n<td><img align=\"middle\" src=\"graphics/example8_2_4_1.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_4_2.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_4_3.png\" width=\"300\"></td>\n<td><img align=\"middle\" src=\"graphics/example8_2_4_4.png\" width=\"300\"></td>\n</tr>\n</table>\n</p>\n\n<center><b>图8.2.4</b> 编译后的文档</center>\n",
"_____no_output_____"
],
[
"【回放】[**8.1 图表和公式的索引**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-8/section1.ipynb)\n\n【继续】[**8.3 Bibtex用法**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-8/section3.ipynb)",
"_____no_output_____"
],
[
"### License\n\n<div class=\"alert alert-block alert-danger\">\n<b>This work is released under the MIT license.</b>\n</div>",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d0c86f09e3e282b501db6d16811d51d2d6eb476a | 3,418 | ipynb | Jupyter Notebook | Python-for-ML/2-6-asterisk.ipynb | lee-kyubong/DSC2018 | 85e3420bd9a3952837c5abbf05a789f4b97b4038 | [
"MIT"
] | null | null | null | Python-for-ML/2-6-asterisk.ipynb | lee-kyubong/DSC2018 | 85e3420bd9a3952837c5abbf05a789f4b97b4038 | [
"MIT"
] | null | null | null | Python-for-ML/2-6-asterisk.ipynb | lee-kyubong/DSC2018 | 85e3420bd9a3952837c5abbf05a789f4b97b4038 | [
"MIT"
] | null | null | null | 18.084656 | 53 | 0.409011 | [
[
[
"## Asterisk\n- 오픈소스코드에 많이 활용됨\n- '*'을 의미\n- *: 가변인자 활용\n- \\**: 키워드인자 활용(Pandas, Matplotlib)\n- unpacking에 활용",
"_____no_output_____"
]
],
[
[
"#가변인자 활용\ndef asterisk_test(a, *args):\n print(a, args)\n print(type(args))\n\nasterisk_test(1, 2, 3, 4, 5, 6)",
"1 (2, 3, 4, 5, 6)\n<class 'tuple'>\n"
],
[
"#키워드인자 활용\ndef asterisk_test(a, **kargs):\n print(a, kargs)\n print(type(kargs))\n\nasterisk_test(1, b=2, c=3, d=4, e=5, f=6)",
"1 {'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}\n<class 'dict'>\n"
],
[
"#unpacking\ndef asterisk_test(a, args):\n print(a, *args)\n print(type(args))\n\nasterisk_test(1, (2, 3, 4, 5, 6))",
"1 2 3 4 5 6\n<class 'tuple'>\n"
],
[
"a, b, c = ([1, 2], [3, 4], [5, 6])\nprint(a, b, c)\n\ndata = ([1, 2], [3, 4], [5, 6])\nprint(*data)",
"[1, 2] [3, 4] [5, 6]\n[1, 2] [3, 4] [5, 6]\n"
],
[
"def asterisk_test(a, b, c, d):\n print(a, b, c, d)\n \ndata = {'b':1, 'c':2, 'd':3}\n\nasterisk_test(10, **data)",
"10 1 2 3\n"
],
[
"for data in zip(*([1, 2], [3, 4], [5, 6])):\n print(data)",
"(1, 3, 5)\n(2, 4, 6)\n"
],
[
"for data in zip(*([1, 2], [3, 4], [5, 6])):\n print(sum(data))",
"9\n12\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c872565318ff104ed8657ecaffd40c622d045e | 290,730 | ipynb | Jupyter Notebook | model_notebooks/.ipynb_checkpoints/metrics-visualization-by-type-checkpoint.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 26 | 2019-03-30T11:03:51.000Z | 2022-03-10T11:50:42.000Z | model_notebooks/.ipynb_checkpoints/metrics-visualization-by-type-checkpoint.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 1 | 2019-03-30T04:17:39.000Z | 2019-03-30T04:17:39.000Z | model_notebooks/.ipynb_checkpoints/metrics-visualization-by-type-checkpoint.ipynb | buds-lab/building-prediction-benchmarking | 12658f75fc90d3454de3e0d2c210d9aa8b8cbef3 | [
"MIT"
] | 6 | 2019-10-18T16:19:14.000Z | 2021-08-16T15:21:04.000Z | 1,144.606299 | 79,736 | 0.952499 | [
[
[
"import pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn.preprocessing import normalize\nimport seaborn as sns\n\n# list of models\n# Commented few models because they produced very big results which interfere visualization\nmodels = [\n# 'RandomForestRegressor',\n# 'AdaBoostRegressor',\n# 'BaggingRegressor',\n# 'DecisionTreeRegressor',\n 'DummyRegressor',\n 'ExtraTreeRegressor',\n #'ExtraTreesRegressor',\n #'GaussianProcessRegressor',\n #'GradientBoostingRegressor',\n #'HuberRegressor',\n 'KNeighborsRegressor',\n #'MLPRegressor',\n #'PassiveAggressiveRegressor',\n #'RANSACRegressor',\n #'SGDRegressor',\n #'TheilSenRegressor'\n ]\nbuildingtypes = ['Office', 'PrimClass', 'UnivClass', 'UnivDorm', 'UnivLab']",
"_____no_output_____"
],
[
"# Generate different line styles\n# 24 different different lines will be generated\nlineStyles = ['-', '--', '-.', ':']\nlineColors = ['b', 'g', 'r', 'c', 'm', 'y']\nstyles = []\n\nfor j in range(3):\n for i in range(5):\n styles.append(lineColors[i] + lineStyles[(i + j) % 4])",
"_____no_output_____"
],
[
"def visualize(arg):\n for buildingtype in buildingtypes:\n # Draw lines on single plot\n \n plt.style.use('seaborn-whitegrid')\n plt.figure(figsize=(15,3))\n \n \n for i in range(len(models)):\n dataframes = []\n data = pd.read_csv('../results/' + models[i] + '_metrics_' + buildingtype + '.csv')\n data = data.drop(columns=['Unnamed: 0'])\n data['buidingtype'] = buildingtype\n dataframes.append(data)\n result = pd.concat(dataframes)\n \n rows = result[result['buidingtype']==buildingtype]['MAPE']\n # Single line creator\n value, = plt.plot(rows, styles[i], label=models[i])\n\n # Draw plot\n plt.title(buildingtype, loc='left')\n plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)\n plt.ylabel(arg)\n plt.xlabel('Buildings')\n plt.show()\n",
"_____no_output_____"
],
[
"visualize('MAPE')",
"_____no_output_____"
]
],
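With the metric argument now used inside `visualize`, the same helper can be reused for the other metric columns that the box-plot section below also reads from the result CSVs. A usage sketch (not in the original notebook):

```python
# line plots for the remaining metrics
visualize('NMBE')
visualize('CVRSME')
```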
[
[
"# Box plot array visualization\n\nBased on this: https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot",
"_____no_output_____"
]
],
[
[
"f, axes = plt.subplots(5, 3, figsize=(11,11), sharex='col')\nplt.style.use('seaborn-whitegrid')\n\nfor buildingtype in buildingtypes:\n # Draw lines on single plot\n \n \n \n MAPE = {}\n NMBE = {}\n CVRSME = {}\n for i in range(len(models)):\n \n dataframes = []\n data = pd.read_csv('../results/' + models[i] + '_metrics_' + buildingtype + '.csv')\n data = data.drop(columns=['Unnamed: 0'])\n data['buidingtype'] = buildingtype\n dataframes.append(data)\n result = pd.concat(dataframes)\n \n MAPE[models[i]] = result[result['buidingtype']==buildingtype]['MAPE']\n NMBE[models[i]] = result[result['buidingtype']==buildingtype]['NMBE']\n CVRSME[models[i]] = result[result['buidingtype']==buildingtype]['CVRSME']\n \n MAPE_df = pd.DataFrame(MAPE)\n MAPE_df = MAPE_df[MAPE_df<100].melt()\n ax1 = sns.boxplot(data=MAPE_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),0])\n ax1.set(ylabel=buildingtype, xlabel=\"MAPE\")\n \n NMBE_df = pd.DataFrame(NMBE)\n NMBE_df = NMBE_df.melt() #[NMBE_df<100]\n ax2 = sns.boxplot(data=NMBE_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),1])\n ax2.set(ylabel=\"\", xlabel=\"NMBE\", yticks=[])\n \n CVRSME_df = pd.DataFrame(CVRSME)\n CVRSME_df = CVRSME_df.melt() #[NMBE_df<100]\n ax3 = sns.boxplot(data=CVRSME_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),2])\n ax3.set(ylabel=\"\", xlabel=\"CVRSME\", yticks=[])\n \n# sns.boxplot(y=\"b\", x= \"a\", data=rows, orient='v' ) #, ax=axes[0]\n# print(rows)\n # Single line creator\n# value, = plt.plot(rows, styles[i], label=models[i])\n \n# sns.boxplot(y=\"b\", x= \"a\", data=df, orient='v' , ax=axes[0])\n# sns.boxplot(y=\"c\", x= \"a\", data=df, orient='v' , ax=axes[1])",
"_____no_output_____"
]
]
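If the box-plot grid needs to be kept, the figure handle `f` created above can be written to disk. A sketch; the output path is hypothetical and the target directory is assumed to exist:

```python
# persist the 5x3 box-plot grid created above
f.savefig('figs/metrics_boxplots.png', dpi=300, bbox_inches='tight')
```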
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c8734e3243d4c3adc20ec715ae1828bed1e518 | 257,808 | ipynb | Jupyter Notebook | RiskStudy.ipynb | iamjoeker/fair_notebook | 642c83d957890dc9504438d5df2834168abb9280 | [
"MIT"
] | 14 | 2017-05-09T01:10:13.000Z | 2021-04-03T19:10:23.000Z | RiskStudy.ipynb | iamjoeker/fair_notebook | 642c83d957890dc9504438d5df2834168abb9280 | [
"MIT"
] | 1 | 2019-08-26T02:33:31.000Z | 2019-08-26T02:33:31.000Z | RiskStudy.ipynb | iamjoeker/fair_notebook | 642c83d957890dc9504438d5df2834168abb9280 | [
"MIT"
] | 2 | 2017-11-07T17:34:36.000Z | 2018-10-10T21:52:32.000Z | 191.536404 | 78,534 | 0.760562 | [
[
[
"suppressMessages(library(\"mc2d\"))\nlibrary(\"scales\")\nlibrary(\"ggplot2\")\nlibrary(\"gridExtra\")",
"_____no_output_____"
]
],
[
[
"# Risk Study for REPLACE ME\n\nSee the [ISO 27005 Risk Cookbook](http://www.businessofsecurity.com/docs/FAIR%20-%20ISO_IEC_27005%20Cookbook.pdf)\nfor a more detailed explanation of this template.",
"_____no_output_____"
],
[
"# Asset\n\nDefine the asset or assets at risk",
"_____no_output_____"
],
[
"# Threat Community\n\nExplain the threat community. This should include where they operate, how effective they are, and any additional details that help understand them.",
"_____no_output_____"
],
[
"## Threat Capability\n\nDefine the ability for the threat agent to overcome the controls. The guideline for values here are as follows:\n\n|Rating |Value |\n|----------------------|------|\n|Very High (Top 2%) |98-100|\n|High (Top 16%) |84-97 |\n|Moderate |17-84 |\n|Low (Bottom 16%) |3-16 |\n|Very Low (Bottom 2%) |0-2 |",
"_____no_output_____"
]
],
[
[
"tcap.min <- 0\ntcap.likely <- 50\ntcap.max <- 100\ntcap.confidence <- 10",
"_____no_output_____"
]
],
[
[
"# Controls\n\nDefine the controls that resist the threat community. Provide any necessary links and descriptions.",
"_____no_output_____"
],
[
"## Control Strength\n\nDefine the ability of the controls in play to overcome the threat agents.\n\n|Rating |Value |\n|----------------------|------|\n|Very High (Top 2%) |98-100|\n|High (Top 16%) |84-97 |\n|Moderate |17-84 |\n|Low (Bottom 16%) |3-16 |\n|Very Low (Bottom 2%) |0-2 |",
"_____no_output_____"
]
],
[
[
"cs.min <- 0\ncs.likely <- 50\ncs.max <- 100\ncs.confidence <- 10",
"_____no_output_____"
]
],
[
[
"# Threat Event Frequency\n\nThreat Event Frequency. Number assumes an annual value. Example values\nare as follows:\n\n|Rating |Value |\n|---------|------|\n|Very High|> 100 |\n|High |10-100|\n|Moderate |1-10 |\n|Low |> .1 |\n|Very Low |< .1 |",
"_____no_output_____"
]
],
[
[
"tef <- .25",
"_____no_output_____"
]
],
[
[
"# Loss Magnitude\n\nDefine the types of loss that could occur during a loss event for this study.\n\n|Primary |ISO/IEC 27005 Direct Operational Impacts |\n|:--------------|:-------------------------------------------------------------------------|\n|Productivity |The financial replacement value of lost (part of) asset |\n|Response |The cost of acquisition, configuration, and installation of the new asset |\n|Replacement |The cost of suspended operations due to the incident |\n| |Impact results in an information security breach |\n\n|Secondary |ISO/IEC 27005 Indirect Operational Impacts |\n|:-----------------------|:------------------------------------------------------------------------|\n|Competitive Advantage |Opportunity cost |\n|Fines/Judgments |Legal or regulatory actions levied against an organization including bail|\n|Reputation |Potential misuse of information obtained through a security breach |\n| |Violation of statutory or regulatory obligations |\n| |Violation of ethical codes of conduct |",
"_____no_output_____"
],
[
"## Probable Loss\n\nSet the probable amount for a single loss event. This is a combination in dollars of both the primary and secondary loss factors.",
"_____no_output_____"
]
],
[
[
"loss.probable <- 100000",
"_____no_output_____"
]
],
[
[
"## Worst Case Loss\n\nSet the worst case amount a single loss event. This is a combination in dollars of both the primary and secondary loss factors",
"_____no_output_____"
]
],
[
[
"loss.worstCase <- 1000000",
"_____no_output_____"
]
],
[
[
"# Qualified risk based on loss tolerance",
"_____no_output_____"
]
],
[
[
"loss.veryHigh <- 10000000\nloss.high <- 1000000\nloss.moderate <- 100000\nloss.low <- 50000\nloss.veryLow <- 10000",
"_____no_output_____"
]
],
[
[
"# Generate distribution of samples",
"_____no_output_____"
]
],
[
[
"sampleSize <- 100000\ncs <- rpert(sampleSize, cs.min, cs.likely, cs.max, cs.confidence)\ntcap <- rpert(sampleSize, tcap.min, tcap.likely, tcap.max, tcap.confidence)",
"_____no_output_____"
],
[
"csPlot <- ggplot(data.frame(cs), aes(x = cs))\ncsPlot <- csPlot + geom_histogram(aes(y = ..density..), color=\"black\",fill=\"white\", binwidth=1)\ncsPlot <- csPlot + geom_density(fill=\"steelblue\",alpha=2/3)\ncsPlot <- csPlot + theme_bw()\ncsPlot <- csPlot + labs(title=\"Control Strength\", x=\"Sample Value\", y=\"Density\")\ncsPlot <- csPlot + scale_x_continuous(breaks=seq(0,100, by=10))\n\ntcapPlot <- ggplot(data.frame(tcap), aes(x = tcap))\ntcapPlot <- tcapPlot + geom_histogram(aes(y = ..density..), color=\"black\",fill=\"white\", binwidth=1)\ntcapPlot <- tcapPlot + geom_density(fill=\"steelblue\",alpha=2/3)\ntcapPlot <- tcapPlot + theme_bw()\ntcapPlot <- tcapPlot + labs(title=\"Threat Capability\", x=\"Sample Value\", y=\"Density\")\ntcapPlot <- tcapPlot + scale_x_continuous(breaks=seq(0,100, by=10))\n\ngrid.arrange(csPlot, tcapPlot, heights=4:5, ncol=2)",
"_____no_output_____"
]
],
[
[
"# Vulnerability Function",
"_____no_output_____"
]
],
[
[
"CalculateVulnerability <- function() {\n if (sampleSize < 100) {\n stop(\"Sample size needs to be at least 100 to get statistically significant results\")\n }\n\n vulnerability <- 0\n\n for (i in 1:sampleSize) {\n if (tcap[i] > cs[i]) {\n vulnerability <- vulnerability + 1\n }\n }\n\n return(vulnerability / sampleSize)\n}",
"_____no_output_____"
]
],
[
[
"# Loss Event Frequency Function",
"_____no_output_____"
]
],
[
[
"CalculateLossEventFrequency <- function() {\n return(CalculateVulnerability() * tef)\n}",
"_____no_output_____"
]
],
[
[
"# Risk Function",
"_____no_output_____"
]
],
[
[
"CalculateRisk <- function(loss) {\n if (loss >= loss.veryHigh) {\n return(\"Very High\")\n } else if (loss < loss.veryHigh && loss >= loss.high) {\n return(\"High\")\n } else if (loss < loss.high && loss >= loss.moderate) {\n return(\"Moderate\")\n } else if (loss < loss.moderate && loss >= loss.veryLow) {\n return(\"Low\")\n } else {\n return(\"Very Low\")\n }\n}",
"_____no_output_____"
]
],
[
[
"# Annualized Loss Function",
"_____no_output_____"
]
],
[
[
"CalculateAnnualizedLoss <- function(lef, lm) {\n return(lm * lef)\n}",
"_____no_output_____"
]
],
[
[
"# Calculate",
"_____no_output_____"
]
],
[
[
"lossEventFrequency <- CalculateLossEventFrequency()\nworstCaseLoss <- CalculateAnnualizedLoss(lossEventFrequency, loss.worstCase)\nprobableLoss <- CalculateAnnualizedLoss(lossEventFrequency, loss.probable)\nworstCaseRisk <- CalculateRisk(worstCaseLoss)\nprobableRisk <- CalculateRisk(probableLoss)",
"_____no_output_____"
]
],
[
[
"# Final Results",
"_____no_output_____"
]
],
[
[
"cat(\"Probable Risk:\", probableRisk, dollar_format()(probableLoss), \"\\n\")\ncat(\"Worst Case Risk:\", worstCaseRisk, dollar_format()(worstCaseLoss), \"\\n\")",
"Probable Risk: Low $12,510 \nWorst Case Risk: Moderate $125,100 \n"
]
],
[
[
"# Risk Treatments\n\nDocument any risk treatments that may come out of this study.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c875d1dd7e59957be5399d50d29e740b77e1e3 | 562,555 | ipynb | Jupyter Notebook | examples/tests/example_integration_arma_hsmm.ipynb | ttesileanu/bio-time-series | da14482422b56c2e750a0044866788f4a87dde12 | [
"MIT"
] | null | null | null | examples/tests/example_integration_arma_hsmm.ipynb | ttesileanu/bio-time-series | da14482422b56c2e750a0044866788f4a87dde12 | [
"MIT"
] | null | null | null | examples/tests/example_integration_arma_hsmm.ipynb | ttesileanu/bio-time-series | da14482422b56c2e750a0044866788f4a87dde12 | [
"MIT"
] | 1 | 2022-03-07T22:22:24.000Z | 2022-03-07T22:22:24.000Z | 1,820.566343 | 370,836 | 0.959151 | [
[
[
"# Testing ARMA hidden semi-Markov models",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport seaborn as sns\nimport time\n\nfrom types import SimpleNamespace\n\nfrom bioslds import sources\nfrom bioslds.arma import Arma, make_random_arma\nfrom bioslds.arma_hsmm import sample_switching_models, ArmaHSMM\nfrom bioslds.plotting import FigureManager",
"_____no_output_____"
]
],
[
[
"## Test `sample_switching_models`",
"_____no_output_____"
],
[
"### Generate a sawtooth signal",
"_____no_output_____"
]
],
[
[
"sawtooth = SimpleNamespace(\n arma1=Arma([1.0], [], bias=0.05, default_source=sources.Constant(0)),\n arma2=Arma([1.0], [], bias=-0.05, default_source=sources.Constant(0)),\n usage_seq=np.tile(np.repeat([0, 1], 20), 10),\n)\nsawtooth.n = len(sawtooth.usage_seq)\n\nsawtooth.sig = sample_switching_models(\n [sawtooth.arma1, sawtooth.arma2], sawtooth.usage_seq\n)",
"_____no_output_____"
],
[
"with FigureManager() as (_, ax):\n ax.plot(sawtooth.sig)",
"_____no_output_____"
]
],
[
[
"### Generate a noisy step signal",
"_____no_output_____"
]
],
[
[
"rng = np.random.default_rng(1)\nnoisy_step = SimpleNamespace(\n arma1=Arma([0.8], [], bias=1.0, default_source=sources.GaussianNoise(1, scale=0.1)),\n arma2=Arma(\n [0.75], [], bias=-0.5, default_source=sources.GaussianNoise(2, scale=0.1)\n ),\n arma3=Arma(\n [0.85], [], bias=0.2, default_source=sources.GaussianNoise(3, scale=0.1)\n ),\n usage_seq=np.repeat(rng.integers(low=0, high=3, size=10), 20),\n)\nnoisy_step.n = len(sawtooth.usage_seq)\n\nnoisy_step.sig = sample_switching_models(\n [noisy_step.arma1, noisy_step.arma2, noisy_step.arma3], noisy_step.usage_seq\n)",
"_____no_output_____"
],
[
"with FigureManager() as (_, ax):\n ax.plot(noisy_step.sig, \"k\")\n ax.set_xlabel(\"time step\")\n ax.set_ylabel(\"signal\")\n \n ax2 = ax.twinx()\n ax2.plot(noisy_step.usage_seq, c=\"C1\", ls=\"--\")\n ax2.set_ylabel(\"state\", color=\"C1\")\n ax2.tick_params(axis=\"y\", labelcolor=\"C1\")\n ax2.spines[\"right\"].set_color(\"C1\")\n \n ax2.set_yticks([0, 1, 2])\n sns.despine(ax=ax2, left=True, right=False, offset=10, bottom=True)",
"_____no_output_____"
]
],
[
[
"## Test `ArmaHSMM`",
"_____no_output_____"
],
[
"### Generate a signal with switching ARs, using minimal dwell time",
"_____no_output_____"
]
],
[
[
"random_switching = SimpleNamespace(\n arma1=Arma([0.8], [], default_source=sources.GaussianNoise(1)),\n arma2=Arma([-0.5], [], default_source=sources.GaussianNoise(2)),\n n=10000,\n)\n\nrandom_switching.arma_hsmm = ArmaHSMM(\n [random_switching.arma1, random_switching.arma2],\n min_dwell=15,\n dwell_times=[25, 35],\n)\n\n(\n random_switching.sig,\n random_switching.u,\n random_switching.usage_seq,\n) = random_switching.arma_hsmm.transform(\n random_switching.n, return_input=True, return_usage_seq=True\n)",
"_____no_output_____"
],
[
"with FigureManager() as (_, ax):\n ax.plot(random_switching.sig[:100], \"k\")\n ax.set_xlabel(\"time step\")\n ax.set_ylabel(\"signal\")\n\n ax2 = ax.twinx()\n ax2.plot(random_switching.usage_seq[:100], c=\"C1\", ls=\"--\")\n ax2.set_ylabel(\"state\", color=\"C1\")\n ax2.tick_params(axis=\"y\", labelcolor=\"C1\")\n ax2.spines[\"right\"].set_color(\"C1\")\n\n ax2.set_yticks([0, 1])\n sns.despine(ax=ax2, left=True, right=False, offset=10, bottom=True)",
"_____no_output_____"
],
[
"with FigureManager(1, 2) as (_, axs):\n for i, ax in enumerate(axs):\n crt_sig = random_switching.sig[random_switching.usage_seq == i]\n ax.scatter(crt_sig[1:], crt_sig[:-1], alpha=0.05, label=\"actual\")\n xl = ax.get_xlim()\n ax.plot(\n xl,\n random_switching.arma_hsmm.models[i].a[0] * np.asarray(xl),\n \"k--\",\n label=\"expected\",\n )\n\n leg_h = ax.legend(frameon=False)\n for crt_lh in leg_h.legendHandles:\n crt_lh.set_alpha(1)\n\n ax.set_xlabel(\"$y_t$\")\n ax.set_ylabel(\"$y_{t-1}$\")\n ax.set_title(f\"State {i}\")",
"_____no_output_____"
]
]
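A quick, optional sanity check that is not in the original notebook: the empirical dwell times of the sampled `usage_seq` can be computed directly with NumPy. How these relate to `min_dwell` and `dwell_times` depends on the `ArmaHSMM` semantics, which is an assumption here:

```python
import numpy as np

usage = np.asarray(random_switching.usage_seq)
# indices where the active model changes
change_points = np.flatnonzero(np.diff(usage)) + 1
# lengths of the constant-state runs between changes
dwells = np.diff(np.concatenate(([0], change_points, [len(usage)])))
print(dwells.min(), dwells.mean(), dwells.max())
```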
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c87e5f2c83c103800e00dbee3b756962139d28 | 123,287 | ipynb | Jupyter Notebook | Planarity check example.ipynb | Keksozavr/graph_planarity | 51d2a6929f4cb3b013ccf13f92d98549963fbbee | [
"MIT"
] | 2 | 2019-12-05T22:53:14.000Z | 2020-11-02T13:23:07.000Z | Planarity check example.ipynb | Keksozavr/graph_planarity | 51d2a6929f4cb3b013ccf13f92d98549963fbbee | [
"MIT"
] | null | null | null | Planarity check example.ipynb | Keksozavr/graph_planarity | 51d2a6929f4cb3b013ccf13f92d98549963fbbee | [
"MIT"
] | 1 | 2019-12-05T22:53:22.000Z | 2019-12-05T22:53:22.000Z | 343.417827 | 32,636 | 0.927308 | [
[
[
"from planaritychecker import PlanarityChecker\nfrom numpy.random import random, randint\nimport networkx as nx\nfrom planarity.planarity_networkx import planarity\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"# Check $K_5$ and $K_{3,3}$ without one edge ",
"_____no_output_____"
]
],
[
[
"almost_K5 = PlanarityChecker(5)\ngraph_almost_K5 = nx.Graph()\ngraph_almost_K5.add_nodes_from(range(5))\nfor i in range(5):\n for j in range(i + 1, 5):\n if (i != 0 or j != 1):\n almost_K5.add_edge(i, j)\n graph_almost_K5.add_edge(i, j)\nnx.draw(graph_almost_K5)\nprint(\"almost K5. number of edges: %d, is planar: %d\" % (almost_K5.edges_count, almost_K5.is_planar()))",
"almost K5. number of edges: 9, is planar: 1\n"
],
[
"almost_K33 = PlanarityChecker(6)\ngraph_almost_K33 = nx.Graph()\ngraph_almost_K33.add_nodes_from(range(6))\nfor i in range(3):\n for j in range(3, 6):\n if i != 1 or j != 4:\n almost_K33.add_edge(i, j)\n graph_almost_K33.add_edge(i, j)\nnx.draw(graph_almost_K33)\nprint(\"Almost K3,3. number of edges: %d, is planar: %d\" % (almost_K33.edges_count, almost_K33.is_planar()))",
"Almost K3,3. number of edges: 8, is planar: 1\n"
]
],
[
[
"# Check $K_5$ and $K_{3,3}$",
"_____no_output_____"
]
],
[
[
"K5 = almost_K5\nK5.add_edge(0, 1)\ngraph_K5 = graph_almost_K5\ngraph_K5.add_edge(0, 1)\nnx.draw(graph_K5)\nprint(\"K5. number of edges: %d, is planar: %d\" % (K5.edges_count, K5.is_planar()))",
"K5. number of edges: 10, is planar: 0\n"
],
[
"K33 = almost_K33\nK33.add_edge(1, 4)\ngraph_K33 = graph_almost_K33\ngraph_K33.add_edge(1, 4)\nnx.draw(graph_K33)\nprint(\"K33. number of edges: %d, is planar: %d\" % (K33.edges_count, K33.is_planar()))",
"K33. number of edges: 9, is planar: 0\n"
]
],
[
[
"# Stress test\n# Generate a lot of graphs with probability of every edge=$p$ and check planarity with PlanarityChecker and planarity library (https://pypi.org/project/planarity/)",
"_____no_output_____"
]
],
[
[
"def generate_graphs(n, p):\n \"\"\"Generate Graph and nx.Graph with n vertexes, where p is a probability of edge existance\"\"\"\n G = PlanarityChecker(n)\n nx_G = nx.Graph() \n nx_G.add_nodes_from(range(n))\n for i in range(n):\n for j in range(i + 1, n):\n if random() < p:\n G.add_edge(i, j)\n nx_G.add_edge(i, j)\n return (G, nx_G)\n\nn_planar, n_notplanar = 0, 0\nfor i in range(1000):\n G, nxG = generate_graphs(100, 0.02) \n if G.is_planar() != planarity.is_planar(nxG):\n print(\"Custom: %d, Library: %d\" % (G.is_planar(), planarity.is_planar(nxG)))\n nx.draw(nxG)\n break\n else:\n if (G.is_planar()):\n n_planar += 1\n else:\n n_notplanar += 1\nprint(n_planar, n_notplanar)",
"141 859\n"
]
],
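Not part of the original stress test, but a useful aside: Euler's bound gives a cheap necessary condition for planarity (a simple planar graph with $n \ge 3$ vertices has at most $3n-6$ edges), which can pre-filter obviously non-planar graphs before running the full check. A minimal sketch:

```python
def maybe_planar(n_vertices, n_edges):
    # Returns False only when the edge count already rules out planarity;
    # True means "possibly planar" and the full test is still needed.
    if n_vertices < 3:
        return True
    return n_edges <= 3 * n_vertices - 6

# rough expected edge count for the n=100, p=0.02 graphs above: the filter does not rule them out
print(maybe_planar(100, int(0.02 * 100 * 99 / 2)))
```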
[
[
"# It works correctly. Check execution time",
"_____no_output_____"
]
],
[
[
"n = 20000\nm = 40000\nG = PlanarityChecker(n)\nedges = set()\nfor i in range(m):\n a = randint(0, n)\n b = randint(0, n)\n while (a, b) in edges or a == b:\n a = randint(0, n)\n b = randint(0, n)\n edges.add((a, b))\nfor e in edges:\n G.add_edge(e[0], e[1])",
"_____no_output_____"
],
[
"import sys\nsys.setrecursionlimit(20000)",
"_____no_output_____"
],
[
"%%time\nG.is_planar()",
"CPU times: user 382 ms, sys: 10.9 ms, total: 393 ms\nWall time: 391 ms\n"
],
[
"nx_G = nx.Graph()\nnx_G.add_edges_from(edges)",
"_____no_output_____"
],
[
"%%time\nplanarity.is_planar(nx_G)",
"CPU times: user 82.8 ms, sys: 9.68 ms, total: 92.4 ms\nWall time: 89.6 ms\n"
]
],
[
[
"# Not bad for python. (planarity library has implementation on C)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c88975ec842c434014fe45ecc49bbbe52112b2 | 26,564 | ipynb | Jupyter Notebook | notebooks-notworking/flyingpigeon.ipynb | Ouranosinc/PAVICS-e2e-workflow-tests- | f8e5141bb85698ddc1ec62953494cc9701872571 | [
"Apache-2.0"
] | 1 | 2020-04-03T18:05:59.000Z | 2020-04-03T18:05:59.000Z | notebooks-notworking/flyingpigeon.ipynb | Ouranosinc/PAVICS-e2e-workflow-tests- | f8e5141bb85698ddc1ec62953494cc9701872571 | [
"Apache-2.0"
] | 66 | 2019-06-03T16:29:40.000Z | 2022-03-18T19:24:36.000Z | notebooks-notworking/flyingpigeon.ipynb | Ouranosinc/PAVICS-e2e-workflow-tests- | f8e5141bb85698ddc1ec62953494cc9701872571 | [
"Apache-2.0"
] | 2 | 2019-09-27T12:59:07.000Z | 2019-12-09T08:55:23.000Z | 48.386157 | 538 | 0.651446 | [
[
[
"from __future__ import print_function\nimport os\nfrom netCDF4 import Dataset\nimport requests\nfrom lxml import etree \nimport matplotlib.pyplot as plt\nfrom owslib.wps import WebProcessingService, ComplexDataInput \n",
"_____no_output_____"
],
[
"verify_ssl = True if 'DISABLE_VERIFY_SSL' not in os.environ else False\n\ndef parseStatus(execute):\n o = requests.get(execute.statusLocation, verify=verify_ssl)\n t = etree.fromstring(o.content)\n ref = t.getchildren()[-1].getchildren()[-1].getchildren()[-1].get('{http://www.w3.org/1999/xlink}href')\n \n return ref",
"_____no_output_____"
],
[
"# catalogue WPS url\nwpsURL = 'https://pavics.ouranos.ca/twitcher/ows/proxy/catalog/pywps'\n\n# Connection \nwpsCatalogue = WebProcessingService(url=wpsURL, verify=verify_ssl)",
"_____no_output_____"
],
[
"for process in wpsCatalogue.processes:\n print ('%s \\t : %s \\n' %(process.identifier, process.abstract))",
"getpoint \t : Return a single value from a NetCDF file at the given grid coordinates. \n\nncplotly \t : Return a dictionary storing the data necessary to create a simple plotly time series. \n\npavicrawler \t : Crawl thredds server and write metadata to SOLR database. \n\npavicsearch \t : Search the PAVICS database and return a catalogue of matches. \n\npavicsupdate \t : Update database entries using key:value pairs and identified by their ids. \n\npavicsvalidate \t : Query database entries for missing required facets. \n\nperiod2indices \t : The final index is inclusive. \n\npavicstestdocs \t : Add test documents to Solr index. \n\n"
],
[
"wpsURL = 'https://pavics.ouranos.ca/twitcher/ows/proxy/flyingpigeon/wps'\nwpsFP = WebProcessingService(wpsURL, verify=verify_ssl)\nprint(wpsFP.identification.title)",
"Flyingpigeon 1.1_dev\n"
],
[
"for process in wpsFP.processes:\n print ('%s \\t : %s \\n' %(process.identifier, process.abstract))",
"subset_countries \t : Return the data whose grid cells intersect the selected countries for each input dataset. \n\nsubset_continents \t : Return the data whose grid cells intersect the selected continents for each input dataset. \n\nsubset_regionseurope \t : Return the data whose grid cells inteserct the selected regions for each input dataset. \n\npointinspection \t : Extract the timeseries at the given coordinates. \n\nlandseamask \t : Mask grid cells according to their land area fraction. This process uses the ESGF datastore to access an appropriate land/sea mask. \n\nfetch_resources \t : Fetch data resources (limited to 50GB) to the local filesystem of the birdhouse compute provider. \n\nindices_percentiledays \t : Climatological percentile for each day of the year computed over the entire dataset. \n\nindices_single \t : Climate index calculated from one daily input variable. \n\nsdm_gbiffetch \t : Species occurence search in Global Biodiversity Infrastructure Facillity (GBIF) \n\nsdm_getindices \t : Indices preparation for SDM process \n\nsdm_csvindices \t : Indices preparation for SDM process \n\nsdm_csv \t : Indices preparation for SDM process \n\nsdm_allinone \t : Indices preparation for SDM process \n\nweatherregimes_reanalyse \t : k-mean cluster analyse of the pressure patterns. Clusters are equivalent to weather regimes \n\nweatherregimes_projection \t : k-mean cluster analyse of the pressure patterns. Clusters are equivalent to weather regimes \n\nweatherregimes_model \t : k-mean cluster analyse of the pressure patterns. Clusters are equivalent to weather regimes \n\nplot_timeseries \t : Outputs some timeseries of the file field means. Spaghetti and uncertainty plot \n\nsegetalflora \t : Species biodiversity of segetal flora. \n\nspatial_analog \t : Spatial analogs based on the comparison of climate indices. The algorithm compares the distribution of the target indices with the distribution of spatially distributed candidate indices and returns a value measuring the dissimilarity between both distributions over the candidate grid. \n\nmap_spatial_analog \t : Produce map showing the dissimilarity values computed by the spatial_analog process as well as indicating by a marker the location of the target site. \n\nsubset \t : Return the data for which grid cells intersect the selected polygon for each input dataset as well asthe time range selected. \n\naverager \t : Return the data with weighted average of grid cells intersecting the selected polygon for each input dataset as well as the time range selected. \n\nsubset_WFS \t : Return the data for which grid cells intersect the selected polygon for each input dataset. \n\naverager_WFS \t : Return the data with weighted average of grid cells intersecting the selected polygon for each input dataset. \n\nsubset_bbox \t : Return the data for which grid cells intersect the bounding box for each input dataset as well asthe time range selected. \n\naverager_bbox \t : Return the data with weighted average of grid cells intersecting the bounding box for each input dataset as well as the time range selected. \n\nouranos_public_indicators \t : Compute climate indicators: mean daily temp., min daily temp., max daily temp., growing degree days, number of days above 30C, freeze thaw cycles, total precipitation, and max 5-day precip. \n\nncmerge \t : Merge NetCDF files in the time dimension. 
\n\nEO_COPERNICUS_search \t : Search for EO Data in the scihub.copernicus archiveoutput is a list of Product according to the querry and a graphical visualisation. \n\nEO_COPERNICUS_fetch \t : Search for EO Data in the scihub.copernicus archiveproducts will be fechted into the local disc system.outuput is a list of produces and a graphical visualisation. \n\nesmf_regrid \t : Regrid netCDF files to a destination grid. \n\nEO_COPERNICUS_rgb \t : Based on a search querry the appropriate products are ploted as RGB graphics \n\nEO_COPERNICUS_indices \t : Derivateing indices like NDVI based on \n\nkddm_bc \t : Bias correction method using Kernel Density Distribution Mapping (KDDM). \n\nfreezethaw \t : Number of freeze-thaw events, where freezing and thawing occurs once a threshold of degree days below or above 0C is reached. A complete cycle (freeze-thaw-freeze) will return a value of 2. \n\nduration \t : Summarizes consecutive occurrences in a sequence where the logical operation returns TRUE. The summary operation is applied to the sequences within a temporal aggregation. \n\nicclim_TXx \t : Calculates the TXx indice: maximum of daily maximum temperature. \n\nicclim_SD \t : Calculates the SD indice: mean of daily snow depth [cm] \n\nicclim_TX90p \t : Calculate the TX90p indice: number of warm days-times (i.e. days with daily max temperature > 90th percentile of daily max temperature in the base period). \n\nicclim_R99pTOT \t : Calculate the R99pTOT indice: precipitation fraction due to extremely wet days (i.e. days with daily precipitation amount > 99th percentile of daily amount in the base period) [%] \n\nicclim_TXn \t : Calculates the TXn indice: minimum of daily maximum temperature. \n\nicclim_CDD \t : Calculates the CDD indice: maximum number of consecutive dry days (i.e. days with daily precipitation amount < 1 mm) [days]. \n\nicclim_TG90p \t : Calculate the TG90p indice: number of warm days (i.e. days with daily mean temperature > 90th percentile of daily mean temperature in the base period). \n\nicclim_SU \t : Calculates the SU indice: number of summer days (i.e. days with daily maximum temperature > 25 degrees Celsius) [days]. \n\nicclim_CFD \t : Calculates the CFD indice: maximum number of consecutive frost days (i.e. days with daily minimum temperature < 0 degrees Celsius) [days]. \n\nicclim_TN10p \t : Calculate the TN10p indice: number of cold nights (i.e. days with daily min temperature < 10th percentile of daily min temperature in the base period). \n\nicclim_TG \t : Calculates the TG indice: mean of daily mean temperature. \n\nicclim_TN90p \t : Calculate the TN90p indice: number of warm nights (i.e. days with daily min temperature > 90th percentile of daily min temperature in the base period). \n\nicclim_TR \t : Calculates the TR indice: number of tropical nights (i.e. days with daily minimum temperature > 20 degrees Celsius) [days]. \n\nicclim_RX5day \t : Calculates the RX5day indice: maximum consecutive 5-day precipitation amount [mm] \n\nicclim_vDTR \t : Calculates the vDTR indice: mean absolute day-to-day difference in DTR. \n\nicclim_SD50cm \t : Calculates the SD50cm indice: number of days with snow depth >= 50 cm [days] \n\nicclim_CWD \t : Calculates the CWD indice: maximum number of consecutive wet days (i.e. days with daily precipitation amount > = 1 mm) [days]. \n\nicclim_ID \t : Calculates the ID indice: number of ice days (i.e. days with daily maximum temperature < 0 degrees Celsius) [days]. 
\n\nicclim_R20mm \t : Calculates the R20mm indice: number of very heavy precipitation days (i.e. days with daily precipitation amount > = 20 mm) [days] \n\nicclim_CSU \t : Calculates the CSU indice: maximum number of consecutive summer days (i.e. days with daily maximum temperature > 25 degrees Celsius) [days]. \n\nicclim_RX1day \t : Calculates the RX1day indice: maximum 1-day precipitation amount [mm] \n\nicclim_WSDI \t : Calculate the WSDI indice (warm-spell duration index): number of days where, in intervals of at least 6 consecutive days, \n\nicclim_RR1 \t : Calculates the RR1 indice: number of wet days (i.e. days with daily precipitation amount > = 1 mm) [days] \n\nicclim_CSDI \t : Calculate the CSDI indice (cold-spell duration index): number of days where, in intervals of at least 6 consecutive days, \n\nicclim_R75pTOT \t : Calculate the R75pTOT indice: precipitation fraction due to moderate wet days (i.e. days with daily precipitation amount > 75th percentile of daily amount in the base period) [%] \n\nicclim_R95pTOT \t : Calculate the R95pTOT indice: precipitation fraction due to very wet days (i.e. days with daily precipitation amount > 95th percentile of daily amount in the base period) [%] \n\nicclim_R10mm \t : Calculates the R10mm indice: number of heavy precipitation days (i.e. days with daily precipitation amount > = 10 mm) [days] \n\nicclim_SDII \t : Calculates the SDII (simple daily intensity index) indice: mean precipitation amount of wet days (i.e. days with daily precipitation amount > = 1 mm) [mm] \n\nicclim_DTR \t : Calculates the DTR indice: mean of daily temperature range. \n\nicclim_TG10p \t : Calculate the TG10p indice: number of cold days (i.e. days with daily mean temperature < 10th percentile of daily mean temperature in the base period). \n\nicclim_TX \t : Calculates the TX indice: mean of daily maximum temperature. \n\nicclim_PRCPTOT \t : Calculates the PRCPTOT indice: total precipitation in wet days [mm] \n\nicclim_TN \t : Calculates the TN indice: mean of daily minimum temperature. \n\nicclim_R75p \t : Calculate the R75p indice: number of moderate wet days (i.e. days with daily precipitation amount > 75th percentile of daily amount in the base period). \n\nicclim_TNx \t : Calculates the TNx indice: maximum of daily minimum temperature. \n\nicclim_SD5cm \t : Calculates the SD5cm indice: number of days with snow depth >= 5 cm [days] \n\nicclim_FD \t : Calculates the FD indice: number of frost days (i.e. days with daily minimum temperature < 0 degrees Celsius) [days]. \n\nicclim_R99p \t : Calculate the R99p indice: number of extremely wet days (i.e. days with daily precipitation amount > 99th percentile of daily amount in the base period). \n\nicclim_R95p \t : Calculate the R95p indice: number of very wet days (i.e. days with daily precipitation amount > 95th percentile of daily amount in the base period). \n\nicclim_SD1 \t : Calculates the SD1 indice: number of days with snow depth >= 1 cm [days] \n\nicclim_GD4 \t : Calculates the GD4 indice: growing degree days (sum of daily mean temperature > 4 degrees Celsius). \n\nicclim_TNn \t : Calculates the TNn indice: minimum of daily minimum temperature. \n\nicclim_HD17 \t : Calculates the HD17 indice: heating degree days (sum of (17 degrees Celsius - daily mean temperature)). \n\nicclim_ETR \t : Calculates the ETR indice: intra-period extreme temperature range. \n\nicclim_TX10p \t : Calculate the TX10p indice: number of cold day-times (i.e. 
days with daily max temperature < 10th percentile of daily max temperature in the base period). \n\n"
],
[
"proc_name = 'pavicsearch'\nconstraintString = 'variable:tasmax'\nmaxfiles = '1000000'\nmyinputs = [('constraints', constraintString),('type','File'), ('limit',maxfiles)]\nexecution = wpsCatalogue.execute(identifier=proc_name, inputs=myinputs)\nprint(execution.status)\nprint(execution.processOutputs[-1].reference)",
"ProcessSucceeded\nhttps://pavics.ouranos.ca/wpsoutputs/catalog/f990ae8e-3c6b-11e9-988e-0242ac120008/list_result_2019-03-01T21:50:09Z__2I6iQW.json\n"
],
[
"proc_name = 'pavicsearch'\nprocess = wpsCatalogue.describeprocess(proc_name) # get process info\nfor i in process.dataInputs:\n print('inputs :', i.identifier, ' : ', i.abstract)\nfor i in process.processOutputs:\n print('outputs :', i.identifier, ' : ', i.abstract)",
"inputs : facets : Comma separated list of facets; facets are searchable indexing terms in the database.\ninputs : shards : Shards to be queried\ninputs : offset : Where to start in the document count of the database search.\ninputs : limit : Maximum number of documents to return.\ninputs : fields : Comme separated list of fields to return.\ninputs : format : Output format.\ninputs : query : Direct query to the database.\ninputs : distrib : Distributed query\ninputs : type : One of Dataset, File, Aggregate or FileAsAggregate.\ninputs : constraints : Format is facet1:value1,facet2:value2,...\ninputs : esgf : Whether to also search ESGF nodes.\ninputs : list_type : Can be opendap_url, fileserver_url, gridftp_url, globus_url, wms_url\noutputs : search_result : PAVICS Catalogue Search Result\noutputs : list_result : List of urls of the search result.\n"
],
[
"proc_name = 'subset_bbox'\nprocess = wpsFP.describeprocess(identifier=proc_name)\n\nprint(process.title,' : ',process.abstract,'\\n')\nfor i in process.dataInputs:\n print('inputs :', i.identifier, ' : ', i.abstract)\nfor i in process.processOutputs:\n print('outputs :', i.identifier, ' : ', i.abstract)",
"Subset : Return the data for which grid cells intersect the bounding box for each input dataset as well asthe time range selected. \n\ninputs : resource : NetCDF files, can be OPEnDAP urls.\ninputs : lon0 : Minimum longitude.\ninputs : lon1 : Maximum longitude.\ninputs : lat0 : Minimum latitude.\ninputs : lat1 : Maximum latitude.\ninputs : initial_datetime : Initial datetime for temporal subsetting.\ninputs : final_datetime : Final datetime for temporal subsetting.\ninputs : variable : Name of the variable in the NetCDF file.Will be guessed if not provided.\noutputs : output : JSON file with link to NetCDF outputs.\n"
],
[
"# NBVAL_IGNORE_OUTPUT\n# ignore output of this cell because different PAVICS host will have different quantity of netCDF files\nref = parseStatus(execution)\nr = requests.get(ref, verify=verify_ssl)\nlist_nc = r.json()\nprint('Numer of files found :',len(list_nc), '\\n')\nprint(\"\\n\".join(list_nc[1:15]),'\\n...')",
"Numer of files found : 13026 \n\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r3i1p1/tasmax/tasmax_kdc_198902_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r1i1p1/tasmax/tasmax_kda_206005_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/cb-oura-1.0/HadGEM2-CC/rcp45/day/tasmax/tasmax_day_HadGEM2-CC_rcp45_r1i1p1_na10kgrid_qm-moving-50bins-detrend_2043.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r2i1p1/tasmax/tasmax_kdb_202907_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r1i1p1/tasmax/tasmax_kda_200310_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r3i1p1/tasmax/tasmax_kdc_209701_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r2i1p1/tasmax/tasmax_kdb_199110_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/cb-oura-1.0/GFDL-ESM2M/rcp45/day/tasmax/tasmax_day_GFDL-ESM2M_rcp45_r1i1p1_na10kgrid_qm-moving-50bins-detrend_2046.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r3i1p1/tasmax/tasmax_kdc_201311_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r4i1p1/tasmax/tasmax_kdd_202210_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r2i1p1/tasmax/tasmax_kdb_199406_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/cb-oura-1.0/NorESM1-M/rcp85/day/tasmax/tasmax_day_NorESM1-M_rcp85_r1i1p1_na10kgrid_qm-moving-50bins-detrend_1998.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r4i1p1/tasmax/tasmax_kdd_199403_se.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/ouranos/climex/QC11d3_CCCma-CanESM2_rcp85/day/historical-r1-r2i1p1/tasmax/tasmax_kdb_201703_se.nc \n...\n"
],
[
"nrcan_nc = [i for i in list_nc if 'nrcan' in i and ('1991' in i or '1992' in i or '1993' in i)]\n# sort the filtered list\nnrcan_nc.sort()\n\nprint('Number of files :', \"%s\\n\" % len(nrcan_nc), \"\\n\".join(nrcan_nc))",
"Number of files : 3\n https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1991.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1992.nc\nhttps://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1993.nc\n"
],
[
"nc_test = Dataset(nrcan_nc[0])\nprint(nc_test)",
"<type 'netCDF4._netCDF4.Dataset'>\nroot group (NETCDF3_CLASSIC data model, file format DAP2):\n Conventions: CF-1.5\n title: NRCAN 10km Gridded Climate Dataset\n history: 2012-10-22T11:26:06: Convert from original format to NetCDF\n institution: NRCAN\n source: ANUSPLIN\n redistribution: Redistribution policy unknown. For internal use only.\n DODS_EXTRA.Unlimited_Dimension: time\n dimensions(sizes): time(365), lat(510), lon(1068), ts(3)\n variables(dimensions): float32 \u001b[4mlon\u001b[0m(lon), float32 \u001b[4mlat\u001b[0m(lat), int16 \u001b[4mts\u001b[0m(ts), int16 \u001b[4mtime\u001b[0m(time), int16 \u001b[4mtime_vectors\u001b[0m(time,ts), float32 \u001b[4mtasmax\u001b[0m(time,lat,lon)\n groups: \n\n"
],
[
"myinputs = []\n# To keep things reasonably quick : subset jan-april\nfor i in nrcan_nc: \n myinputs.append(('resource', i))\nmyinputs.append(('lon0', '-80.0'))\nmyinputs.append(('lon1', '-70.0'))\nmyinputs.append(('lat0', '44.0'))\nmyinputs.append(('lat1', '50'))\nprint(myinputs)",
"[('resource', 'https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1991.nc'), ('resource', 'https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1992.nc'), ('resource', 'https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/nrcan/nrcan_canada_daily/tasmax/nrcan_canada_daily_tasmax_1993.nc'), ('lon0', '-80.0'), ('lon1', '-70.0'), ('lat0', '44.0'), ('lat1', '50')]\n"
],
[
"execution = wpsFP.execute(identifier=proc_name, inputs=myinputs)\nprint(execution.status)\nprint(execution.processOutputs[-1].reference)\nprint(execution.statusLocation)",
"ProcessSucceeded\nhttps://pavics.ouranos.ca:443/wpsoutputs/flyingpigeon/0fc900f2-3c6c-11e9-9291-0242ac120010/result_2019-03-01T21:50:56Z__pCYZio.json\nhttps://pavics.ouranos.ca:443/wpsoutputs/flyingpigeon/0fc900f2-3c6c-11e9-9291-0242ac120010.xml\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c890c5898619f65952c7e0363b4f193a4b1703 | 31,821 | ipynb | Jupyter Notebook | notebooks/fileio/wradlib_radar_formats.ipynb | earthobservations/wradlib-notebooks | 8e3ff5df06479ef7a372ecaabc6b34a6c0d5b079 | [
"MIT"
] | null | null | null | notebooks/fileio/wradlib_radar_formats.ipynb | earthobservations/wradlib-notebooks | 8e3ff5df06479ef7a372ecaabc6b34a6c0d5b079 | [
"MIT"
] | null | null | null | notebooks/fileio/wradlib_radar_formats.ipynb | earthobservations/wradlib-notebooks | 8e3ff5df06479ef7a372ecaabc6b34a6c0d5b079 | [
"MIT"
] | null | null | null | 34.513015 | 1,415 | 0.632035 | [
[
[
"This notebook is part of the $\\omega radlib$ documentation: https://docs.wradlib.org.\n\nCopyright (c) $\\omega radlib$ developers.\nDistributed under the MIT License. See LICENSE.txt for more info.",
"_____no_output_____"
],
[
"# Supported radar data formats",
"_____no_output_____"
],
[
"The binary encoding of many radar products is a major obstacle for many potential radar users. Often, decoder software is not easily available. In case formats are documented, the implementation of decoders is a major programming effort. This tutorial provides an overview of the data formats currently supported by $\\omega radlib$. We seek to continuously enhance the range of supported formats, so this document is only a snapshot. If you need a specific file format to be supported by $\\omega radlib$, please [raise an issue](https://github.com/wradlib/wradlib/issues/new) of type *enhancement*. You can provide support by adding documents which help to decode the format, e.g. format reference documents or software code in other languages for decoding the format.\n\nAt the moment, *supported format* means that the radar format can be read and further processed by wradlib. Normally, wradlib will return an array of data values and a dictionary of metadata - if the file contains any. wradlib does not support encoding to any specific file formats, yet! This might change in the future, but it is not a priority. However, you can use Python's netCDF4 or h5py packages to encode the results of your analysis to standard self-describing file formats such as netCDF or hdf5. ",
"_____no_output_____"
],
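[
"Since the results of an analysis are typically plain numpy arrays plus a dictionary of metadata, writing them to a self-describing file is straightforward with the packages mentioned above. The following is only a minimal sketch (file names, variable names and attributes are made up for illustration):\n\n```python\nfrom netCDF4 import Dataset\nimport h5py\nimport numpy as np\n\n# hypothetical processed sweep: 360 azimuths x 128 range bins\nresult = np.zeros((360, 128), dtype='float32')\n\n# netCDF output\nwith Dataset('my_results.nc', 'w') as nc:\n    nc.createDimension('azimuth', result.shape[0])\n    nc.createDimension('range', result.shape[1])\n    var = nc.createVariable('reflectivity', 'f4', ('azimuth', 'range'))\n    var.units = 'dBZ'  # attributes make the file self-describing\n    var[:] = result\n\n# hdf5 output\nwith h5py.File('my_results.h5', 'w') as h5:\n    dset = h5.create_dataset('reflectivity', data=result)\n    dset.attrs['units'] = 'dBZ'\n```",
"_____no_output_____"
],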
[
"In the following, we will provide an overview of file formats which can be currently read by $\\omega radlib$. \n\nReading weather radar files is done via the [wradlib.io](https://docs.wradlib.org/en/latest/io.html) module. There you will find a complete function reference. ",
"_____no_output_____"
]
],
[
[
"import wradlib as wrl\nimport warnings\nwarnings.filterwarnings('ignore')\nimport matplotlib.pyplot as pl\nimport numpy as np\ntry:\n get_ipython().magic(\"matplotlib inline\")\nexcept:\n pl.ion()",
"_____no_output_____"
]
],
[
[
"## German Weather Service: DX format",
"_____no_output_____"
],
[
"The German Weather Service uses the DX file format to encode local radar sweeps. DX data are in polar coordinates. The naming convention is as follows: <pre>raa00-dx_<location-id>-<YYMMDDHHMM>-<location-abreviation>---bin</pre> or <pre>raa00-dx_<location-id>-<YYYYMMDDHHMM>-<location-abreviation>---bin</pre>\n[Read and plot DX radar data from DWD](wradlib_reading_dx.ipynb) provides an extensive introduction into working with DX data. For now, we would just like to know how to read the data:",
"_____no_output_____"
]
],
[
[
"fpath = 'dx/raa00-dx_10908-0806021655-fbg---bin.gz'\nf = wrl.util.get_wradlib_data_file(fpath)\ndata, metadata = wrl.io.read_dx(f)",
"_____no_output_____"
]
],
[
[
"Here, ``data`` is a two dimensional array of shape (number of azimuth angles, number of range gates). This means that the number of rows of the array corresponds to the number of azimuth angles of the radar sweep while the number of columns corresponds to the number of range gates per ray.",
"_____no_output_____"
]
],
[
[
"print(data.shape)\nprint(metadata.keys())",
"_____no_output_____"
],
[
"fig = pl.figure(figsize=(10, 10))\nax, im = wrl.vis.plot_ppi(data, fig=fig, proj='cg')",
"_____no_output_____"
]
],
[
[
"## German Weather Service: RADOLAN (quantitative) composit",
"_____no_output_____"
],
[
"The quantitative composite format of the DWD (German Weather Service) was established in the course of the [RADOLAN project](https://www.dwd.de/DE/leistungen/radolan/radolan.html). Most quantitative composite products from the DWD are distributed in this format, e.g. the R-series (RX, RY, RH, RW, ...), the S-series (SQ, SH, SF, ...), and the E-series (European quantitative composite, e.g. EZ, EH, EB). Please see the [composite format description](https://www.dwd.de/DE/leistungen/radolan/radolan_info/radolan_radvor_op_komposit_format_pdf.pdf?__blob=publicationFile&v=5) for a full reference and a full table of products (unfortunately only in German language). An extensive section covering many RADOLAN aspects is here: [RADOLAN](../radolan.ipynb)\n\nCurrently, the RADOLAN composites have a spatial resolution of 1km x 1km, with the national composits (R- and S-series) being 900 x 900 grids, and the European composits 1500 x 1400 grids. The projection is [polar-stereographic](../radolan/radolan_grid.ipynb#Polar-Stereographic-Projection). The products can be read by the following function:",
"_____no_output_____"
]
],
[
[
"fpath = 'radolan/misc/raa01-rw_10000-1408102050-dwd---bin.gz'\nf = wrl.util.get_wradlib_data_file(fpath)\ndata, metadata = wrl.io.read_radolan_composite(f)",
"_____no_output_____"
]
],
[
[
"Here, ``data`` is a two dimensional integer array of shape (number of rows, number of columns). Different product types might need different levels of postprocessing, e.g. if the product contains rain rates or accumulations, you will normally have to divide data by factor 10. ``metadata`` is again a dictionary which provides metadata from the files header section, e.g. using the keys *producttype*, *datetime*, *intervalseconds*, *nodataflag*. ",
"_____no_output_____"
]
],
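[
[
"For instance, for the RW product (hourly accumulation) the integer values are commonly interpreted as tenths of a millimetre, so the typical postprocessing could look as follows (a sketch; the factor of 10 is an assumption that must be checked against the product documentation):\n\n```python\nprecip_mm = data / 10.0  # assumed scaling for this product\nprint(metadata['producttype'], metadata['datetime'], metadata['intervalseconds'])\n```",
"_____no_output_____"
]
],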
[
[
"print(data.shape)\nprint(metadata.keys())",
"_____no_output_____"
]
],
[
[
"Masking the NoData (or missing) values can be done by:",
"_____no_output_____"
]
],
[
[
"maskeddata = np.ma.masked_equal(data, \n metadata[\"nodataflag\"])",
"_____no_output_____"
],
[
"fig = pl.figure(figsize=(10, 8))\n# get coordinates\nradolan_grid_xy = wrl.georef.get_radolan_grid(900, 900)\nx = radolan_grid_xy[:, :, 0]\ny = radolan_grid_xy[:, :, 1]\n\n# create quick plot with colorbar and title\npl.figure(figsize=(10, 8))\npl.pcolormesh(x, y, maskeddata)",
"_____no_output_____"
]
],
[
[
"## HDF5",
"_____no_output_____"
],
[
"### OPERA HDF5 (ODIM_H5)",
"_____no_output_____"
],
[
"[HDF5](https://www.hdfgroup.org/HDF5/) is a data model, library, and file format for storing and managing data. The [OPERA 3 program](http://www.eumetnet.eu/opera) developed a convention (or information model) on how to store and exchange radar data in hdf5 format. It is based on the work of [COST Action 717](https://e-services.cost.eu/files/domain_files/METEO/Action_717/final_report/final_report-717.pdf) and is used e.g. in real-time operations in the Nordic European countries. The OPERA Data and Information Model (ODIM) is documented e.g. in this [report](https://www.eol.ucar.edu/system/files/OPERA_2008_03_WP2.1b_ODIM_H5_v2.1.pdf). Make use of these documents in order to understand the organization of OPERA hdf5 files!\n\n<div class=\"alert alert-warning\">\n\n**Note** <br>\n\nSince $\\omega radlib$ version 1.3 an [OdimH5](https://docs.wradlib.org/en/stable/generated/wradlib.io.xarray.OdimH5.html) reader based on [Xarray](http://xarray.pydata.org/en/stable/), [netcdf4](https://unidata.github.io/netcdf4-python/) and [h5py](https://www.h5py.org/) is available. Please read the more indepth notebook [wradlib_xarray_radial_odim](wradlib_xarray_radial_odim.ipynb).\n\nA second implementation based on [netcdf4](https://unidata.github.io/netcdf4-python/), [h5py](https://www.h5py.org/), [h5netcdf](https://github.com/shoyer/h5netcdf) and [Xarray](http://xarray.pydata.org/en/stable/) claiming multiple data files and presenting them in a simple structure is available from $\\omega radlib$ version 1.6. See the notebook [wradlib_odim_multi_file_dataset](wradlib_odim_multi_file_dataset.ipynb).\n\n</div>\n\nThe hierarchical nature of HDF5 can be described as being similar to directories, files, and links on a hard-drive. Actual metadata are stored as so-called *attributes*, and these attributes are organized together in so-called *groups*. Binary data are stored as so-called *datasets*. As for ODIM_H5, the ``root`` (or top level) group contains three groups of metadata: these are called ``what`` (object, information model version, and date/time information), ``where`` (geographical information), and ``how`` (quality and optional/recommended metadata). For a very simple product, e.g. a CAPPI, the data is organized in a group called ``dataset1`` which contains another group called ``data1`` where the actual binary data are found in ``data``. In analogy with a file system on a hard-disk, the HDF5 file containing this simple product is organized like this:\n\n```\n /\n /what\n /where\n /how\n /dataset1\n /dataset1/data1\n /dataset1/data1/data\n```\n\nThe philosophy behind the $\\omega radlib$ interface to OPERA's data model is very straightforward: $\\omega radlib$ simply translates the complete file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the \"directory trees\" shown above. Each key ending with ``/data`` points to a Dataset (i.e. a numpy array of data). Each key ending with ``/what``, ``/where`` or ``/how`` points to another dictionary of metadata. The entire output can be obtained by:",
"_____no_output_____"
]
],
[
[
"fpath = 'hdf5/knmi_polar_volume.h5'\nf = wrl.util.get_wradlib_data_file(fpath)\nfcontent = wrl.io.read_opera_hdf5(f)",
"_____no_output_____"
]
],
[
[
"The user should inspect the output obtained from his or her hdf5 file in order to see how access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module:",
"_____no_output_____"
]
],
[
[
"# which keyswords can be used to access the content?\nprint(fcontent.keys())\n# print the entire content including values of data and metadata\n# (numpy arrays will not be entirely printed)\nprint(fcontent['dataset1/data1/data'])",
"_____no_output_____"
]
],
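[
[
"Since every key ending in ``/data`` points to a numpy array, the data arrays contained in a file can also be collected programmatically, e.g. with a small sketch like this:\n\n```python\n# keys of the output dictionary that point to binary data\ndata_keys = [key for key in fcontent.keys() if key.endswith('/data')]\nprint(data_keys)\n```",
"_____no_output_____"
]
],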
[
[
"Please note that in order to experiment with such datasets, you can download hdf5 sample data from the [OPERA](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) or use the example data provided with the [wradlib-data](https://github.com/wradlib/wradlib-data/) repository.",
"_____no_output_____"
]
],
[
[
"fig = pl.figure(figsize=(10, 10))\nim = wrl.vis.plot_ppi(fcontent['dataset1/data1/data'], fig=fig, proj='cg')",
"_____no_output_____"
]
],
[
[
"### GAMIC HDF5",
"_____no_output_____"
],
[
"GAMIC refers to the commercial [GAMIC Enigma MURAN software](https://www.gamic.com) which exports data in hdf5 format. The concept is quite similar to the above [OPERA HDF5 (ODIM_H5)](#OPERA-HDF5-(ODIM_H5)) format. Such a file (typical ending: *.mvol*) can be read by:",
"_____no_output_____"
]
],
[
[
"fpath = 'hdf5/2014-08-10--182000.ppi.mvol'\nf = wrl.util.get_wradlib_data_file(fpath)\ndata, metadata = wrl.io.read_gamic_hdf5(f)",
"_____no_output_____"
]
],
[
[
"While metadata represents the usual dictionary of metadata, the data variable is a dictionary which might contain several numpy arrays with the keywords of the dictionary indicating different moments.",
"_____no_output_____"
]
],
[
[
"print(metadata.keys())\nprint(metadata['VOL'])\nprint(metadata['SCAN0'].keys())",
"_____no_output_____"
],
[
"print(data['SCAN0'].keys())\nprint(data['SCAN0']['PHIDP'].keys())\nprint(data['SCAN0']['PHIDP']['data'].shape)",
"_____no_output_____"
],
[
"fig = pl.figure(figsize=(10, 10))\nim = wrl.vis.plot_ppi(data['SCAN0']['ZH']['data'], fig=fig, proj='cg')",
"_____no_output_____"
]
],
[
[
"### Generic HDF5",
"_____no_output_____"
],
[
"This is a generic hdf5 reader, which will read any hdf5 structure.",
"_____no_output_____"
]
],
[
[
"fpath = 'hdf5/2014-08-10--182000.ppi.mvol'\nf = wrl.util.get_wradlib_data_file(fpath)\nfcontent = wrl.io.read_generic_hdf5(f)",
"_____no_output_____"
],
[
"print(fcontent.keys())",
"_____no_output_____"
],
[
"print(fcontent['where'])\nprint(fcontent['how'])\nprint(fcontent['scan0/moment_3'].keys())\nprint(fcontent['scan0/moment_3']['attrs'])\nprint(fcontent['scan0/moment_3']['data'].shape)\n",
"_____no_output_____"
],
[
"fig = pl.figure(figsize=(10, 10))\nim = wrl.vis.plot_ppi(fcontent['scan0/moment_3']['data'], fig=fig, proj='cg')",
"_____no_output_____"
]
],
[
[
"## NetCDF",
"_____no_output_____"
],
[
"The NetCDF format also claims to be self-describing. However, as for all such formats, the developers of netCDF also admit that \"[...] the mere use of netCDF is not sufficient to make data self-describing and meaningful to both humans and machines [...]\" (see [here](https://www.unidata.ucar.edu/software/netcdf/documentation/historic/netcdf/Conventions.html). Different radar operators or data distributors will use different naming conventions and data hierarchies (i.e. \"data models\") that the reading program might need to know about.\n\n$\\omega radlib$ provides two solutions to address this challenge. The first one ignores the concept of data models and just pulls all data and metadata from a NetCDF file ([wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html). The second is designed for a specific data model used by the EDGE software ([wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html)).\n\n<div class=\"alert alert-warning\">\n\n**Note** <br>\n\nSince $\\omega radlib$ version 1.3 an [Cf/Radial](https://docs.wradlib.org/en/stable/generated/wradlib.io.xarray.CfRadial.html) reader for CF versions 1.X and 2 based on [Xarray](http://xarray.pydata.org/en/stable/) and [netcdf4](https://unidata.github.io/netcdf4-python/) is available. Please read the more indepth notebook [wradlib_xarray_radial_odim](wradlib_xarray_radial_odim.ipynb).\n\n</div>",
"_____no_output_____"
],
[
"### Generic NetCDF reader (includes CfRadial)",
"_____no_output_____"
],
[
"$\\omega radlib$ provides a function that will virtually read any NetCDF file irrespective of the data model: [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html). It is built upon Python's [netcdf4](https://unidata.github.io/netcdf4-python/) library. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) will return only one object, a dictionary, that contains all the contents of the NetCDF file corresponding to the original file structure. This includes all the metadata, as well as the so called \"dimensions\" (describing the dimensions of the actual data arrays) and the \"variables\" which will contains the actual data. Users can use this dictionary at will in order to query data and metadata; however, they should make sure to consider the documentation of the corresponding data model. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) has been shown to work with a lot of different data models, most notably **CfRadial** (see [here](https://www.ral.ucar.edu/projects/titan/docs/radial_formats/cfradial.html) for details). A typical call to [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) would look like:",
"_____no_output_____"
]
],
[
[
"fpath = 'netcdf/example_cfradial_ppi.nc'\nf = wrl.util.get_wradlib_data_file(fpath)\noutdict = wrl.io.read_generic_netcdf(f)\nfor key in outdict.keys():\n print(key)",
"_____no_output_____"
]
],
[
[
"Please see [this example notebook](wradlib_generic_netcdf_example.ipynb) to get started.",
"_____no_output_____"
],
[
"### EDGE NetCDF",
"_____no_output_____"
],
[
"EDGE is a commercial software for radar control and data analysis provided by the Enterprise Electronics Corporation. It allows for netCDF data export. The resulting files can be read by [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html), but $\\omega radlib$ also provides a specific function, [wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html) to return metadata and data as seperate objects:",
"_____no_output_____"
]
],
[
[
"fpath = 'netcdf/edge_netcdf.nc'\nf = wrl.util.get_wradlib_data_file(fpath) \ndata, metadata = wrl.io.read_edge_netcdf(f)\nprint(data.shape)\nprint(metadata.keys())",
"_____no_output_____"
]
],
[
[
"## Gematronik Rainbow",
"_____no_output_____"
],
[
"Rainbow refers to the commercial [RAINBOW®5 APPLICATION SOFTWARE](http://www.de.selex-es.com/capabilities/meteorology/products/components/rainbow5) which exports data in an XML flavour, which due to binary data blobs violates XML standard. Gematronik provided python code for implementing this reader in $\\omega radlib$, which is very much appreciated.\n\nThe philosophy behind the $\\omega radlib$ interface to Gematroniks data model is very straightforward: $\\omega radlib$ simply translates the complete xml file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the \"xml nodes\" and \"xml attributes\". Each ``data`` key points to a Dataset (i.e. a numpy array of data). Such a file (typical ending: *.vol* or *.azi*) can be read by:",
"_____no_output_____"
]
],
[
[
"fpath = 'rainbow/2013070308340000dBuZ.azi'\nf = wrl.util.get_wradlib_data_file(fpath)\nfcontent = wrl.io.read_rainbow(f)",
"_____no_output_____"
]
],
[
[
"The user should inspect the output obtained from his or her Rainbow file in order to see how access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module:",
"_____no_output_____"
]
],
[
[
"# which keyswords can be used to access the content?\nprint(fcontent.keys())\n# print the entire content including values of data and metadata\n# (numpy arrays will not be entirely printed)\nprint(fcontent['volume']['sensorinfo'])",
"_____no_output_____"
]
],
[
[
"You can check this [example notebook](wradlib_load_rainbow_example.ipynb) for getting a first impression.",
"_____no_output_____"
],
[
"## Vaisala Sigmet IRIS ",
"_____no_output_____"
],
[
"[IRIS](https://www.vaisala.com/en/products/instruments-sensors-and-other-measurement-devices/weather-radar-products/iris-focus) refers to the commercial Vaisala Sigmet **I**nteractive **R**adar **I**nformation **S**ystem. The Vaisala Sigmet Digital Receivers export data in a [well documented](ftp://ftp.sigmet.com/outgoing/manuals/IRIS_Programmers_Manual.pdf) binary format.\n\nThe philosophy behind the $\\omega radlib$ interface to the IRIS data model is very straightforward: $\\omega radlib$ simply translates the complete binary file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the Sigmet Data Structures. \n\nEach ``data`` key points to a Dataset (i.e. a numpy array of data). Such a file (typical ending: *.RAWXXXX) can be read by:",
"_____no_output_____"
]
],
[
[
"fpath = 'sigmet/cor-main131125105503.RAW2049'\nf = wrl.util.get_wradlib_data_file(fpath)\nfcontent = wrl.io.read_iris(f)",
"_____no_output_____"
],
[
"# which keywords can be used to access the content?\nprint(fcontent.keys())\n# print the entire content including values of data and \n# metadata of the first sweep\n# (numpy arrays will not be entirely printed)\nprint(fcontent['data'][1].keys())\nprint()\nprint(fcontent['data'][1]['ingest_data_hdrs'].keys())\nprint(fcontent['data'][1]['ingest_data_hdrs']['DB_DBZ'])\nprint()\nprint(fcontent['data'][1]['sweep_data'].keys())\nprint(fcontent['data'][1]['sweep_data']['DB_DBZ'])",
"_____no_output_____"
],
[
"fig = pl.figure(figsize=(10, 10))\nswp = fcontent['data'][1]['sweep_data']\nax, im = wrl.vis.plot_ppi(swp[\"DB_DBZ\"]['data'], fig=fig, proj='cg')",
"_____no_output_____"
]
],
[
[
"## OPERA BUFR",
"_____no_output_____"
],
[
"**WARNING** $\\omega radlib$ does currently not support the BUFR format!\n\nThe Binary Universal Form for the Representation of meteorological data (BUFR) is a binary data format maintained by the World Meteorological Organization (WMO).\n\nThe BUFR format was adopted by [OPERA](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) for the representation of weather radar data.\nA BUFR file consists of a set of *descriptors* which contain all the relevant metadata and a data section. \nThe *descriptors* are identified as a tuple of three integers. The meaning of these tupels is described in the so-called BUFR tables. There are generic BUFR tables provided by the WMO, but it is also possible to define so called *local tables* - which was done by the OPERA consortium for the purpose of radar data representation.\n \nIf you want to use BUFR files together with $\\omega radlib$, we recommend that you check out the [OPERA webpage](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) where you will find software for BUFR decoding. In particular, you might want to check out [this tool](http://eumetnet.eu/wp-content/uploads/2017/04/bufr_opera_mf.zip) which seems to support the conversion of OPERA BUFR files to ODIM_H5 (which is supported by $\\omega radlib$). However, you have to build it yourself.\n\nIt would be great if someone could add a tutorial on how to use OPERA BUFR software together with $\\omega radlib$!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c8a15132e70b5b02348378674db911fa53c85e | 750,005 | ipynb | Jupyter Notebook | Stock Project.ipynb | SPC7/dlensInternship | 5dfbb0f75ab1a9e9d52c014f176ea9f9790a6dd1 | [
"MIT"
] | 1 | 2021-06-24T20:58:12.000Z | 2021-06-24T20:58:12.000Z | Stock Project.ipynb | SPC7/dlensInternship | 5dfbb0f75ab1a9e9d52c014f176ea9f9790a6dd1 | [
"MIT"
] | null | null | null | Stock Project.ipynb | SPC7/dlensInternship | 5dfbb0f75ab1a9e9d52c014f176ea9f9790a6dd1 | [
"MIT"
] | null | null | null | 427.596921 | 121,572 | 0.93358 | [
[
[
"# Research",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport pandas_datareader as dr\nfrom pandas_datareader import data as web\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nimport numpy as np\nimport datetime\nimport mplfinance as mpl\nimport plotly.graph_objects as go\nimport plotly\nimport yfinance as yf",
"_____no_output_____"
]
],
[
[
"## Data Import",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('data/data2.csv', index_col='Symbol')",
"_____no_output_____"
]
],
[
[
"## Sorting Data",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
],
[
"isInfoTech = df['Sector']== 'Information Technology'",
"_____no_output_____"
],
[
"print(isInfoTech.head())",
"Symbol\nMMM False\nABT False\nABBV False\nABMD False\nACN True\nName: Sector, dtype: bool\n"
],
[
"df_InfoTech = df[isInfoTech]",
"_____no_output_____"
],
[
"df_InfoTech",
"_____no_output_____"
]
],
[
[
"## IBM INTEL NVIDIA",
"_____no_output_____"
]
],
[
[
"#looking at IBM,INTEL,NVIDIA,",
"_____no_output_____"
],
[
"start = datetime.datetime(2017,1,1)\nend = datetime.datetime(2021,6,22)",
"_____no_output_____"
],
[
"ibm = yf.download(\"IBM\",start, end)\nintel = yf.download(\"INTC\",start, end)\nnvidia = yf.download(\"NVDA\",start, end)\ntrch = yf.download(\"TRCH\",start, end)",
"[*********************100%***********************] 1 of 1 completed\n[*********************100%***********************] 1 of 1 completed\n[*********************100%***********************] 1 of 1 completed\n[*********************100%***********************] 1 of 1 completed\n"
],
[
"ibm.to_csv('IBM_STOCK.csv')\n#ibm stock\nintel.to_csv('INTC_STOCK.csv')\nnvidia.to_csv('NVDA_STOCK.csv')\ntrch.to_csv('TRCH_STOCK.csv')",
"_____no_output_____"
],
[
"ibm.head()\ntrch.tail()",
"_____no_output_____"
],
[
"intel.head()",
"_____no_output_____"
],
[
"nvidia.head()",
"_____no_output_____"
],
[
"ibm['Open'].plot(label='IBM',figsize=(15,7))\nintel['Open'].plot(label='Intel')\nnvidia['Open'].plot(label='Nvidia')\nplt.legend()\nplt.ylabel('Stock Price')\nplt.title('Stock Prices of IBM,Intel and Nvidia')",
"_____no_output_____"
]
],
[
[
"## Volumes",
"_____no_output_____"
]
],
[
[
"ibm['Volume'].plot(label='IBM',figsize=(15,7))\nintel['Volume'].plot(label='Intel')\nnvidia['Volume'].plot(label='Nvidia')\nplt.ylabel('Volume Traded')\nplt.title('Volumes of IBM, Intel and Nvidia')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## Total Traded / ~Market Cap",
"_____no_output_____"
]
],
[
[
"ibm['Total Traded'] = ibm['Open'] * ibm['Volume']\nintel['Total Traded'] = intel['Open'] * intel['Volume']\nnvidia['Total Traded'] = nvidia['Open'] * nvidia['Volume']",
"_____no_output_____"
],
[
"ibm['Total Traded'].plot(label=('IBM'),figsize=(15,7))\nintel['Total Traded'].plot(label=('Intel'))\nnvidia['Total Traded'].plot(label=('Nvidia'))\nplt.ylabel('Total Traded')\nplt.legend()\nplt.title('Total Traded for IBM, Intel, and Nvidia')",
"_____no_output_____"
]
],
[
[
"## 50 and 200 Day Rolling EMA",
"_____no_output_____"
]
],
[
[
"intel['Open'].plot(figsize=(15,7))\nintel['MA50']=intel['Open'].rolling(50).mean()\nintel['MA50'].plot(label='MA50')\nintel['MA200']=intel['Open'].rolling(200).mean()\nintel['MA200'].plot(label='MA200')\nplt.legend()\nplt.title('Intel Open, 50EMA, 200EMA')",
"_____no_output_____"
],
[
"ibm['Open'].plot(figsize=(15,7))\nibm['MA50']=ibm['Open'].rolling(50).mean()\nibm['MA50'].plot(label='MA50')\nibm['MA200']=ibm['Open'].rolling(200).mean()\nibm['MA200'].plot(label='MA200')\nplt.legend()\nplt.title('IBM Open, 50EMA, 200EMA')",
"_____no_output_____"
],
[
"nvidia['Open'].plot(figsize=(12,7))\nnvidia['MA50']=nvidia['Open'].rolling(50).mean()\nnvidia['MA50'].plot(label='MA50')\nnvidia['MA200']=nvidia['Open'].rolling(200).mean()\nnvidia['MA200'].plot(label='MA200')\nplt.legend()\nplt.title('Nvidia Open, 50EMA, 200EMA')",
"_____no_output_____"
],
[
"trch['Open'].plot(figsize=(10,7))\ntrch['MA50']=trch['Open'].rolling(50).mean()\ntrch['MA50'].plot(label='MA50')\ntrch['MA200']=trch['Open'].rolling(200).mean()\ntrch['MA200'].plot(label='MA200')\nplt.legend()\nplt.title('Torchlight Open, 50EMA, 200EMA')",
"_____no_output_____"
]
],
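[
[
"Note that `rolling(...).mean()` computes a simple moving average; if an exponential moving average (as the section title suggests) is wanted instead, pandas provides `ewm`. A possible sketch:\n\n```python\nintel['EMA50'] = intel['Open'].ewm(span=50, adjust=False).mean()\nintel['EMA200'] = intel['Open'].ewm(span=200, adjust=False).mean()\nintel[['Open', 'EMA50', 'EMA200']].plot(figsize=(15, 7))\n```",
"_____no_output_____"
]
],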
[
[
"## Time Series Analysis AutoCorrelation",
"_____no_output_____"
]
],
[
[
"def autocorr_daily(intel):\n \n returns = intel.pct_change()\n autocorrelation = returns['Adj Close'].autocorr()\n \n return autocorrelation\n\nautocorr_daily(intel)",
"_____no_output_____"
],
[
"autocorr_daily(ibm)",
"_____no_output_____"
],
[
"autocorr_daily(nvidia)",
"_____no_output_____"
],
[
"autocorr_daily(trch)",
"_____no_output_____"
]
],
[
[
"## Scatter Matrix Based off Open Price",
"_____no_output_____"
]
],
[
[
"from pandas.plotting import scatter_matrix",
"_____no_output_____"
],
[
"tech_comp = pd.concat([ibm['Open'],intel['Open'],nvidia['Open']],axis =1)\ntech_comp.columns = ['IBM Open','Intel Open','Nvidia Open']",
"_____no_output_____"
],
[
"scatter_matrix(tech_comp,figsize=(8,8),hist_kwds={'bins':50})",
"_____no_output_____"
]
],
[
[
"CandleStick Analysis",
"_____no_output_____"
],
[
"## CandleStick Analysis",
"_____no_output_____"
]
],
[
[
"candleIntel = intel.iloc[100:160]\nmpl.plot(candleIntel,type='candle',volume=True)\ncandleIBM = ibm.iloc[100:160]\nmpl.plot(candleIBM,type='candle',volume=True)\ncandleNvidia = nvidia.iloc[100:160]\nmpl.plot(candleNvidia,type='candle',volume=True)",
"_____no_output_____"
]
],
[
[
"## Monte Carlo Stock Price Predictor",
"_____no_output_____"
]
],
[
[
"monte_end = datetime.datetime.now()\nmonte_start = monte_end - datetime.timedelta(days=300)\n\n\n\nprices = yf.download(\"NVDA\",monte_start,monte_end)['Close']\nreturns = prices.pct_change()\nmeanReturns = returns.mean()\n\nlast_price = prices[-1]\n\nnum_sims = 100\nnum_days = 300\n\nsim_df = pd.DataFrame()\n\nfor x in range(num_sims):\n count = 0\n daily_volatility = returns.std()\n \n price_series = []\n \n price = last_price * (1 + np.random.normal(0,daily_volatility))\n price_series.append(price)\n\n \n for y in range(num_days):\n if count == 299:\n break\n price = price_series[count] * (1 + np.random.normal(0,daily_volatility))\n price_series.append(price)\n count += 1\n \n sim_df[x] = pd.Series(price_series)\n \n",
"[*********************100%***********************] 1 of 1 completed\n"
],
[
"fig = plt.figure()\nfig.suptitle('Monte Carlo Sim NVDA')\nplt.plot(sim_df)\nplt.axhline(y = last_price, color = 'lime',linestyle = '-')\nplt.xlabel('Days')\nplt.ylabel('Price')\nplt.show()\n",
"_____no_output_____"
],
[
"import plotly_express as px\nfig2 = px.line(sim_df)\nfig2.show()\n",
"_____no_output_____"
],
[
"pd.set_option('display.max_columns',100)\n\nsim_df\nsim_df.drop(index=1)",
"_____no_output_____"
],
[
"price_series",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0c8a2bb268052645d08de8acfeafbe0a35c5bd2 | 195,667 | ipynb | Jupyter Notebook | Notebooks/.ipynb_checkpoints/ciccione-checkpoint.ipynb | AndreaPrati98/EFSA_study | 2a5c44b58b4c471149b6ce2d1eb191347f0e69e9 | [
"MIT"
] | null | null | null | Notebooks/.ipynb_checkpoints/ciccione-checkpoint.ipynb | AndreaPrati98/EFSA_study | 2a5c44b58b4c471149b6ce2d1eb191347f0e69e9 | [
"MIT"
] | null | null | null | Notebooks/.ipynb_checkpoints/ciccione-checkpoint.ipynb | AndreaPrati98/EFSA_study | 2a5c44b58b4c471149b6ce2d1eb191347f0e69e9 | [
"MIT"
] | null | null | null | 39.616724 | 12,884 | 0.415287 | [
[
[
"<h1>SUBSET SELECTION</h1>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport scipy\nimport sklearn\nimport seaborn as sns\nimport xlrd\nimport time\nimport statsmodels.api as sm",
"_____no_output_____"
],
[
"data=pd.read_excel('Data/Mini Project EFSA.xlsx')\ndata.rename(columns={'sex \\n(0=M, 1=F)':'sex'}, inplace=True)\ndata",
"_____no_output_____"
],
[
"from funzioni import forward ",
"_____no_output_____"
]
],
[
[
"<h2>I dati sono le colonne originali</h2>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn.preprocessing import PolynomialFeatures\n\n#Prepare the datas\ny = data.response\nweights = data.SD\nX = data.drop(columns=[\"response\",\"SD\"])\n\n#Devo estrarre l'endpoint dalla matrice in modo da avere 2 variabili categoriche usate per fare i 3 endpoint\nendpoint1 = X['endpoint'] == 1\nendpoint2 = X['endpoint'] == 2\nX[\"endpoint1\"] = endpoint1.astype(\"int\")\nX[\"endpoint2\"] = endpoint2.astype(\"int\")\nX = X.drop(columns=[\"endpoint\"])\n#X[\"ones\"] = np.ones((X.shape[0],1)) \n\npoly = PolynomialFeatures(2)\nX_poly = poly.fit_transform(X)\ncols = poly.get_feature_names(X.columns)\nX = pd.DataFrame(X_poly, columns=cols)",
"_____no_output_____"
],
[
" X",
"_____no_output_____"
]
],
[
[
"<h1>2 - Use subset selection to estimate separate models for the 3 endpoints using gender as categorical variable.</h1>",
"_____no_output_____"
],
[
"<h1>3 - Use subset selection to estimate a unique model using gender and endpoint as categorical variables</h1>\n<h2>Forward solo con i predittori lineari</h2>",
"_____no_output_____"
]
],
[
[
"models_fwd = pd.DataFrame(columns=[\"RSS\", \"model\",\"number_of_predictors\"])\n\ntic = time.time()\npredictors = []\n\nfor i in range(1,len(X.columns)+1):\n models_fwd.loc[i] = forward(y,X,predictors,weights)\n predictors = models_fwd.loc[i][\"model\"].model.exog_names\n\ntoc = time.time()\nprint(\"Total elapsed time:\", (toc-tic), \"seconds.\")",
"_____no_output_____"
],
[
"display(models_fwd)",
"_____no_output_____"
],
[
"models_fwd.plot(x='number_of_predictors', y='RSS')",
"_____no_output_____"
],
[
" for i in range(0,models_fwd.shape[0]):\n print(models_fwd.iloc[i][\"model\"].model.exog_names)\n print()\n",
"['endpoint1']\n\n['endpoint1', 'sex endpoint1']\n\n['endpoint1', 'sex endpoint1', 'endpoint2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2', 'sex']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2', 'sex', 'endpoint2^2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2', 'sex', 'endpoint2^2', 'endpoint1^2']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of 
animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2', 'sex', 'endpoint2^2', 'endpoint1^2', '1']\n\n['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'number of animals dose', 'number of animals endpoint1', 'number of animals^2', 'sex endpoint2', 'dose endpoint2', 'number of animals endpoint2', 'endpoint1 endpoint2', 'sex', 'endpoint2^2', 'endpoint1^2', '1', 'sex^2']\n\n"
],
[
"res = models_fwd.iloc[5][\"model\"].model.fit()\n\nIn [7]: print(res.summary())",
" WLS Regression Results \n==============================================================================\nDep. Variable: response R-squared: 0.926\nModel: WLS Adj. R-squared: 0.906\nMethod: Least Squares F-statistic: 45.36\nDate: Wed, 25 Nov 2020 Prob (F-statistic): 1.40e-09\nTime: 17:08:09 Log-Likelihood: -129.54\nNo. Observations: 24 AIC: 271.1\nDf Residuals: 18 BIC: 278.1\nDf Model: 5 \nCovariance Type: nonrobust \n=====================================================================================\n coef std err t P>|t| [0.025 0.975]\n-------------------------------------------------------------------------------------\nendpoint1 406.6125 30.847 13.181 0.000 341.804 471.421\nendpoint2 25.7125 30.847 0.834 0.415 -39.096 90.521\nsex -74.9211 30.866 -2.427 0.026 -139.769 -10.074\nnumber of animals 2.5589 35.684 0.072 0.944 -72.411 77.529\ndose -0.1174 0.215 -0.547 0.591 -0.569 0.334\nones 38.5197 341.168 0.113 0.911 -678.248 755.287\n==============================================================================\nOmnibus: 3.801 Durbin-Watson: 0.420\nProb(Omnibus): 0.150 Jarque-Bera (JB): 1.471\nSkew: 0.099 Prob(JB): 0.479\nKurtosis: 1.803 Cond. No. 2.50e+03\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 2.5e+03. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n"
]
],
[
[
"<h2>Confrontiamo ora questi modelli con criteri oggettivi</h2>",
"_____no_output_____"
]
],
[
[
"for i in range(1, models_fwd.shape[0]):\n model = models_fwd.loc[i,\"model\"]\n models_fwd.loc[i,\"aic\"] = model.aic\n models_fwd.loc[i,\"bic\"] = model.bic\n models_fwd.loc[i,\"mse\"] = model.mse_total\n models_fwd.loc[i,\"adj_rsquare\"] = model.rsquared_adj\n ",
"_____no_output_____"
],
[
"models_fwd",
"_____no_output_____"
],
[
"#Quelli da minimizzare\nfor criteria in [\"bic\",\"aic\"]:\n print(\"The criteria is: \" + criteria)\n row = models_fwd.loc[models_fwd[criteria].argmin()]\n modelFeatures = row[\"model\"].model.exog_names\n if \"intercept\" not in modelFeatures:\n modelFeatures.append(\"intercept\")\n criteriaValue = row[criteria]\n degressOfFreedom = row[\"model\"].model.df_model\n print(\"Features: \"+str(modelFeatures))\n print(\"Criteria value: \"+str(criteriaValue))\n print(\"Degrees of freedom: \"+str(degressOfFreedom+1))\n print()\n \n \n#Quelli da massimizzare\nfor criteria in [\"adj_rsquare\"]:\n print(\"The criteria is: \" + criteria)\n row = models_fwd.loc[models_fwd[criteria].argmax()]\n modelFeatures = row[\"model\"].model.exog_names\n if \"intercept\" not in modelFeatures:\n modelFeatures.append(\"intercept\")\n criteriaValue = row[criteria]\n degressOfFreedom = row[\"model\"].model.df_model\n print(\"Features: \"+str(modelFeatures))\n print(\"Criteria value: \"+str(criteriaValue))\n print(\"Degrees of freedom: \"+str(degressOfFreedom+1))\n print()\n\n ",
"The criteria is: bic\nFeatures: ['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'intercept']\nCriteria value: 203.75512854134968\nDegrees of freedom: 5.0\n\nThe criteria is: aic\nFeatures: ['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'intercept']\nCriteria value: 199.04291321995788\nDegrees of freedom: 5.0\n\nThe criteria is: adj_rsquare\nFeatures: ['endpoint1', 'sex endpoint1', 'endpoint2', 'dose endpoint1', 'number of animals', 'dose sex', 'dose', 'number of animals sex', 'dose^2', 'intercept']\nCriteria value: 0.998825266762616\nDegrees of freedom: 10.0\n\n"
],
[
"from funzioni import CarloCrecco \nCarloCrecco()",
"Cuina!\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c8a702bb3fd5a93a4d816da8cd18c033dee01f | 860,227 | ipynb | Jupyter Notebook | Quantization/QuantizationError/QS_slides.ipynb | Jadouille/COM418 | 573bd44f3013949b5e69b61edab918e06cdc9679 | [
"CC0-1.0"
] | null | null | null | Quantization/QuantizationError/QS_slides.ipynb | Jadouille/COM418 | 573bd44f3013949b5e69b61edab918e06cdc9679 | [
"CC0-1.0"
] | null | null | null | Quantization/QuantizationError/QS_slides.ipynb | Jadouille/COM418 | 573bd44f3013949b5e69b61edab918e06cdc9679 | [
"CC0-1.0"
] | null | null | null | 455.387507 | 192,152 | 0.93971 | [
[
[
"<div align=\"right\"><i>COM418 - Computers and Music</i></div>\n<div align=\"right\"><a href=\"https://people.epfl.ch/paolo.prandoni\">Paolo Prandoni</a>, <a href=\"https://www.epfl.ch/labs/lcav/\">LCAV, EPFL</a></div>\n\n<p style=\"font-size: 30pt; font-weight: bold; color: #B51F1F;\">Non-Harmonic Distortion in a Quantized Sinusoid <br> (Tsividis' Paradox)</p>",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.signal as sp\nimport scipy.special as ss\nfrom scipy.io import wavfile\nimport IPython\nimport ipywidgets as widgets",
"_____no_output_____"
],
[
"plt.rcParams[\"figure.figsize\"] = (14,4)",
"_____no_output_____"
],
[
"# helper functions\n\ndef play_sound(SF, s, volume=1):\n # play a sound with a volume factor\n #x = np.copy(s) * volume\n return IPython.display.Audio(volume * s, rate=SF, normalize=False) \n\ndef multiplay(SF, clips, title=None, volume=1):\n outs = [widgets.Output() for c in clips]\n for ix, clip in enumerate(clips):\n with outs[ix]:\n print(title[ix] if title is not None else \"\")\n display(IPython.display.Audio(volume * clip, rate=SF, normalize=False))\n return widgets.HBox(outs)\n\ndef stem(x, color='tab:blue'):\n # stem with chosen color\n markerline, stemlines, baseline = plt.stem(x, use_line_collection=True, basefmt='k');\n markerline.set_color(color)\n stemlines.set_color(color)",
"_____no_output_____"
]
],
[
[
"# Quantization in A/D conversion",
"_____no_output_____"
],
[
"## The classic A/D converter\n\n * $x(t)$ bandlimited to $F_s/2$\n * sample at $F_s$ Hz\n * uniform quantization with $M$ levels\n\n<center>\n<img src=\"img/sbq.png\" style=\"width: 1200px;\"/> \n</center>",
"_____no_output_____"
],
[
"## Uniform scalar quantization\n\n * $M$-level uniform scalar quantizer: $q: \\mathbb{R} \\rightarrow \\{\\hat{x}_0, \\ldots, \\hat{x}_{M-1}\\}$\n \n * non-overload region: $[-1,1]$\n * quantization step: $\\Delta = 2/M$ ",
"_____no_output_____"
]
],
[
[
"def quantize(x, M):\n if M == 0:\n return x\n elif M % 2 == 0:\n # using a mid-riser quantizer\n M = M / 2\n k = np.floor(x * M)\n k = np.maximum(np.minimum(k, M-1), -M)\n return (k + 0.5) / M\n else:\n # using a deadzone quantizer\n k = np.round(np.abs(x) * M / 2)\n k = np.minimum((M - 1) / 2, k)\n return (np.sign(x) * k / M * 2 )",
"_____no_output_____"
],
[
"x = np.arange(-1, 1, .01)\nfor ix, M in enumerate([2, 3, 8]):\n plt.subplot(1, 3, ix+1)\n plt.plot(x,x); \n plt.plot(x, quantize(x, M), '.');",
"_____no_output_____"
]
],
[
[
"## High-resolution hypothesis\n\n<center>\n<img src=\"img/linearized.png\" style=\"width: 1200px;\"/> \n</center>\n\n * $e[n]$ white noise uncorrelated with $x[n]$\n * $\\sigma_e[n] = \\Delta^2/12$\n * $\\mathrm{SNR} = 6M~\\mathrm{dB}$\n ",
"_____no_output_____"
],
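[
"A quick numerical sanity check of the model (a sketch using the `quantize` helper defined above): in the high-resolution regime the empirical error variance should be close to $\\Delta^2/12$.\n\n```python\nM = 256                               # number of levels (assumed for the test)\nDelta = 2 / M\nx = np.random.uniform(-1, 1, 100000)  # input spanning many quantization levels\ne = quantize(x, M) - x                # quantization error\nprint(np.var(e), Delta ** 2 / 12)     # the two values should be close\n```",
"_____no_output_____"
],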
[
"# Tsividis' paradox\n\n\n<center><img src=\"img/sbq.png\" style=\"width: 800px;\"/></center>\n\n * sampling and quantization are memoryless: they can be swapped\n",
"_____no_output_____"
],
[
"\n\n * let's swap them:\n\n<center>\n<img src=\"img/sbq.png\" style=\"width: 800px;\"/> \n<img src=\"img/qbs.png\" style=\"width: 800px;\"/> \n</center>\n\n<center>\nbut $\\mathcal{Q}$ discontinuous so $\\hat{x}(t)$ no longer bandlimited $~~\\Longrightarrow~~$ aliasing!\n</center>",
"_____no_output_____"
],
[
"# Harmonic vs non-harmonic distortion\n\n * $x(t)$ periodic with period $T = 1/f_0$\n * instantaneous distortion function $r(\\cdot)$\n \nThe signal $r(x(t))$ will incur:\n * **harmonic distortion** if the spectral content at integer multiples of $f_0$ is modified <br />(typical of \"natural\" saturation/clipping)\n * **non-harmonic distortion** if spectral content appear elsewhere <br />(typical of aliasing)\n \nIn practice:\n * harmonic distortion: bearable, if we really have to\n * non-harmonic distortion: unbearable because totally unnatural",
"_____no_output_____"
],
[
"## Total Harmonic Distortion (THD)\n\nTHD quantifies harmonic distortion for sinusoidal inputs: $x(t) = \\sin(2\\pi f_0 t)$\n\nExpress $r(x(t))$ via its Fourier **series** since periodicity is preserved: $\\displaystyle r\\left(x(t)\\right) = \\sum_{k=-\\infty}^{\\infty} c_k\\, e^{-j2\\pi f_0 k t}$\n\n$$\n \\mathrm{THD} = \\sqrt{\\frac{\\sum_{k > 1} |c_k|^2}{|c_1|^2}}\n$$",
"_____no_output_____"
],
[
"Example: \n * $r(x) = \\mathrm{sgn}(x)$, from sinusoid to square wave (two-level quantization)\n * $\\displaystyle \\mathrm{sgn}\\left(\\sin(2\\pi f_0 t)\\right) = \\frac{4}{\\pi}\\sum_{k = 1}^{\\infty}\\frac{1}{2k-1}\\sin(2\\pi(2k-1) f_0 t)$\n\n$$\n \\mathrm{THD} = \\sqrt{\\sum_{k = 2}^{\\infty}\\left(\\frac{1}{2k-1}\\right)^2} = \\sqrt{\\frac{\\pi^2}{8}-1} \\approx 0.48.\n$$",
"_____no_output_____"
],
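[
"A quick numerical check of the value (a sketch):\n\n```python\nimport numpy as np\nk = np.arange(2, 200000)\nprint(np.sqrt(np.sum(1.0 / (2 * k - 1) ** 2)), np.sqrt(np.pi ** 2 / 8 - 1))\n```",
"_____no_output_____"
],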
[
"**Exercise:** prove the result",
"_____no_output_____"
],
[
"## Non-harmonic distortion: aliasing\n\nExample as before, but in discrete time:\n * $F_s > 2f_0$ \n * $\\omega_0 = f_0/Fs < \\pi$\n * $\\displaystyle \\mathrm{sgn}\\left(\\sin(\\omega_0 n)\\right) = \\frac{4}{\\pi}\\sum_{k = 1}^{\\infty}\\frac{1}{2k-1}\\sin((2k-1) \\omega_0 n)$\n * frequencies for $k > (1 + \\pi/\\omega_0) / 2$ will be aliased! \n",
"_____no_output_____"
],
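[
"A small numeric illustration (a sketch with the arbitrary choice $F_s = 8000$ Hz, $f_0 = 440$ Hz): only the first few odd harmonics lie below the Nyquist frequency, the others fold back to non-harmonic locations.\n\n```python\nimport numpy as np\nFs, f0 = 8000, 440\nw0 = 2 * np.pi * f0 / Fs\nprint('harmonics alias for k >', (1 + np.pi / w0) / 2)\nfor k in range(1, 8):\n    f = (2 * k - 1) * f0                         # frequency of the (2k-1)-th harmonic\n    f_alias = abs(((f + Fs / 2) % Fs) - Fs / 2)  # fold into [0, Fs/2]\n    print(k, f, round(f_alias, 1))\n```",
"_____no_output_____"
],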
[
"## Harmonic vs non-harmonic distortion: example\n\nprogressively harder clipping vs progressively coarser quantization",
"_____no_output_____"
]
],
[
[
"sf, f0, M = 8000, 440, 9\n\n# one second per clipping level\nw = 2 * np.pi * f0 / sf * np.arange(0, M * sf)\nx = np.sin(w)\nx_c, x_q = np.zeros(len(w)), np.zeros(len(w))\n\nfor n, level in enumerate(range(M, 1, -1)):\n s = slice(n * sf, (n + 1) * sf)\n # progessively harder clipping\n x_c[s] = np.clip(np.sin(w[s]), -level/M, level/M) * M / level\n # progressively coarser quantization\n x_q[s] = quantize(np.sin(w[s]), 2 ** level)\n \nmultiplay(sf, (x_c, x_q), ('clipping', 'quantization'), volume=0.3) ",
"_____no_output_____"
]
],
[
[
"## Aside: non-harmonic distortion due to intermodulation\n\nWhen more than a single sinusoid is considered, things get complicated quickly\n * $r(x) = \\sum_{n=0}^{\\infty} a_n \\, x^n$ (Taylor series expansion)\n * $\\sin^n \\alpha = \\gamma_0 + \\sum_{k=1}^{n} \\gamma_k \\sin k\\alpha$ \n * $\\sin \\alpha \\sin \\beta = \\mu_0 \\sin(\\alpha + \\beta) + \\mu_1 \\sin(\\alpha - \\beta)$\n \n$$\n r\\left(\\sin(2\\pi f_0 t) + \\sin(2\\pi f_1 t)\\right) = \\ldots = \\sum_{k_0, k_1 = -\\infty}^{\\infty} b_{k_0, k_1} \\sin(2\\pi (k_0 f_0 + k_1 f_1) t)\n$$",
"_____no_output_____"
]
],
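[
[
"A tiny sketch listing a few low-order intermodulation products $k_0 f_0 + k_1 f_1$ for two arbitrary tones:\n\n```python\nf0, f1 = 440.0, 660.0\nprods = sorted({abs(k0 * f0 + k1 * f1) for k0 in range(-3, 4) for k1 in range(-3, 4)} - {0.0})\nprint(prods[:10])  # low-order intermodulation products, in Hz\n```",
"_____no_output_____"
]
],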
[
[
"for n, level in enumerate(range(M, 1, -1)):\n s = slice(n * sf, (n + 1) * sf)\n x_c[s] = np.clip((np.sin(w[s]) + np.sin(1.5 * w[s])) / 2, -level/M, level/M) * M / level\n\nplay_sound(sf, x_c, volume=0.3) ",
"_____no_output_____"
]
],
[
[
"# Ravel's Bolero",
"_____no_output_____"
],
[
"## An impressive dynamic range\n\n<center>\n<img width=\"800\" src=\"img/bolero_diff.jpg\">\n</center>\n",
"_____no_output_____"
]
],
[
[
"clips = {}\nfor name in ['boleroA', 'boleroM', 'boleroZ']:\n sf, audio = wavfile.read('snd/' + name + '.wav')\n clips['sf'], clips[name] = sf, audio / 32767.0\n\nmultiplay(clips['sf'], [clips['boleroA'], clips['boleroZ']], ['beginning, full res', 'ending, full res'])",
"_____no_output_____"
]
],
[
[
"<center>\n<img width=\"1200\" src=\"img/bolero_wav.png\">\n</center>",
"_____no_output_____"
],
[
"\n * live performances have an dynamic range of 100dBs or more\n * 16-bit audio covers about 96dBs\n * ... but vinyl is no better: about 70dB dynamic range",
"_____no_output_____"
],
[
"## Aside: oreloB\n\n<img width=\"480\" style=\"float: right;\" src=\"img/orelob.jpg\">\n\nBolero is much louder at the end but vinyls suffer from _end of side_ distortion:\n * rotational speed constant, but inner grooves shorter\n * reading speed gets slower\n * recorded wavelengths become shorter<br/> and comparable to stylus size\n * groove slope gets too steep for tracking\n \nSolution: oreloB, a vinyl that plays backwards",
"_____no_output_____"
],
[
"## Quantizing the Bolero\n\n<img width=\"600\" style=\"float: right;\" src=\"img/bolero_wav.png\">\n\n * clearly the beginning spans a much smaller<br />number of quantization levels than the end\n * the high-resolution hypothesis may not hold",
"_____no_output_____"
]
],
[
[
"levels=[2 ** 16, 2 ** 8]\nmultiplay(clips['sf'], [quantize(clips['boleroM'], m) for m in levels], [f'middle, {m}-level quantization' for m in levels])",
"_____no_output_____"
],
[
"levels=[2 ** 16, 2 ** 8]\nmultiplay(clips['sf'], [quantize(clips['boleroA'], m) for m in levels], [f'beginning, {m}-level quantization' for m in levels])",
"_____no_output_____"
]
],
[
[
"# Numerical Experiments",
"_____no_output_____"
],
[
"## Sampling a sine wave with rational normalized frequency\n\n(the opening flute in the Bolero is close to a pure sinusoid)\n\n * conventional setup: sampling followed by quantization\n * $x(t) = \\sin(2\\pi f_0 t)$, sampled at $F_s$ and $f_0 = \\frac{A}{B}F_s$ with $A$ and $B$ coprime <br/>\n \n * $x[n] = \\sin\\left(2\\pi\\frac{A}{B}n\\right)$\n \n \n * $x[n]$ will be periodic with period $B$ and it will span $A$ cycles over $B$ samples\n * natural Fourier representation: DFS $\\mathbf{X}\\in \\mathbb{C}^B$\n * single nonzero coefficient $X[A]$",
"_____no_output_____"
]
],
[
[
"def quantized_sinusoid(A, B, M=0, initial_phase=1):\n # add an initial phase non commensurable with pi to eliminate quantization of zero values\n x = np.sin(initial_phase + 2 * np.pi * ((A * np.arange(0, B)) % B) / B)\n qx = quantize(x, M)\n return {\n 'original' : x, \n 'quantized' : qx, \n # square magnitude of the normalized DFS for positive frequencies\n 'DFS' : (np.abs(np.fft.fft(qx))[:int(np.ceil(B/2))] / B ) ** 2 \n }\n\nstem(quantized_sinusoid(3, 17)['DFS'])",
"_____no_output_____"
]
],
[
[
"## Introducing quantization\n\n * $\\mathbf{x} \\rightarrow \\hat{\\mathbf{x}}$\n * $\\hat{\\mathbf{x}}$ still periodic with a period of $B$ samples\n \n \nDistortion:\n * harmonic distortion will affects the DFS coefficient whose index is a multiple of $A$\n * non-harmonic distortion will affect the other coefficients\n \n \nFirst note in the Bolero is a $C_5$, i.e. 523.25Hz. \n\nAt $F_s=44.1$KHz we can pick $B=257$ and $A=3$. ",
"_____no_output_____"
]
],
[
[
"def find_nhd(A, dfs, full=False):\n # zero out harmonic components to highlight non-harmonic content\n N = int(np.ceil(len(dfs) / 2)) if full else len(dfs)\n nhd = np.copy(dfs[:N])\n nhd[::A] = 0\n return max(nhd), nhd",
"_____no_output_____"
],
[
"def show_nhd(A=3, B=257, M=2):\n s = quantized_sinusoid(A, B, int(M))\n peak, nhd = find_nhd(A, s['DFS'])\n \n plt.subplot(1, 2, 1) \n plt.plot(s['original']);\n plt.plot(s['quantized']);\n plt.title('signal')\n \n plt.subplot(1, 2, 2) \n stem(s['DFS'])\n plt.title('DFS')\n \n plt.figure()\n stem(nhd)\n plt.ylim(0, 0.0002)\n plt.title('non-harmonic components, max=' + str(peak))",
"_____no_output_____"
],
[
"display(widgets.interactive(show_nhd, M=widgets.Dropdown(options=['2', '3', '4', '128' ]), A=(1, 11), B=widgets.fixed(257)))",
"_____no_output_____"
]
],
[
[
"## Searching for the worst case\n\n * try to get a sense for how bad non-harmonic distortion can get\n * let's iterate over all non-reducible $A/B$ ratios between $0$ and $1/2$ \n \n \n**Farey sequence** of order $N$ is the sequence of _non-reducible_ fractions in the unit interval with denominator smaller or equal than $N$",
"_____no_output_____"
]
],
[
[
"def farey_sequence(n):\n \"\"\"Build the order-N Farey sequence up to 1/2.\"\"\"\n farey = []\n (a, b, c, d) = (0, 1, 1, n)\n while (c <= n):\n k = (n + b) // d\n (a, b, c, d) = (c, d, k * c - a, k * d - b)\n farey.append((a, b))\n if a/b >= 0.5:\n break\n return farey\n\n\nfor (a, b) in farey_sequence(50):\n plt.plot(b, a, 'o', color=plt.cm.tab20b(a % 20))",
"_____no_output_____"
],
[
"def find_max_nhd(N, M=2, parametric=False):\n max_value = (0, 0, 0)\n for (A, B) in farey_sequence(N):\n peak, _ = find_nhd(A, quantized_sinusoid(A, B, M)['DFS'])\n plt.plot(B if parametric else (A / B), peak, 'o', color=plt.cm.tab20b(A % 20))\n if peak > max_value[0]:\n max_value = (peak, A, B)\n plt.title(f'max value is {max_value[0]}, frequency {max_value[1]}/{max_value[2]}')",
"_____no_output_____"
]
],
[
[
"## Non-harmonic distortion for Farey ratios\n\nMaximum square magnitude of non-harmonic DFS coefficient as a function of $B$ and parametrized in $A$\n\n",
"_____no_output_____"
]
],
[
[
"find_max_nhd(100, 2, parametric=True)",
"_____no_output_____"
],
[
"find_max_nhd(100, 3, parametric=True)",
"_____no_output_____"
],
[
"find_max_nhd(100, 32768, parametric=True)",
"_____no_output_____"
]
],
[
[
"Let's also look at the non-parametrized plots. \n\nThe reason for the step-ladder patterns will be hopefully clear by the end.",
"_____no_output_____"
]
],
[
[
"find_max_nhd(150, 2)",
"_____no_output_____"
],
[
"find_max_nhd(150, 3)",
"_____no_output_____"
]
],
[
[
"# Theoretical Analysis",
"_____no_output_____"
],
[
"## Some DSP archeology \n\nHere is a really interesting paper from 1947\n\n<center>\n<img width=\"1200\" src=\"img/cpg_title.jpg\">\n</center>",
"_____no_output_____"
],
[
"For context, in 1947 this was happening\n\n<br /><br />\n\n<center>\n<img width=\"800\" src=\"img/transistor.jpg\"> \n</center>",
"_____no_output_____"
],
[
"My second favorite quote of the paper:\n\n<center>\n<img width=\"600\" src=\"img/cpg_quote2.jpg\">\n</center>",
"_____no_output_____"
],
[
"My favorite quote of all time:\n<br /><br /><br />\n\n<center>\n<img width=\"600\" src=\"img/cpg_quote.jpg\">\n</center>",
"_____no_output_____"
],
[
"### Quantization before sampling \n\n\n<center>\n<img src=\"img/qbs.png\" style=\"width: 800px;\"/> \n</center>\n",
"_____no_output_____"
]
],
[
[
"t = np.arange(0, 2 * np.pi, 0.001)\nplt.plot(t, quantize(np.sin(t), 15));",
"_____no_output_____"
]
],
[
[
"<img width=\"300\" style=\"float: right; margin: 10px;\" src=\"img/clavier.jpg\">\n\nFundamental idea: \n\n\n * decompose this piecewise-constant periodic <br /> waveform as the sum of $N$ pairs of rectangular steps <br /> of appropriate width\n * express $\\hat{x}(t)$ using a Fourier series expansion:\n\n<br /><br />\n$$\n \\hat{x}(t) = \\sum_{h=1}^{N} \\sum_{k=0}^{\\infty} \\frac{4}{\\pi N (2k+1)} \\cos\\left[(2k+1)\\arcsin\\left(\\frac{2h-1}{2N}\\right) \\right]\\sin((2k+1)t)\n$$",
"_____no_output_____"
]
],
[
[
"def quantized_sinusoid_fs(N, terms=1000):\n t = np.arange(0, 2 * np.pi, 0.001)\n x = np.zeros(len(t))\n for h in range(1, N):\n for k in range(0, terms):\n x = x + np.cos((2 * k + 1) * np.arcsin((2 * h - 1) / N / 2)) * np.sin((2 * k + 1) * t) / (2 * k + 1)\n x = x * 4 / np.pi / N\n return t, x\n\n\nplt.plot(*quantized_sinusoid_fs(8));",
"_____no_output_____"
]
],
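[
[
"# A quick numerical sanity check (an added sketch, not part of the original deck): quantize a\n# *densely* sampled sine -- a stand-in for the continuous-time waveform -- and verify that only\n# the odd harmonics of the fundamental carry energy.\n# Assumes the quantize() and stem() helpers defined earlier in this notebook.\nP, cycles = 64, 32                  # 64 samples per period, 32 exact periods\nn = np.arange(P * cycles)\nxq = quantize(np.sin(0.1 + 2 * np.pi * n / P), 2)\nX = (np.abs(np.fft.rfft(xq)) / len(n)) ** 2\nharmonics = X[::cycles][:16]        # square magnitude at DC and the first 15 harmonics\nstem(harmonics)\nprint('even harmonics (should be ~0):', np.round(harmonics[2::2], 12))",
"_____no_output_____"
]
],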
[
[
"### Fundamental intuition:\n\n * $q(\\sin(t))$ **contains harmonics at all odd multiples of the fundamental frequency** \n * quantization of a continuous-time sine wave produces only harmonic distortion\n \n \n \n * NHD is given by spectral lines beyond the Nyquist frequency aliased by the sampler",
"_____no_output_____"
],
[
"## More recent times\n\nMoving on to Robert Gray's 1990 paper [\"Quantization Noise Spectra\"](https://ieeexplore.ieee.org/document/59924).",
"_____no_output_____"
],
[
"### The normalized quantization error\n\n * consider the expression for the _normalized quantization error_\n$$\n \\eta(x) = \\frac{q(x) - x}{\\Delta} = \\frac{q(x) - x}{2/M} \\quad \\in [-0.5, 0.5].\n$$\n * $\\eta(x)$ is a **periodic** function with period $M/2$",
"_____no_output_____"
]
],
[
[
"x = np.arange(-1, 1, .001)\nfor ix, M in enumerate([2, 3, 8]):\n plt.subplot(1, 3, ix+1)\n e = (quantize(x, M) - x) / (2 / M)\n plt.plot(x, e); \n plt.plot(x, e, '.');",
"_____no_output_____"
]
],
[
[
" * $\\eta(x)$ can be expressed as a Fourier Series\n$$\n \\eta(x) = \\sum_{k=1}^{\\infty} \\frac{(-1)^{kM}}{\\pi k}\\sin\\left(\\pi k M x\\right)\n$$\n * $(-1)^{kM}$ is identically one for mid-riser quantizers and alternates in sign for deadzone quantizers.",
"_____no_output_____"
]
],
[
[
"def nqe_fs(x, M, terms=1000):\n e = np.zeros(len(x))\n s = [1, -1 if M % 2 == 1 else 1]\n for k in range(1, terms):\n e = e + s[k % 2] * np.sin(np.pi * k * x * M) / (np.pi * k)\n return x, e",
"_____no_output_____"
],
[
"for ix, M in enumerate([2, 3, 8]):\n plt.subplot(1, 3, ix+1)\n plt.plot(*nqe_fs(np.arange(-1, 1, .01), M))",
"_____no_output_____"
]
],
[
[
"### Quantization noise for a sinusoidal input\n\n * back to sampling followed by quantization\n * $x[n] = \\sin(\\omega_0 n + \\theta)$ with $0 \\le \\omega_0 < 2\\pi$. \n * $\\eta[n] = \\eta(\\sin(\\omega_0 n + \\theta))$ and we are interested in computing its spectrum. \n\n\n * using complex exponentials for the Fourier series:\n$$\n \\eta(x) = \\sum_{k \\neq 0} \\frac{(-1)^{kM}}{j2\\pi k}e^{j\\pi k M x}.\n$$\n\nNow we need to replace $x$ by $\\sin(\\omega_0 n + \\theta)$ and we end up with terms of the form $e^{j \\alpha \\sin \\beta}$; these can be expanded in terms of Bessel functions using the so-called Jacobi-Anger formula:\n\n$$\n e^{j \\alpha \\sin \\omega} = \\sum_{m=-\\infty}^{\\infty} J_m(\\alpha)e^{j\\omega m}.\n$$",
"_____no_output_____"
],
[
"Bessel functions are even or odd according to whether their order is even or odd, so:\n\n$$\n\\begin{align*}\n \\eta[n] = \\eta(\\sin(\\omega_0 n + \\theta)) &= \\sum_{k \\neq 0} \\frac{(-1)^{kM}}{j2\\pi k}e^{j\\pi k M \\sin(\\omega_0 n + \\theta)} \\\\\n &= \\sum_{k \\neq 0} \\frac{(-1)^{kM}}{j2\\pi k} \\sum_{m=-\\infty}^{\\infty} J_m(\\pi k M)e^{j (2m+1)\\theta} e^{j (2m+1)\\omega_0 n} \\\\\n &= \\sum_{m=-\\infty}^{\\infty} \\left[ e^{j (2m+1)\\theta} \\sum_{k = 1}^{\\infty} \\frac{(-1)^{kM}}{j\\pi k}J_{2m+1}(\\pi k M) \\right] e^{j (2m+1)\\omega_0 n} \\\\ \\\\\n &= \\sum_{\\varphi \\in \\Omega(\\omega_0)} b(\\varphi) e^{j \\varphi n}\n\\end{align*} \n$$",
"_____no_output_____"
],
[
"$$\n \\eta[n] = \\sum_{\\varphi \\in \\Omega(\\omega_0)} b(\\varphi) e^{j \\varphi n}\n$$\n\n * $\\Omega(\\omega_0) = \\{(2m+1)\\omega_0 \\mod 2\\pi\\}_{m \\in \\mathbb{Z}}$, i.e., all the odd multiples of the fundamental frequency aliased over the $[0, 2\\pi]$ interval;\n \n \n * for each frequency $\\varphi \\in \\Omega(\\omega_0)$:\n * $I(\\varphi) = \\{m \\in \\mathbb{Z} | (2m+1)\\omega_0 \\equiv \\varphi \\mod 2\\pi\\}$\n \n * $\\displaystyle b(\\varphi) = \\sum_{m \\in I(\\varphi)} \\left[ e^{j (2m+1)\\theta} \\sum_{k = 1}^{\\infty} \\frac{(-1)^{kM}}{j\\pi k}J_{2m+1}(\\pi k M) \\right]$",
"_____no_output_____"
],
[
"### PSD of the error\n\n$$\n P_{\\omega_0}(e^{j\\omega}) = \\sum_{\\varphi \\in \\Omega(\\omega_0)} |b(\\varphi)|^2 \\delta(\\omega - \\varphi).\n$$",
"_____no_output_____"
],
[
"### Case 1: rational normalized frequency\n\nAssume $\\omega_0 = 2\\pi(A/B)$, with $A$ and $B$ coprime, as in the numerical experiments\n\n * the set $\\Omega(\\omega_0)$ is finite: <br /> $\\displaystyle\\Omega\\left(2\\pi\\frac{A}{B}\\right) = \\left\\{\\frac{2i\\pi}{B}\\right\\}_i, \\quad \\begin{cases}\n i = 0, 1, 2, \\ldots, B-1 & \\mbox{if $A$ or $B$ even} \\\\\n i = 1, 3, 5, \\ldots, B-1 & \\mbox{if $A$ and $B$ odd} \n \\end{cases}$\n \n\n * $\\displaystyle I\\left(\\frac{2i\\pi}{B}\\right) = \\{i[A]^{-1}_{B} + pB\\}_{p \\in \\mathbb{Z}}$",
"_____no_output_____"
],
[
"The quantization error's PSD:\n\n * contains a finite number of spectral lines at multiples of $2\\pi/B$ \n * the power associated to each line $|b(2i\\pi/B)|^2$ should correspond to the square magnitude of the $i$-th coefficient of the $B$-point DFS of the error signal.\n\n\n\nThe following function computes an approximation of the coefficients $|b(2i\\pi/B)|^2$ for $\\omega_0 = 2\\pi(A/B)$, scaled to represent the non-normalized quantization error:",
"_____no_output_____"
]
],
[
[
"def nqe_sin_psd(A, B, M, phase=1):\n s = [1, -1 if M % 2 == 1 else 1]\n b = np.zeros(B, dtype=complex)\n m_lim, k_lim = max(1500, 2 * B), 600\n for m in range(-m_lim, m_lim):\n c = 0\n for k in range(1, k_lim):\n c += s[k % 2] * ss.jv(2 * m + 1, np.pi * k * M) / k\n c /= 1j * np.pi\n b[((2 * m + 1) * A) % B] += c * np.exp(1j * phase * (2 * m + 1))\n # undo error normalization to obtain the real error PSD\n b = np.abs(b * (2 / M)) ** 2\n print('Max NHD (theory): ', find_nhd(A, b, full=True)[0])\n return b",
"_____no_output_____"
],
[
"def nqe_sin_dfs(A, B, M, phase=1):\n s = quantized_sinusoid(A, B, M, phase)\n ne = (s['quantized'] - s['original']) \n b = np.abs(np.fft.fft(ne / B)) ** 2\n print('Max NHD (FFT): ', find_nhd(A, b, full=True)[0])\n return b",
"_____no_output_____"
],
[
"P = (3, 8, 2)\nstem(nqe_sin_psd(*P), 'tab:green')\nstem(nqe_sin_dfs(*P), 'tab:red')",
"Max NHD (theory): 0.018326485641112476\nMax NHD (FFT): 0.01830582617584078\n"
],
[
"P = (5, 14, 3)\nstem(nqe_sin_psd(*P), 'tab:green')\nstem(nqe_sin_dfs(*P), 'tab:red')",
"Max NHD (theory): 0.014076477031128238\nMax NHD (FFT): 0.014262281437017315\n"
]
],
[
[
"### Case 2: irrational normalized frequency\n\nAssume $\\omega_0$ not a rational multiple of $2\\pi$ \n\n * the normalized frequency $\\nu = \\omega_0/(2\\pi)$ will be an irrational number in $[0, 1)$\n \n * the set of _normalized_ frequencies $\\Omega'(\\nu) = \\{(2m+1)\\nu \\mod 1\\}_{m \\in \\mathbb{Z}} = \\{ \\langle (2m+1)\\nu \\rangle\\}_{m \\in \\mathbb{Z}}$\n\n\nWeil's Equidistribution theorem shows that $\\Omega'(\\nu)$ cover the entire $[0, 1]$ interval _uniformly_. ",
"_____no_output_____"
]
],
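[
[
"# A small illustration (an added sketch, not part of the original deck) of the equidistribution\n# claim above: for an irrational normalized frequency nu, the fractional parts of (2m+1)*nu\n# spread uniformly over [0, 1). Here nu = 1/sqrt(13) is just an arbitrary irrational choice.\nnu = 1 / np.sqrt(13)\nm = np.arange(20000)\nfrac = ((2 * m + 1) * nu) % 1\nplt.hist(frac, bins=50, density=True);\nplt.title('fractional parts of (2m+1) nu for irrational nu');",
"_____no_output_____"
]
],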
[
[
"P = (150, 1021, 2)\nstem(nqe_sin_psd(*P), 'tab:green')\nstem(nqe_sin_dfs(*P), 'tab:red')",
"Max NHD (theory): 0.004830057419924902\nMax NHD (FFT): 0.00405292728721497\n"
]
],
[
[
"# Back to the non-harmonic distortion patterns\n\nRecall the plot of the maximum non-harmonic distortion as a function of normalized frequency and its curious \"stepladder\" pattern:",
"_____no_output_____"
]
],
[
[
"find_max_nhd(150, 2)",
"_____no_output_____"
]
],
[
[
"Consider the non-normalized quantization error for a sinusoid of frequency $\\omega_0 = 2\\pi\\nu$, with $0 < \\nu < 1/2$:\n\n$$\n\\begin{align*}\n \\frac{2}{M}\\, \\eta(\\sin(2\\pi\\nu n))&= \\sum_{m=-\\infty}^{\\infty} \\left[ \\frac{2}{M}\\sum_{k = 1}^{\\infty} \\frac{(-1)^{kM}}{j\\pi k}J_{2m+1}(\\pi k M) \\right] e^{j 2\\pi(2m+1)\\nu n} \\\\ \n &= \\sum_{m=-\\infty}^{\\infty} c_M(m)\\, e^{j 2\\pi(2m+1)\\nu n};\n\\end{align*} \n$$\n\n * for $(2m+1)\\nu < 1/2$ the PSD lines are harmonically related to the fundamental\n * for $(2m+1)\\nu > 1/2$ we have aliasing and potentially non-harmonic distortion\n ",
"_____no_output_____"
],
[
"$$\n \\frac{2}{M}\\, \\eta(\\sin(2\\pi\\nu n)) = \\sum_{m=-\\infty}^{\\infty} c_M(m)\\, e^{j 2\\pi(2m+1)\\nu n}\n$$\n\n\n * the coefficients $c_M(m)$ depend only on the number of quantization levels $M$\n * $|c_M(m)|^2$ decreases rather quickly with $m$:",
"_____no_output_____"
]
],
[
[
"def c_m(N, M=2):\n k_lim = 600000\n s = [1, -1 if M % 2 == 1 else 1]\n c = np.zeros(N, dtype=complex)\n for m in range(0, N):\n for k in range(1, k_lim):\n c[m] += s[k % 2] * ss.jv(2 * m + 1, np.pi * k * M) / k\n c[m] /= 1j * np.pi\n return np.abs(c * (2 / M)) ** 2",
"_____no_output_____"
],
[
"c2 = c_m(20, 2)\nstem(c2)",
"_____no_output_____"
]
],
[
[
" * the max NHD is dominated by the first aliased component:<br />max NHD is $|c_M(m_0)|^2$ where $m_0$ is the minimum integer for which $(2m_0+1)\\nu > 1/2$. \n\n\n\n * for $\\nu > 1/6$, NHD $\\approx |c_M(1)|^2$\n * for $1/10 < \\nu < 1/6$, NHD $\\approx |c_M(2)|^2$\n * ...",
"_____no_output_____"
]
],
[
[
"find_max_nhd(150, 2)\nfor m in range(1, 5):\n plt.plot([0.5/(2*m+1), 0.5/(2*m+1)], [0, 0.015], color=plt.cm.tab10(m))\n plt.plot([0, 0.5], [c2[m], c2[m]], color=plt.cm.tab10(m))",
"_____no_output_____"
]
],
[
[
"What about $M=3$ ? \n\n * $c_3(2) \\approx 0$\n * $c_3(m)$ non-monotonic\n * NHD approx the same for $1/18 < \\nu < 1/6$.",
"_____no_output_____"
]
],
[
[
"c3 = c_m(20, 3)\nstem(c3)",
"_____no_output_____"
],
[
"find_max_nhd(150, 3)\nfor m in range(1, 5):\n plt.plot([0.5/(2*m+1), 0.5/(2*m+1)], [0, 0.015], color=plt.cm.tab10(m))\n plt.plot([0, 0.5], [c3[m], c3[m]], color=plt.cm.tab10(m))",
"_____no_output_____"
]
],
[
[
"# COnclusionL Does all of this matter?\n\nYes and no:\n * it's important to understand the consequences of quantization\n * **dithering** techniques solve most of the problems we've seen here",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0c8b80c68ad6f462ebfea197e5091348be8936f | 27,547 | ipynb | Jupyter Notebook | notebooks/.ipynb_checkpoints/large-dataset-testing-checkpoint.ipynb | AndreCNF/data-utils | e90aff358a041a0ef608bb5ca916cd8ca2ecd613 | [
"MIT"
] | null | null | null | notebooks/.ipynb_checkpoints/large-dataset-testing-checkpoint.ipynb | AndreCNF/data-utils | e90aff358a041a0ef608bb5ca916cd8ca2ecd613 | [
"MIT"
] | 2 | 2020-06-23T02:15:09.000Z | 2021-09-08T01:43:53.000Z | notebooks/large-dataset-testing.ipynb | AndreCNF/data-utils | e90aff358a041a0ef608bb5ca916cd8ca2ecd613 | [
"MIT"
] | null | null | null | 28.546114 | 201 | 0.483719 | [
[
[
"# Large dataset testing\n---\n\nChecking if the new large dataset class, which lazily loads batch files instead of diving a giant pre-loaded one, works well to train my models.",
"_____no_output_____"
],
[
"## Importing the necessary packages",
"_____no_output_____"
]
],
[
[
"import os # os handles directory/workspace changes\nimport comet_ml # Comet.ml can log training metrics, parameters, do version control and parameter optimization\nimport torch # PyTorch to create and apply deep learning models\n# import modin.pandas as pd # Optimized distributed version of Pandas\nimport pandas as pd # Pandas to load and handle the data\nimport numpy as np # NumPy to handle numeric and NaN operations\nimport getpass # Get password or similar private inputs\nfrom ipywidgets import interact # Display selectors and sliders",
"_____no_output_____"
],
[
"os.chdir('..')\nimport data_utils as du # Data science and machine learning relevant methods\nos.chdir('notebooks/')",
"_____no_output_____"
],
[
"du.set_random_seed(42)",
"_____no_output_____"
],
[
"# Debugging packages\nimport pixiedust # Debugging in Jupyter Notebook cells",
"_____no_output_____"
],
[
"# Path to the parquet dataset files\ndata_path = 'dummy_data/'\n# Path to the code files\nproject_path = ''",
"_____no_output_____"
],
[
"import Models # Machine learning models\nimport utils # Context specific (in this case, for the eICU data) methods",
"_____no_output_____"
],
[
"du.set_pandas_library(lib='pandas')",
"_____no_output_____"
]
],
[
[
"## Initializing variables",
"_____no_output_____"
],
[
"Comet ML settings:",
"_____no_output_____"
]
],
[
[
"comet_ml_project_name = input('Comet ML project name:')\ncomet_ml_workspace = input('Comet ML workspace:')\ncomet_ml_api_key = getpass.getpass('Comet ML API key')",
"_____no_output_____"
]
],
[
[
"Dataset parameters:",
"_____no_output_____"
]
],
[
[
"dataset_mode = None # The mode in which we'll use the data, either one hot encoded or pre-embedded\nml_core = None # The core machine learning type we'll use; either traditional ML or DL\nuse_delta_ts = None # Indicates if we'll use time variation info\ntime_window_h = None # Number of hours on which we want to predict mortality\nalready_embedded = None # Indicates if categorical features are already embedded when fetching a batch\n@interact\ndef get_dataset_mode(data_mode=['one hot encoded', 'learn embedding', 'pre-embedded'], \n ml_or_dl=['deep learning', 'machine learning'],\n use_delta=[False, 'normalized', 'raw'], window_h=(0, 96, 24)):\n global dataset_mode, ml_core, use_delta_ts, time_window_h, already_embedded\n dataset_mode, ml_core, use_delta_ts, time_window_h = data_mode, ml_or_dl, use_delta, window_h\n already_embedded = dataset_mode == 'embedded'",
"_____no_output_____"
],
[
"id_column = 'patientunitstayid' # Name of the sequence ID column\nts_column = 'ts' # Name of the timestamp column\nlabel_column = 'label' # Name of the label column\nn_ids = 6 # Total number of sequences\nn_inputs = 9 # Number of input features\nn_outputs = 1 # Number of outputs\npadding_value = 999999 # Padding value used to fill in sequences up to the maximum sequence length",
"_____no_output_____"
]
],
[
[
"Data types:",
"_____no_output_____"
]
],
[
[
"dtype_dict = dict(patientunitstayid='uint',\n ts='uint',\n int_col='Int32',\n float_col='float32',\n cat_1_bool_1='UInt8',\n cat_1_bool_2='UInt8',\n cat_2_bool_1='UInt8',\n cat_3_bool_1='UInt8',\n cat_3_bool_2='UInt8',\n cat_3_bool_3='UInt8',\n cat_3_bool_4='UInt8',\n death_ts='Int32')",
"_____no_output_____"
]
],
[
[
"One hot encoding columns categorization:",
"_____no_output_____"
]
],
[
[
"cat_feat_ohe = dict(cat_1=['cat_1_bool_1', 'cat_1_bool_2'], \n cat_2=['cat_2_bool_1'], \n cat_3=['cat_3_bool_1', 'cat_3_bool_2', 'cat_3_bool_3', 'cat_3_bool_4'])\ncat_feat_ohe",
"_____no_output_____"
],
[
"list(cat_feat_ohe.keys())",
"_____no_output_____"
]
],
[
[
"Training parameters:",
"_____no_output_____"
]
],
[
[
"test_train_ratio = 0.25 # Percentage of the data which will be used as a test set\nvalidation_ratio = 1/3 # Percentage of the data from the training set which is used for validation purposes\nbatch_size = 2 # Number of unit stays in a mini batch\nn_epochs = 1 # Number of epochs\nlr = 0.001 # Learning rate",
"_____no_output_____"
]
],
[
[
"Testing parameters:",
"_____no_output_____"
]
],
[
[
"metrics = ['loss', 'accuracy', 'AUC', 'AUC_weighted']",
"_____no_output_____"
]
],
[
[
"## Creating large dummy data",
"_____no_output_____"
],
[
"Create each individual column as a NumPy array:",
"_____no_output_____"
]
],
[
[
"patientunitstayid_col = np.concatenate([np.repeat(1, 25), \n np.repeat(2, 17), \n np.repeat(3, 56), \n np.repeat(4, 138), \n np.repeat(5, 2000), \n np.repeat(6, 4000), \n np.repeat(7, 6000),\n np.repeat(8, 100000)])\npatientunitstayid_col",
"_____no_output_____"
],
[
"ts_col = np.concatenate([np.arange(25), \n np.arange(17), \n np.arange(56), \n np.arange(138), \n np.arange(2000), \n np.arange(4000), \n np.arange(6000),\n np.arange(100000)])\nts_col",
"_____no_output_____"
],
[
"int_col = np.random.randint(0, 50, size=(112236))\nnp.random.shuffle(int_col)\nint_col",
"_____no_output_____"
],
[
"float_col = np.random.uniform(3, 15, size=(112236))\nnp.random.shuffle(float_col)\nfloat_col",
"_____no_output_____"
],
[
"cat_1_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_1_bool_1)\ncat_1_bool_1",
"_____no_output_____"
],
[
"cat_1_bool_2 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_1_bool_2)\ncat_1_bool_2",
"_____no_output_____"
],
[
"cat_2_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_2_bool_1)\ncat_2_bool_1",
"_____no_output_____"
],
[
"cat_3_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_3_bool_1)\ncat_3_bool_1",
"_____no_output_____"
],
[
"cat_3_bool_2 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_3_bool_2)\ncat_3_bool_2",
"_____no_output_____"
],
[
"cat_3_bool_3 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_3_bool_3)\ncat_3_bool_3",
"_____no_output_____"
],
[
"cat_3_bool_4 = np.concatenate([np.random.randint(0, 2, size=(112236))])\nnp.random.shuffle(cat_3_bool_4)\ncat_3_bool_4",
"_____no_output_____"
],
[
"death_ts = np.concatenate([np.random.randint(0, 1000, size=(22236)), np.repeat(np.nan, 90000)])\nnp.random.shuffle(death_ts)\ndeath_ts",
"_____no_output_____"
],
[
"data = np.column_stack([patientunitstayid_col, ts_col, int_col, float_col, cat_1_bool_1, \n cat_1_bool_2, cat_2_bool_1, cat_3_bool_1, \n cat_3_bool_2, cat_3_bool_3, cat_3_bool_4,\n death_ts])\ndata",
"_____no_output_____"
]
],
[
[
"Create a pandas dataframe with all the columns:",
"_____no_output_____"
]
],
[
[
"data_df = pd.DataFrame(data, columns=['patientunitstayid', 'ts', 'int_col', 'float_col', 'cat_1_bool_1', \n 'cat_1_bool_2', 'cat_2_bool_1', 'cat_3_bool_1', \n 'cat_3_bool_2', 'cat_3_bool_3', 'cat_3_bool_4',\n 'death_ts'])\ndata_df",
"_____no_output_____"
],
[
"data_df.dtypes",
"_____no_output_____"
],
[
"data_df = du.utils.convert_dtypes(data_df, dtypes=dtype_dict, inplace=True)",
"_____no_output_____"
],
[
"data_df.dtypes",
"_____no_output_____"
]
],
[
[
"Save in batch files:",
"_____no_output_____"
]
],
[
[
"du.data_processing.save_chunked_data(data_df, file_name='dmy_large_data', batch_size=1,\n id_column=id_column, data_path=data_path)",
"_____no_output_____"
],
[
"pd.read_feather(f'{data_path}dmy_large_data_2.ftr')",
"_____no_output_____"
]
],
[
[
"## Defining the dataset object",
"_____no_output_____"
]
],
[
[
"dataset = du.datasets.Large_Dataset(files_name='dmy_large_data', process_pipeline=utils.eICU_process_pipeline,\n id_column=id_column, initial_analysis=utils.eICU_initial_analysis, \n files_path=data_path, dataset_mode=dataset_mode, ml_core=ml_core, \n use_delta_ts=use_delta_ts, time_window_h=time_window_h, total_length=100000,\n padding_value=padding_value, cat_feat_ohe=cat_feat_ohe, dtype_dict=dtype_dict)",
"_____no_output_____"
],
[
"# Make sure that we discard the ID, timestamp and label columns\nif n_inputs != dataset.n_inputs:\n n_inputs = dataset.n_inputs\n print(f'Changed the number of inputs to {n_inputs}')\nelse:\n n_inputs",
"_____no_output_____"
],
[
"if dataset_mode == 'learn embedding':\n embed_features = dataset.embed_features\n n_embeddings = dataset.n_embeddings\nelse:\n embed_features = None\n n_embeddings = None\nprint(f'Embedding features: {embed_features}')\nprint(f'Number of embeddings: {n_embeddings}')",
"_____no_output_____"
],
[
"dataset.__len__()",
"_____no_output_____"
],
[
"dataset.bool_feat",
"_____no_output_____"
]
],
[
[
"## Separating into train and validation sets",
"_____no_output_____"
]
],
[
[
"(train_dataloader, val_dataloader, test_dataloader,\ntrain_indeces, val_indeces, test_indeces) = du.machine_learning.create_train_sets(dataset,\n test_train_ratio=test_train_ratio,\n validation_ratio=validation_ratio,\n batch_size=batch_size,\n get_indices=True,\n num_workers=2)",
"_____no_output_____"
],
[
"if ml_core == 'deep learning':\n # Ignore the indeces, we only care about the dataloaders when using neural networks\n del train_indeces\n del val_indeces\n del test_indeces\nelse:\n # Get the full arrays of each set\n train_features, train_labels = dataset.X[train_indeces], dataset.y[train_indeces]\n val_features, val_labels = dataset.X[val_indeces], dataset.y[val_indeces]\n test_features, test_labels = dataset.X[test_indeces], dataset.y[test_indeces]\n # Ignore the dataloaders, we only care about the full arrays when using scikit-learn or XGBoost\n del train_dataloaders\n del val_dataloaders\n del test_dataloaders",
"_____no_output_____"
],
[
"if ml_core == 'deep learning':\n print(next(iter(train_dataloader))[0])\nelse:\n print(train_features[:32])",
"_____no_output_____"
],
[
"next(iter(train_dataloader))[0].shape",
"_____no_output_____"
],
[
"if ml_core == 'deep learning':\n print(next(iter(val_dataloader))[0])\nelse:\n print(val_features[:32])",
"_____no_output_____"
],
[
"if ml_core == 'deep learning':\n print(next(iter(test_dataloader))[0])\nelse:\n print(test_features[:32])",
"_____no_output_____"
],
[
"next(iter(test_dataloader))[0].shape",
"_____no_output_____"
]
],
[
[
"## Training models",
"_____no_output_____"
],
[
"### Vanilla RNN",
"_____no_output_____"
],
[
"#### Creating the model",
"_____no_output_____"
],
[
"Model parameters:",
"_____no_output_____"
]
],
[
[
"n_hidden = 10 # Number of hidden units\nn_layers = 3 # Number of LSTM layers\np_dropout = 0.2 # Probability of dropout\nembedding_dim = [3, 2, 4] # List of embedding dimensions",
"_____no_output_____"
],
[
"if use_delta_ts == 'normalized':\n # Count the delta_ts column as another feature, only ignore ID, timestamp and label columns\n n_inputs = dataset.n_inputs + 1\nelif use_delta_ts == 'raw':\n raise Exception('ERROR: When using a model of type Vanilla RNN, we can\\'t use raw delta_ts. Please either normalize it (use_delta_ts = \"normalized\") or discard it (use_delta_ts = False).')",
"_____no_output_____"
]
],
[
[
"Instantiating the model:",
"_____no_output_____"
]
],
[
[
"model = Models.VanillaRNN(n_inputs, n_hidden, n_outputs, n_layers, p_dropout,\n embed_features=embed_features, n_embeddings=n_embeddings, \n embedding_dim=embedding_dim, total_length=100000)\nmodel",
"_____no_output_____"
]
],
[
[
"Define the name that will be given to the models that will be saved:",
"_____no_output_____"
]
],
[
[
"model_name = 'rnn'\nif dataset_mode == 'pre-embedded':\n model_name = model_name + '_pre_embedded'\nelif dataset_mode == 'learn embedding':\n model_name = model_name + '_with_embedding'\nelif dataset_mode == 'one hot encoded':\n model_name = model_name + '_one_hot_encoded'\nif use_delta_ts is not False:\n model_name = model_name + '_delta_ts'\nmodel_name",
"_____no_output_____"
]
],
[
[
"#### Training and testing the model",
"_____no_output_____"
]
],
[
[
"next(model.parameters())",
"_____no_output_____"
],
[
"model = du.deep_learning.train(model, train_dataloader, val_dataloader, test_dataloader, dataset=dataset,\n padding_value=padding_value, batch_size=batch_size, n_epochs=n_epochs, lr=lr,\n models_path=f'{project_path}models/', model_name=model_name, ModelClass=Models.VanillaRNN,\n is_custom=False, do_test=True, metrics=metrics, log_comet_ml=False,\n already_embedded=already_embedded)",
"_____no_output_____"
],
[
"next(model.parameters())",
"_____no_output_____"
]
],
[
[
"#### Hyperparameter optimization",
"_____no_output_____"
]
],
[
[
"config_name = input('Hyperparameter optimization configuration file name:')",
"_____no_output_____"
],
[
"val_loss_min, exp_name_min = du.machine_learning.optimize_hyperparameters(Models.VanillaRNN, \n train_dataloader=train_dataloader, \n val_dataloader=val_dataloader, \n test_dataloader=test_dataloader, \n dataset=dataset,\n config_name=config_name,\n comet_ml_api_key=comet_ml_api_key,\n comet_ml_project_name=comet_ml_project_name,\n comet_ml_workspace=comet_ml_workspace,\n n_inputs=n_inputs, id_column=id_column,\n inst_column=ts_column,\n id_columns_idx=[0, 1],\n n_outputs=n_outputs, model_type='multivariate_rnn',\n is_custom=False, models_path='models/',\n model_name=model_name,\n array_param='embedding_dim',\n metrics=metrics,\n config_path=f'{project_path}notebooks/sandbox/',\n var_seq=True, clip_value=0.5, \n padding_value=padding_value,\n batch_size=batch_size, n_epochs=n_epochs,\n lr=lr, \n comet_ml_save_model=True,\n embed_features=embed_features,\n n_embeddings=n_embeddings)",
"_____no_output_____"
],
[
"exp_name_min",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c8dbb6b9d5de1e479cede90a83d3a8023c2e45 | 222,784 | ipynb | Jupyter Notebook | GDC_release_24/QC- ETL of GDC release 24 MMRF clincal tables.ipynb | madelyngreyes/ETLNextGenQCNotebooks | 5bed62e15c02769ccc6aa80ee3add2a15b0d7c2c | [
"Apache-2.0"
] | null | null | null | GDC_release_24/QC- ETL of GDC release 24 MMRF clincal tables.ipynb | madelyngreyes/ETLNextGenQCNotebooks | 5bed62e15c02769ccc6aa80ee3add2a15b0d7c2c | [
"Apache-2.0"
] | null | null | null | GDC_release_24/QC- ETL of GDC release 24 MMRF clincal tables.ipynb | madelyngreyes/ETLNextGenQCNotebooks | 5bed62e15c02769ccc6aa80ee3add2a15b0d7c2c | [
"Apache-2.0"
] | null | null | null | 222,784 | 222,784 | 0.603468 | [
[
[
"**QC of ETL starting with GDC release 24 clinical tables**\n\nThis notebook focuses on the QC of program **MMRF** data_category clinical\n\nThis program has a total of five clinical tables present in this release\n\nTables listed below ---\n\n- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`\n\n- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`\n\n- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`\n\n- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`\n\n- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`\n\n\n",
"_____no_output_____"
],
[
"##QC table checklist \n\nMultiple one-to-many tables present QC list\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\n[ISB-CGC BigQuery table search test tier](https://isb-cgc-test.appspot.com/bq_meta_search/)\n\nRun a manual check in the console with the steps mentioned in step 1.\n\n*Note from developer:\nThere are some columns which are sparsely populated (so they might look empty if you’re just scrolling through the table in the GUI), but there should be at least one non-null entry for every column in every table.*\n\n**4. Number of case_id versus BigQuery metadata table**\n\n**5.Check for any duplicate rows present in the table**\n\n**7. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
],
[
"##Reference material\n\n\n\n* [NextGenETL](https://github.com/isb-cgc/NextGenETL) GitHub repository\n* [ETL QC SOP draft](https://docs.google.com/document/d/1Wskf3BxJLkMjhIXD62B6_TG9h5KRcSp8jSAGqcCP1lQ/edit)",
"_____no_output_____"
],
[
"##Before you begin\n\nYou need to load the BigQuery module, authenticate ourselves, create a client variable, and load the necessary libraries.\n",
"_____no_output_____"
]
],
[
[
"from google.colab import auth\ntry:\n auth.authenticate_user()\n print('You have been successfully authenticated!')\nexcept:\n print('You have not been authenticated.')",
"You have been successfully authenticated!\n"
],
[
"from google.cloud import bigquery\ntry:\n project_id = 'isb-project-zero' # Update your_project_number with your project number\n client = bigquery.Client(project=project_id)\n print('BigQuery client successfully initialized')\nexcept:\n print('Failed')",
"BigQuery client successfully initialized\n"
],
[
"#Install pypika to build a Query \n!pip install pypika\n# Import from PyPika\nfrom pypika import Query, Table, Field, Order\n\nimport pandas",
"Collecting pypika\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ea/22/63a4b2194462c54de8450de3d61eb44eddc2e7a85b06792603af09c606e1/PyPika-0.37.7.tar.gz (53kB)\n\r\u001b[K |██████▏ | 10kB 12.7MB/s eta 0:00:01\r\u001b[K |████████████▍ | 20kB 1.8MB/s eta 0:00:01\r\u001b[K |██████████████████▌ | 30kB 2.3MB/s eta 0:00:01\r\u001b[K |████████████████████████▊ | 40kB 2.5MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 51kB 2.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 61kB 1.8MB/s \n\u001b[?25hBuilding wheels for collected packages: pypika\n Building wheel for pypika (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pypika: filename=PyPika-0.37.7-py2.py3-none-any.whl size=42747 sha256=fc94bedb575e5d75a3ac8c210f081b1939c750bd382c8faaa98be611a8a19777\n Stored in directory: /root/.cache/pip/wheels/40/b2/20/cf67d3c67186b46241b5069c93da2c9beedbb3f08dba75fffe\nSuccessfully built pypika\nInstalling collected packages: pypika\nSuccessfully installed pypika-0.37.7\n"
]
],
[
[
"## READY TO BEGIN TESTING",
"_____no_output_____"
],
[
"##Clin MMRF \n\n**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF`\n\n[Table location](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel23_clin_MMRF&page=table)\n\nSource : GDC API\n\nRelease version : v24\n",
"_____no_output_____"
],
[
"###test 1 - schema verification\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields\n \nAre the labels correct\n\nGoogle documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).\n\nGoogle documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).",
"_____no_output_____"
]
],
[
[
"#return all table information for rel24_clin_MMRF\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES')\nclin_query = Query.from_(clin_table) \\\n .select(' table_catalog, table_schema, table_name, table_type ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF') \\\n \nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nclin.head()",
"_____no_output_____"
],
[
"#return all table information for rel24_clin_MMRF\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['option_name'][i] + '\\n')\n print('\\t' + clin['option_value'][i] + '\\n')\n print('\\t' + clin['option_type'][i] + '\\n')\n\nelse:\n\n print('QC of friendly name, table description and labels --- FAILED')",
"QC of friendly name, table description and labels --- FAILED\n"
],
[
"#check for empty schemas in dataset rel24_clin_MMRF\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nclin.empty",
"Are there any empty cells in the table schema?\n"
]
],
[
[
"FIELD Descriptions pulled example below\n",
"_____no_output_____"
]
],
[
[
"#list of field descriptions for table rel24_clin_MMRF\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['table_name'][i] + '\\n')\n print('\\t' + clin['column_name'][i] + '\\n')\n print('\\t' + clin['description'][i] + '\\n')",
"rel24_clin_MMRF\n\n\tsubmitter_id\n\n\t\n\nrel24_clin_MMRF\n\n\tcase_id\n\n\t\n\nrel24_clin_MMRF\n\n\tdiag__treat__count\n\n\tTotal child record count (located in cases table).\n\nrel24_clin_MMRF\n\n\tfam_hist__count\n\n\tTotal child record count (located in cases table).\n\nrel24_clin_MMRF\n\n\tfollow__count\n\n\tTotal child record count (located in cases table).\n\nrel24_clin_MMRF\n\n\tprimary_site\n\n\t\n\nrel24_clin_MMRF\n\n\tdisease_type\n\n\t\n\nrel24_clin_MMRF\n\n\tindex_date\n\n\t\n\nrel24_clin_MMRF\n\n\tproj__name\n\n\tDisplay name for the project\n\nrel24_clin_MMRF\n\n\tproj__project_id\n\n\t\n\nrel24_clin_MMRF\n\n\tdemo__demographic_id\n\n\t\n\nrel24_clin_MMRF\n\n\tdemo__gender\n\n\tText designations that identify gender. Gender is described as the assemblage of properties that distinguish people on the basis of their societal roles. [Explanatory Comment 1: Identification of gender is based upon self-report and may come from a form, questionnaire, interview, etc.]\n\nrel24_clin_MMRF\n\n\tdemo__race\n\n\tAn arbitrary classification of a taxonomic group that is a division of a species. It usually arises as a consequence of geographical isolation within a species and is characterized by shared heredity, physical attributes and behavior, and in the case of humans, by common history, nationality, or geographic distribution. The provided values are based on the categories defined by the U.S. Office of Management and Business and used by the U.S. Census Bureau.\n\nrel24_clin_MMRF\n\n\tdemo__ethnicity\n\n\tAn individual's self-described social and cultural grouping, specifically whether an individual describes themselves as Hispanic or Latino. The provided values are based on the categories defined by the U.S. Office of Management and Business and used by the U.S. 
Census Bureau.\n\nrel24_clin_MMRF\n\n\tdemo__vital_status\n\n\tThe survival state of the person registered on the protocol.\n\nrel24_clin_MMRF\n\n\tdemo__days_to_birth\n\n\tNumber of days between the date used for index and the date from a person's date of birth represented as a calculated negative number of days.\n\nrel24_clin_MMRF\n\n\tdemo__age_at_index\n\n\tThe patient's age (in years) on the reference or anchor date date used during date obfuscation.\n\nrel24_clin_MMRF\n\n\tdemo__days_to_death\n\n\tNumber of days between the date used for index and the date from a person's date of death represented as a calculated number of days.\n\nrel24_clin_MMRF\n\n\tdemo__cause_of_death\n\n\tText term to identify the cause of death for a patient.\n\nrel24_clin_MMRF\n\n\tdemo__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF\n\n\tdemo__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF\n\n\tdemo__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF\n\n\tdiag__diagnosis_id\n\n\tReference to ancestor diag__diagnosis_id, located in rel24_clin_MMRF_diag.\n\nrel24_clin_MMRF\n\n\tdiag__primary_diagnosis\n\n\tText term used to describe the patient's histologic diagnosis, as described by the World Health Organization's (WHO) International Classification of Diseases for Oncology (ICD-O).\n\nrel24_clin_MMRF\n\n\tdiag__days_to_last_known_disease_status\n\n\tTime interval from the date of last follow up to the date of initial pathologic diagnosis, represented as a calculated number of days.\n\nrel24_clin_MMRF\n\n\tdiag__progression_or_recurrence\n\n\tYes/No/Unknown indicator to identify whether a patient has had a new tumor event after initial treatment.\n\nrel24_clin_MMRF\n\n\tdiag__site_of_resection_or_biopsy\n\n\tThe text term used to describe the anatomic site of origin, of the patient's malignant disease, as described by the World Health Organization's (WHO) International Classification of Diseases for Oncology (ICD-O).\n\nrel24_clin_MMRF\n\n\tdiag__age_at_diagnosis\n\n\tAge at the time of diagnosis expressed in number of days since birth.\n\nrel24_clin_MMRF\n\n\tdiag__days_to_last_follow_up\n\n\tTime interval from the date of last follow up to the date of initial pathologic diagnosis, represented as a calculated number of days.\n\nrel24_clin_MMRF\n\n\tdiag__tumor_grade\n\n\tNumeric value to express the degree of abnormality of cancer cells, a measure of differentiation and aggressiveness.\n\nrel24_clin_MMRF\n\n\tdiag__last_known_disease_status\n\n\tText term that describes the last known state or condition of an individual's neoplasm.\n\nrel24_clin_MMRF\n\n\tdiag__morphology\n\n\tThe third edition of the International Classification of Diseases for Oncology, published in 2000 used principally in tumor and cancer registries for coding the site (topography) and the histology (morphology) of neoplasms. The study of the structure of the cells and their arrangement to constitute tissues and, finally, the association among these to form organs. In pathology, the microscopic process of identifying normal and abnormal morphologic characteristics in tissues, by employing various cytochemical and immunocytochemical stains. A system of numbered categories for representation of data.\n\nrel24_clin_MMRF\n\n\tdiag__tumor_stage\n\n\tThe extent of a cancer in the body. 
Staging is usually based on the size of the tumor, whether lymph nodes contain cancer, and whether the cancer has spread from the original site to other parts of the body. The accepted values for tumor_stage depend on the tumor site, type, and accepted staging system. These items should accompany the tumor_stage value as associated metadata.\n\nrel24_clin_MMRF\n\n\tdiag__iss_stage\n\n\tThe multiple myeloma disease stage at diagnosis.\n\nrel24_clin_MMRF\n\n\tdiag__tissue_or_organ_of_origin\n\n\tThe text term used to describe the anatomic site of origin, of the patient's malignant disease, as described by the World Health Organization's (WHO) International Classification of Diseases for Oncology (ICD-O).\n\nrel24_clin_MMRF\n\n\tdiag__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF\n\n\tdiag__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF\n\n\tdiag__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF\n\n\tstate\n\n\t\n\nrel24_clin_MMRF\n\n\tcreated_datetime\n\n\t\n\nrel24_clin_MMRF\n\n\tupdated_datetime\n\n\t\n\n"
],
[
"# check for empty schemas in dataset rel24_clin_MMRF \n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nprint(clin)",
"Are there any empty cells in the table schema?\n table_name ... description\n0 rel24_clin_MMRF ... \n1 rel24_clin_MMRF ... \n2 rel24_clin_MMRF ... Total child record count (located in cases tab...\n3 rel24_clin_MMRF ... Total child record count (located in cases tab...\n4 rel24_clin_MMRF ... Total child record count (located in cases tab...\n5 rel24_clin_MMRF ... \n6 rel24_clin_MMRF ... \n7 rel24_clin_MMRF ... \n8 rel24_clin_MMRF ... Display name for the project\n9 rel24_clin_MMRF ... \n10 rel24_clin_MMRF ... \n11 rel24_clin_MMRF ... Text designations that identify gender. Gender...\n12 rel24_clin_MMRF ... An arbitrary classification of a taxonomic gro...\n13 rel24_clin_MMRF ... An individual's self-described social and cult...\n14 rel24_clin_MMRF ... The survival state of the person registered on...\n15 rel24_clin_MMRF ... Number of days between the date used for index...\n16 rel24_clin_MMRF ... The patient's age (in years) on the reference ...\n17 rel24_clin_MMRF ... Number of days between the date used for index...\n18 rel24_clin_MMRF ... Text term to identify the cause of death for a...\n19 rel24_clin_MMRF ... The current state of the object.\n20 rel24_clin_MMRF ... A combination of date and time of day in the f...\n21 rel24_clin_MMRF ... A combination of date and time of day in the f...\n22 rel24_clin_MMRF ... Reference to ancestor diag__diagnosis_id, loca...\n23 rel24_clin_MMRF ... Text term used to describe the patient's histo...\n24 rel24_clin_MMRF ... Time interval from the date of last follow up ...\n25 rel24_clin_MMRF ... Yes/No/Unknown indicator to identify whether a...\n26 rel24_clin_MMRF ... The text term used to describe the anatomic si...\n27 rel24_clin_MMRF ... Age at the time of diagnosis expressed in numb...\n28 rel24_clin_MMRF ... Time interval from the date of last follow up ...\n29 rel24_clin_MMRF ... Numeric value to express the degree of abnorma...\n30 rel24_clin_MMRF ... Text term that describes the last known state ...\n31 rel24_clin_MMRF ... The third edition of the International Classif...\n32 rel24_clin_MMRF ... The extent of a cancer in the body. Staging is...\n33 rel24_clin_MMRF ... The multiple myeloma disease stage at diagnosis.\n34 rel24_clin_MMRF ... The text term used to describe the anatomic si...\n35 rel24_clin_MMRF ... The current state of the object.\n36 rel24_clin_MMRF ... A combination of date and time of day in the f...\n37 rel24_clin_MMRF ... A combination of date and time of day in the f...\n38 rel24_clin_MMRF ... \n39 rel24_clin_MMRF ... \n40 rel24_clin_MMRF ... \n\n[41 rows x 3 columns]\n"
]
],
[
[
"###test 2 - row number verification\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(submitter_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT *\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`",
"_____no_output_____"
]
],
[
[
"###test 3 - manual verification\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\nISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/).\n\nBigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF&page=table).\n\nRun a manual check in the console with the steps mentioned in step 1 \n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?\n\n*Note from developer:\nThere are some columns which are sparsely populated (so they might look empty if you’re just scrolling through the table in the GUI), but there should be at least one non-null entry for every column in every table.*",
"_____no_output_____"
],
[
"###test 4 - case_gdc_id file metadata table count verification\n\n**4. Number of case_id versus BigQuery metadata table**\n\n",
"_____no_output_____"
]
],
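[
[
"# test 3 helper (an added sketch, not part of the original checklist): instead of only eyeballing\n# the table in the console, programmatically confirm that every column has at least one non-null\n# entry. Assumes the BigQuery client created above; the table name is the table under test.\ntable_id = 'isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF'\nqc_df = client.query('SELECT * FROM `' + table_id + '`').to_dataframe()\nempty_columns = [col for col in qc_df.columns if qc_df[col].isna().all()]\nif len(empty_columns) == 0:\n    print('Every column has at least one non-null entry --- PASSED')\nelse:\n    print('Columns with no non-null entries --- FAILED: ' + ', '.join(empty_columns))",
"_____no_output_____"
]
],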
[
[
"# clinical case_id counts table reuslts below\n\n# Query below will display the number of cases presents in this table.\n\nclin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`')\nclin_query = Query.from_(clin_table) \\\n .select(' DISTINCT case_id, count(*) as count') \\\n .groupby('case_id')\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\n#print(clin_query_clean)\nclin = client.query(clin_query_clean).to_dataframe()\nprint('number of case from submitter_id = ' + str(len(clin.index)))\n",
"number of case from submitter_id = 995\n"
],
[
"# GDC file metadata table case_gdc_id count for clinical below\n\n%%bigquery --project isb-project-zero\nSELECT case_gdc_id, program_name\nFROM `isb-project-zero.GDC_metadata.rel24_caseData`\nwhere program_name = 'MMRF'\ngroup by case_gdc_id, program_name",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\n\nSELECT distinct case_id, count(case_id) as count\nFROM `isb-project-zero.GDC_metadata.rel24_fileData_current` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical\nWHERE program_name = 'MMRF'\nAND active.case_gdc_id = clinical.case_id\ngroup by case_id\norder by count",
"_____no_output_____"
]
],
[
[
"###test 5 - duplication verifcation\n\n**5. Check for any duplicate rows present in the table**\n",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\n\nSELECT count(submitter_id) AS count\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`\nGROUP BY submitter_id, case_id, diag__treat__count, fam_hist__count, follow__count, primary_site, disease_type, index_date, demo__demographic_id, demo__gender, demo__race, demo__ethnicity, demo__vital_status, demo__days_to_birth, demo__age_at_index, demo__days_to_death, demo__cause_of_death, demo__state, demo__created_datetime, demo__updated_datetime, diag__diagnosis_id, diag__primary_diagnosis, diag__days_to_last_known_disease_status, diag__progression_or_recurrence, diag__site_of_resection_or_biopsy, diag__age_at_diagnosis, diag__days_to_last_follow_up, diag__tumor_grade, diag__last_known_disease_status, diag__morphology, diag__tumor_stage, diag__iss_stage, diag__tissue_or_organ_of_origin, diag__state, diag__created_datetime, diag__updated_datetime, state, created_datetime, updated_datetime\nORDER BY count DESC\nLIMIT 10",
"_____no_output_____"
]
],
[
[
"###test 6 - case_id master clinical data table count verifcation\n\n**6. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
]
],
[
[
"# case_id count from the program MMRF clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` \ngroup by case_id\norder by count",
"_____no_output_____"
],
[
"# case_id count from the master clinical table\n\n%%bigquery --project isb-project-zero\n\nSELECT distinct case_id, count(case_id) as count\nFROM `isb-project-zero.GDC_metadata.rel24_fileData_current` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical\nWHERE program_name = 'MMRF'\nAND active.case_gdc_id = clinical.case_id\ngroup by case_id\norder by count\n",
"_____no_output_____"
]
],
[
[
"##Clin MMRF_diag__treat\n\n**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`\n\n[Table location](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF_diag__treat&page=table)\n\nSource : GDC API\n\nRelease version : v24\n",
"_____no_output_____"
],
[
"###test 1 - schema verification\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields\n \nAre the labels correct\n\nGoogle documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).\n\nGoogle documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).",
"_____no_output_____"
]
],
[
[
"#return all table information for rel24_clin_MMRF_diag__treat\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES')\nclin_query = Query.from_(clin_table) \\\n .select(' table_catalog, table_schema, table_name, table_type ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \\\n \nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nclin.head()",
"_____no_output_____"
],
[
"#return all table information for rel24_clin_MMRF_diag__treat\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['option_name'][i] + '\\n')\n print('\\t' + clin['option_value'][i] + '\\n')\n print('\\t' + clin['option_type'][i] + '\\n')\n\nelse:\n\n print('QC of friendly name, table description and labels --- FAILED')",
"QC of friendly name, table description and labels --- FAILED\n"
],
[
"#check for empty schemas in dataset rel24_clin_MMRF_diag__treat\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nclin.empty",
"Are there any empty cells in the table schema?\n"
]
],
[
[
"FIELD Descriptions pulled example below\n",
"_____no_output_____"
]
],
[
[
"#list of field descriptions for table rel24_clin_MMRF_diag__treat\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['table_name'][i] + '\\n')\n print('\\t' + clin['column_name'][i] + '\\n')\n print('\\t' + clin['description'][i] + '\\n')",
"rel24_clin_MMRF_diag__treat\n\n\tdiag__treat__treatment_id\n\n\t\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__diagnosis_id\n\n\tReference to ancestor diag__diagnosis_id, located in rel24_clin_MMRF_diag.\n\nrel24_clin_MMRF_diag__treat\n\n\tcase_id\n\n\tReference to ancestor case_id, located in rel24_clin_MMRF.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__days_to_treatment_start\n\n\tNumber of days between the date used for index and the date the treatment started.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__treatment_type\n\n\tText term that describes the kind of treatment administered.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__treatment_or_therapy\n\n\tA yes/no/unknown/not applicable indicator related to the administration of therapeutic agents received.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__therapeutic_agents\n\n\tText identification of the individual agent(s) used as part of a treatment regimen.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__days_to_treatment_end\n\n\tNumber of days between the date used for index and the date the treatment ended.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__regimen_or_line_of_therapy\n\n\tThe text term used to describe the regimen or line of therapy.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF_diag__treat\n\n\tdiag__treat__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\n"
],
[
"# check for empty schemas in dataset rel24_clin_MMRF_diag__treat\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nprint(clin)",
" table_name ... description\n0 rel24_clin_MMRF_diag__treat ... \n1 rel24_clin_MMRF_diag__treat ... Reference to ancestor diag__diagnosis_id, loca...\n2 rel24_clin_MMRF_diag__treat ... Reference to ancestor case_id, located in rel2...\n3 rel24_clin_MMRF_diag__treat ... Number of days between the date used for index...\n4 rel24_clin_MMRF_diag__treat ... Text term that describes the kind of treatment...\n5 rel24_clin_MMRF_diag__treat ... A yes/no/unknown/not applicable indicator rela...\n6 rel24_clin_MMRF_diag__treat ... Text identification of the individual agent(s)...\n7 rel24_clin_MMRF_diag__treat ... Number of days between the date used for index...\n8 rel24_clin_MMRF_diag__treat ... The text term used to describe the regimen or ...\n9 rel24_clin_MMRF_diag__treat ... The current state of the object.\n10 rel24_clin_MMRF_diag__treat ... A combination of date and time of day in the f...\n11 rel24_clin_MMRF_diag__treat ... A combination of date and time of day in the f...\n\n[12 rows x 3 columns]\n"
]
],
[
[
"###test 2 - row number verification\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_diag__treat`",
"_____no_output_____"
],
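[
"# Optional sketch (added): pull the rel24 and rel23 row counts into Python so they can be compared\n# directly. Assumes the BigQuery `client` object defined earlier in this notebook.\nr24 = client.query('SELECT COUNT(case_id) AS n FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`').to_dataframe()['n'][0]\nr23 = client.query('SELECT COUNT(case_id) AS n FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_diag__treat`').to_dataframe()['n'][0]\nprint('rel24 rows = ' + str(r24) + ', rel23 rows = ' + str(r23) + ', difference = ' + str(r24 - r23))",
"_____no_output_____"
],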
[
"%%bigquery --project isb-project-zero\nSELECT *\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`",
"_____no_output_____"
]
],
[
[
"###test 3 - manual verification\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\nISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/).\n\nBigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF_diag__treat&page=table).\n\nRun a manual check in the console with the steps mentioned in step 1.\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?",
"_____no_output_____"
],
[
"###test 4 - case_gdc_id file metadata table count verification\n\n**4. Number of case_id versus BigQuery metadata table**\n\n",
"_____no_output_____"
]
],
[
[
"# clinical case_id counts table reuslts below\n\n# Query below will display the number of cases presents in this table.\n\nclin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`')\nclin_query = Query.from_(clin_table) \\\n .select(' DISTINCT case_id, count(*) as count') \\\n .groupby('case_id')\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\n#print(clin_query_clean)\nclin = client.query(clin_query_clean).to_dataframe()\nprint('number of case from submitter_id = ' + str(len(clin.index)))\n",
"number of case from submitter_id = 994\n"
],
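[
"# Optional sketch (added): compare the distinct clinical case_id count against the GDC case metadata\n# table for the MMRF program. Assumes the BigQuery `client` object defined earlier in this notebook.\nclin_n = client.query('SELECT COUNT(DISTINCT case_id) AS n FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`').to_dataframe()['n'][0]\nmeta_sql = \"SELECT COUNT(DISTINCT case_gdc_id) AS n FROM `isb-project-zero.GDC_metadata.rel24_caseData` WHERE program_name = 'MMRF'\"\nmeta_n = client.query(meta_sql).to_dataframe()['n'][0]\nprint('clinical cases = ' + str(clin_n) + ', metadata cases = ' + str(meta_n))",
"_____no_output_____"
],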
[
"# GDC file metadata table case_gdc_id count for clinical below\n\n%%bigquery --project isb-project-zero\nSELECT case_gdc_id, program_name\nFROM `isb-project-zero.GDC_metadata.rel24_caseData`\nwhere program_name = 'MMRF'\ngroup by case_gdc_id, program_name",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\n\nSELECT distinct case_id, count(case_id) as count\nFROM `isb-project-zero.GDC_metadata.rel24_caseData` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical\nWHERE program_name = 'MMRF'\nAND active.case_gdc_id = clinical.case_id\ngroup by case_id\norder by count",
"_____no_output_____"
]
],
[
[
"###test 5 - duplication verifcation\n\n**5. Check for any duplicate rows present in the table**",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\n\nSELECT count(case_id) AS count\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`\ngroup by diag__treat__treatment_id, diag__diagnosis_id, case_id, diag__treat__days_to_treatment_start, diag__treat__treatment_type, diag__treat__treatment_or_therapy, diag__treat__therapeutic_agents, diag__treat__days_to_treatment_end, diag__treat__regimen_or_line_of_therapy, diag__treat__state, diag__treat__created_datetime, diag__treat__updated_datetime\nORDER BY count DESC\nLIMIT 10",
"_____no_output_____"
]
],
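[
[
"# Optional sketch (added): an alternative duplicate-row check that avoids listing every column by\n# grouping on the JSON serialization of each row (TO_JSON_STRING). Assumes the BigQuery `client`\n# object defined earlier in this notebook; a result of 0 means no fully duplicated rows.\ndup_sql = \"SELECT COUNT(*) AS dup_groups FROM (SELECT TO_JSON_STRING(t) AS row_json, COUNT(*) AS c FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` AS t GROUP BY row_json HAVING c > 1)\"\nprint(client.query(dup_sql).to_dataframe())",
"_____no_output_____"
]
],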
[
[
"###test 6 - case_id master clinical data table count verifcation\n\n**6. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
]
],
[
[
"# case_id count from the program MMRF clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` \ngroup by case_id\norder by count",
"_____no_output_____"
],
[
"# case_id count from the program MMRF_diag__treat clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` \ngroup by case_id\norder by count",
"_____no_output_____"
]
],
[
[
"##Clin MMRF_fam_hist\n\n**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`\n\n[Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_fam_histpage=table)\n\nSource : GDC API\n\nRelease version : v24",
"_____no_output_____"
],
[
"###test 1 - schema verification\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields\n \nAre the labels correct\n\nGoogle documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).\n\nGoogle documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).",
"_____no_output_____"
]
],
[
[
"#return all table information for rel24_clin_MMRF_fam_hist\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES')\nclin_query = Query.from_(clin_table) \\\n .select(' table_catalog, table_schema, table_name, table_type ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \\\n \nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nclin.head()",
"_____no_output_____"
],
[
"#return all table information for rel24_clin_MMRF_fam_hist\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['option_name'][i] + '\\n')\n print('\\t' + clin['option_value'][i] + '\\n')\n print('\\t' + clin['option_type'][i] + '\\n')\n\nelse:\n\n print('QC of friendly name, table description and labels --- FAILED')",
"QC of friendly name, table description and labels --- FAILED\n"
],
[
"#check for empty schemas in dataset rel24_clin_MMRF_fam_hist\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nclin.empty",
"Are there any empty cells in the table schema?\n"
]
],
[
[
"FIELD Descriptions pulled example below",
"_____no_output_____"
]
],
[
[
"#list of field descriptions for table rel24_clin_MMRF_fam_hist\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['table_name'][i] + '\\n')\n print('\\t' + clin['column_name'][i] + '\\n')\n print('\\t' + clin['description'][i] + '\\n')",
"rel24_clin_MMRF_fam_hist\n\n\tfam_hist__family_history_id\n\n\t\n\nrel24_clin_MMRF_fam_hist\n\n\tcase_id\n\n\tReference to ancestor case_id, located in rel24_clin_MMRF.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__relative_with_cancer_history\n\n\tThe yes/no/unknown indicator used to describe whether any of the patient's relatives have a history of cancer.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__relationship_primary_diagnosis\n\n\tThe text term used to describe the malignant diagnosis of the patient's relative with a history of cancer.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__relationship_type\n\n\tThe subgroup that describes the state of connectedness between members of the unit of society organized around kinship ties.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__relationship_gender\n\n\tThe text term used to describe the gender of the patient's relative with a history of cancer.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF_fam_hist\n\n\tfam_hist__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\n"
],
[
"# check for empty schemas in dataset rel24_clin_MMRF_fam_hist\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nprint(clin)",
" table_name ... description\n0 rel24_clin_MMRF_fam_hist ... \n1 rel24_clin_MMRF_fam_hist ... Reference to ancestor case_id, located in rel2...\n2 rel24_clin_MMRF_fam_hist ... The yes/no/unknown indicator used to describe ...\n3 rel24_clin_MMRF_fam_hist ... The text term used to describe the malignant d...\n4 rel24_clin_MMRF_fam_hist ... The subgroup that describes the state of conne...\n5 rel24_clin_MMRF_fam_hist ... The text term used to describe the gender of t...\n6 rel24_clin_MMRF_fam_hist ... The current state of the object.\n7 rel24_clin_MMRF_fam_hist ... A combination of date and time of day in the f...\n8 rel24_clin_MMRF_fam_hist ... A combination of date and time of day in the f...\n\n[9 rows x 3 columns]\n"
]
],
[
[
"###test 2 - row number verification\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_fam_hist`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT *\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`",
"_____no_output_____"
]
],
[
[
"###test 3 - manual verification\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\nISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/).\n\nBigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_fam_hist&page=table).\n\nRun a manual check in the console with the steps mentioned in step 1 \n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?",
"_____no_output_____"
],
[
"###test 4 - case_gdc_id file metadata table count verification\n\n**4. Number of case_id versus BigQuery metadata table**",
"_____no_output_____"
]
],
[
[
"# clinical case_id counts table reuslts below\n\n# Query below will display the number of cases presents in this table.\n\nclin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`')\nclin_query = Query.from_(clin_table) \\\n .select(' DISTINCT case_id, count(*) as count') \\\n .groupby('case_id')\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\n#print(clin_query_clean)\nclin = client.query(clin_query_clean).to_dataframe()\nprint('number of case from submitter_id = ' + str(len(clin.index)))",
"number of case from submitter_id = 826\n"
],
[
"# GDC file metadata table case_gdc_id count for clinical below\n\n%%bigquery --project isb-project-zero\nSELECT case_gdc_id, program_name\nFROM `isb-project-zero.GDC_metadata.rel24_caseData`\nwhere program_name = 'MMRF'\ngroup by case_gdc_id, program_name",
"_____no_output_____"
]
],
[
[
"###test 5 - duplication verifcation\n\n**5. Check for any duplicate rows present in the table**",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\n\nSELECT count(case_id) AS count\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`\ngroup by fam_hist__family_history_id, case_id, fam_hist__relative_with_cancer_history, fam_hist__relationship_primary_diagnosis, fam_hist__relationship_type, fam_hist__relationship_gender, fam_hist__state, fam_hist__created_datetime, fam_hist__updated_datetime\nORDER BY count DESC\nLIMIT 10",
"_____no_output_____"
]
],
[
[
"###test 6 - case_id master clinical data table count verifcation\n\n**6. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
]
],
[
[
"# case_id count from the program MMRF clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` \ngroup by case_id\norder by count",
"_____no_output_____"
],
[
"# case_id count from the program MMRF_fam_hist clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` \ngroup by case_id\norder by count",
"_____no_output_____"
]
],
[
[
"##Clin MMRF_follow\n\n**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow`\n\n[Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel23_fileData_legacy&page=table)\n\nSource : GDC API\n\nRelease version : v24",
"_____no_output_____"
],
[
"###test 1 - schema verification\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields\n \nAre the labels correct\n\nGoogle documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).\n\nGoogle documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).",
"_____no_output_____"
]
],
[
[
"#return all table information for rel24_clin_MMRF_follow\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES')\nclin_query = Query.from_(clin_table) \\\n .select(' table_catalog, table_schema, table_name, table_type ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow') \\\n \nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nclin.head()",
"_____no_output_____"
],
[
"#return all table information for rel24_clin_MMRF_follow\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['option_name'][i] + '\\n')\n print('\\t' + clin['option_value'][i] + '\\n')\n print('\\t' + clin['option_type'][i] + '\\n')\n\nelse:\n\n print('QC of friendly name, table description and labels --- FAILED')",
"QC of friendly name, table description and labels --- FAILED\n"
],
[
"#check for empty schemas in dataset rel24_clin_MMRF_follow\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nclin.empty",
"Are there any empty cells in the table schema?\n"
]
],
[
[
"FIELD Descriptions pulled example below\n\n\n",
"_____no_output_____"
]
],
[
[
"#list of field descriptions for table rel24_clin_MMRF_follow\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['table_name'][i] + '\\n')\n print('\\t' + clin['column_name'][i] + '\\n')\n print('\\t' + clin['description'][i] + '\\n')",
"rel24_clin_MMRF_follow\n\n\tfollow__follow_up_id\n\n\tReference to ancestor follow__follow_up_id, located in rel24_clin_MMRF_follow.\n\nrel24_clin_MMRF_follow\n\n\tcase_id\n\n\tReference to ancestor case_id, located in rel24_clin_MMRF.\n\nrel24_clin_MMRF_follow\n\n\tfollow__mol_test__count\n\n\tTotal child record count (located in cases.follow_ups table).\n\nrel24_clin_MMRF_follow\n\n\tfollow__days_to_follow_up\n\n\tNumber of days between the date used for index and the date of the patient's last follow-up appointment or contact.\n\nrel24_clin_MMRF_follow\n\n\tfollow__height\n\n\tThe height of the patient in centimeters.\n\nrel24_clin_MMRF_follow\n\n\tfollow__weight\n\n\tThe weight of the patient measured in kilograms.\n\nrel24_clin_MMRF_follow\n\n\tfollow__ecog_performance_status\n\n\tThe ECOG functional performance status of the patient/participant.\n\nrel24_clin_MMRF_follow\n\n\tfollow__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF_follow\n\n\tfollow__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF_follow\n\n\tfollow__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\n"
],
[
"# check for empty schemas in dataset rel24_clin_MMRF_follow\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nprint(clin)",
" table_name ... description\n0 rel24_clin_MMRF_follow ... Reference to ancestor follow__follow_up_id, lo...\n1 rel24_clin_MMRF_follow ... Reference to ancestor case_id, located in rel2...\n2 rel24_clin_MMRF_follow ... Total child record count (located in cases.fol...\n3 rel24_clin_MMRF_follow ... Number of days between the date used for index...\n4 rel24_clin_MMRF_follow ... The height of the patient in centimeters.\n5 rel24_clin_MMRF_follow ... The weight of the patient measured in kilograms.\n6 rel24_clin_MMRF_follow ... The ECOG functional performance status of the ...\n7 rel24_clin_MMRF_follow ... The current state of the object.\n8 rel24_clin_MMRF_follow ... A combination of date and time of day in the f...\n9 rel24_clin_MMRF_follow ... A combination of date and time of day in the f...\n\n[10 rows x 3 columns]\n"
]
],
[
[
"###test 2 - row number verification\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT *\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`",
"_____no_output_____"
]
],
[
[
"###test 3 - manual verification\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\nISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/).\n\nBigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_fam_hist&page=table).\n\nRun a manual check in the console with the steps mentioned in step 1 \n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?",
"_____no_output_____"
],
[
"###test 4 - case_gdc_id file metadata table count verification\n\n**4. Number of case_id versus BigQuery metadata table**",
"_____no_output_____"
]
],
[
[
"# clinical case_id counts table reuslts below\n\n# Query below will display the number of cases presents in this table.\n\nclin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`')\nclin_query = Query.from_(clin_table) \\\n .select(' DISTINCT case_id, count(*) as count') \\\n .groupby('case_id')\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\n#print(clin_query_clean)\nclin = client.query(clin_query_clean).to_dataframe()\nprint('number of case from submitter_id = ' + str(len(clin.index)))",
"number of case from submitter_id = 995\n"
],
[
"# GDC file metadata table case_gdc_id count for clinical below\n\n%%bigquery --project isb-project-zero\nSELECT case_gdc_id, program_name\nFROM `isb-project-zero.GDC_metadata.rel24_caseData`\nwhere program_name = 'MMRF'\ngroup by case_gdc_id, program_name",
"_____no_output_____"
]
],
[
[
"###test 5 - duplication verifcation\n\n**5. Check for any duplicate rows present in the table**",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\n\nSELECT count(case_id) AS count\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`\ngroup by follow__follow_up_id, case_id, follow__mol_test__count, follow__days_to_follow_up, follow__height, follow__weight, follow__ecog_performance_status, follow__state, follow__created_datetime, follow__updated_datetime\nORDER BY count DESC\nLIMIT 10",
"_____no_output_____"
]
],
[
[
"###test 6 - case_id master clinical data table count verifcation\n\n**6. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
]
],
[
[
"# case_id count from the program MMRF clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` \ngroup by case_id\norder by count",
"_____no_output_____"
],
[
"# case_id count from the program MMRF_follow clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow` \ngroup by case_id\norder by count",
"_____no_output_____"
]
],
[
[
"##Clin MMRF_follow__mol_test\n\n**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`\n\n[Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel23_fileData_slide2caseIDmap&page=table)\n\nSource : GDC API\n\nRelease version : v24",
"_____no_output_____"
],
[
"###test 1 - schema verification\n\n**1. Check schema**\n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields\n \nAre the labels correct\n\nGoogle documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).\n\nGoogle documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).",
"_____no_output_____"
]
],
[
[
"#return all table information for rel24_clin_MMRF_follow__mol_test\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES')\nclin_query = Query.from_(clin_table) \\\n .select(' table_catalog, table_schema, table_name, table_type ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \\\n \nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nclin.head()",
"_____no_output_____"
],
[
"#return all table information for rel24_clin_MMRF_follow__mol_test\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['option_name'][i] + '\\n')\n print('\\t' + clin['option_value'][i] + '\\n')\n print('\\t' + clin['option_type'][i] + '\\n')\n\nelse:\n\n print('QC of friendly name, table description and labels --- FAILED')",
"QC of friendly name, table description and labels --- FAILED\n"
],
[
"#check for empty schemas in dataset rel24_clin_MMRF_follow__mol_test\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS')\nclin_query = Query.from_(clin_table) \\\n .select(' table_name, option_name, option_type, option_value ') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\nprint(\"Are there any empty cells in the table schema?\")\nclin.empty",
"Are there any empty cells in the table schema?\n"
]
],
[
[
"FIELD Descriptions pulled example below\n",
"_____no_output_____"
]
],
[
[
"#list of field descriptions for table rel24_clin_MMRF_follow__mol_test\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\npandas.options.display.max_rows\n\n\nfor i in range(len(clin)):\n print(clin['table_name'][i] + '\\n')\n print('\\t' + clin['column_name'][i] + '\\n')\n print('\\t' + clin['description'][i] + '\\n')",
"rel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__molecular_test_id\n\n\t\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__follow_up_id\n\n\tReference to ancestor follow__follow_up_id, located in rel24_clin_MMRF_follow.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tcase_id\n\n\tReference to ancestor case_id, located in rel24_clin_MMRF.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__biospecimen_type\n\n\tThe text term used to describe the biological material used for testing, diagnostic, treatment or research purposes.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__laboratory_test\n\n\tThe text term used to describe the medical testing used to diagnose, treat or further understand a patient's disease.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__test_result\n\n\tThe text term used to describe the result of the molecular test. If the test result was a numeric value see test_value.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__test_units\n\n\tThe text term used to describe the units of the test value for a molecular test. This property is used in conjunction with test_value.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__test_value\n\n\tThe text term or numeric value used to describe a sepcific result of a molecular test.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__molecular_analysis_method\n\n\tThe text term used to describe the method used for molecular analysis.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__gene_symbol\n\n\tThe text term used to describe a gene targeted or included in molecular analysis. For rearrangements, this is shold be used to represent the reference gene.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__state\n\n\tThe current state of the object.\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__created_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\nrel24_clin_MMRF_follow__mol_test\n\n\tfollow__mol_test__updated_datetime\n\n\tA combination of date and time of day in the form [-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]\n\n"
],
[
"# check for empty schemas in dataset rel24_clin_MMRF_follow__mol_test\n\nclin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS')\nclin_query = Query.from_(clin_table) \\\n .select('table_name, column_name, description') \\\n .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \\\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\nclin = client.query(clin_query_clean).to_dataframe()\nprint(clin)",
" table_name ... description\n0 rel24_clin_MMRF_follow__mol_test ... \n1 rel24_clin_MMRF_follow__mol_test ... Reference to ancestor follow__follow_up_id, lo...\n2 rel24_clin_MMRF_follow__mol_test ... Reference to ancestor case_id, located in rel2...\n3 rel24_clin_MMRF_follow__mol_test ... The text term used to describe the biological ...\n4 rel24_clin_MMRF_follow__mol_test ... The text term used to describe the medical tes...\n5 rel24_clin_MMRF_follow__mol_test ... The text term used to describe the result of t...\n6 rel24_clin_MMRF_follow__mol_test ... The text term used to describe the units of th...\n7 rel24_clin_MMRF_follow__mol_test ... The text term or numeric value used to describ...\n8 rel24_clin_MMRF_follow__mol_test ... The text term used to describe the method used...\n9 rel24_clin_MMRF_follow__mol_test ... The text term used to describe a gene targeted...\n10 rel24_clin_MMRF_follow__mol_test ... The current state of the object.\n11 rel24_clin_MMRF_follow__mol_test ... A combination of date and time of day in the f...\n12 rel24_clin_MMRF_follow__mol_test ... A combination of date and time of day in the f...\n\n[13 rows x 3 columns]\n"
]
],
[
[
"###test 2 - row number verification\n\n**2. Look at table row number and size**\n\nDo these metrics make sense?",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT COUNT(case_id)\nFROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow__mol_test`",
"_____no_output_____"
],
[
"%%bigquery --project isb-project-zero\nSELECT *\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`",
"_____no_output_____"
]
],
[
[
"###test 3 - manual verification\n\n**3. Scroll through table manually**\n\nSee if anything stands out - empty columns, etc.\n\nThe BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. \n\nISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/).\n\nBigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_follow__mol_test&page=table).\n\nRun a manual check in the console with the steps mentioned in step 1 \n\nAre all the fields labeled?\n\nIs there a table description?\n\nDo the field labels make sense for all fields?\n \nAre the labels correct?",
"_____no_output_____"
],
[
"###test 4 - case_gdc_id file metadata table count verification\n\n**4. Number of case_id versus BigQuery metadata table**",
"_____no_output_____"
]
],
[
[
"# clinical case_id counts table reuslts below\n\n# Query below will display the number of cases presents in this table.\n\nclin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`')\nclin_query = Query.from_(clin_table) \\\n .select(' DISTINCT case_id, count(*) as count') \\\n .groupby('case_id')\n\nclin_query_clean = str(clin_query).replace('\"', \"\")\n#print(clin_query_clean)\nclin = client.query(clin_query_clean).to_dataframe()\nprint('number of case from submitter_id = ' + str(len(clin.index)))",
"number of case from submitter_id = 995\n"
],
[
"# GDC file metadata table case_gdc_id count for clinical below\n\n%%bigquery --project isb-project-zero\nSELECT case_gdc_id, program_name\nFROM `isb-project-zero.GDC_metadata.rel24_caseData`\nwhere program_name = 'MMRF'\ngroup by case_gdc_id, program_name",
"_____no_output_____"
]
],
[
[
"###test 5 - duplication verifcation\n\n**5. Check for any duplicate rows present in the table**",
"_____no_output_____"
]
],
[
[
"%%bigquery --project isb-project-zero\n\nSELECT count(case_id) AS count\nFROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`\ngroup by follow__mol_test__molecular_test_id, follow__follow_up_id, case_id, follow__mol_test__biospecimen_type, follow__mol_test__laboratory_test, follow__mol_test__test_result, follow__mol_test__test_units, follow__mol_test__test_value, follow__mol_test__molecular_analysis_method, follow__mol_test__gene_symbol, follow__mol_test__state, follow__mol_test__created_datetime, follow__mol_test__updated_datetime \nORDER BY count DESC\nLIMIT 10",
"_____no_output_____"
]
],
[
[
"###test 6 - case_id master clinical data table count verifcation\n\n**6. Verify case_id count of table against master rel_clinical_data table**",
"_____no_output_____"
]
],
[
[
"# case_id count from the program MMRF clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` \ngroup by case_id\norder by count",
"_____no_output_____"
],
[
"# case_id count from the program MMRF_follow clinical table\n\n%%bigquery --project isb-project-zero\n\nselect distinct case_id, count(case_id) as count\nfrom `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` \ngroup by case_id\norder by count",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c8e68cfc2a8c6f697c5519fd4a781854981fda | 310,624 | ipynb | Jupyter Notebook | bert_embed_seq2seq.ipynb | arodriguezca/NLP-dataset | 78e34b6bc03c0dccbcd61c83c6ab6b5e487232bb | [
"MIT"
] | null | null | null | bert_embed_seq2seq.ipynb | arodriguezca/NLP-dataset | 78e34b6bc03c0dccbcd61c83c6ab6b5e487232bb | [
"MIT"
] | null | null | null | bert_embed_seq2seq.ipynb | arodriguezca/NLP-dataset | 78e34b6bc03c0dccbcd61c83c6ab6b5e487232bb | [
"MIT"
] | null | null | null | 46.237571 | 29,640 | 0.530226 | [
[
[
"#Libraries\n\nimport warnings\nwarnings.filterwarnings('ignore')\nimport pandas as pd\nimport numpy as np\nimport os\nimport re\nimport json\nimport string\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport plotly.express as px\nimport plotly.graph_objects as go\nfrom tqdm.autonotebook import tqdm\nfrom functools import partial\nimport torch\nimport random\nfrom sklearn.model_selection import train_test_split\n!pip install transformers\nfrom transformers import BertTokenizer, BertModel\n#import spacy",
"Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d8/b2/57495b5309f09fa501866e225c84532d1fd89536ea62406b2181933fb418/transformers-4.5.1-py3-none-any.whl (2.1MB)\n\u001b[K |████████████████████████████████| 2.1MB 9.7MB/s \n\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nCollecting tokenizers<0.11,>=0.10.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ae/04/5b870f26a858552025a62f1649c20d29d2672c02ff3c3fb4c688ca46467a/tokenizers-0.10.2-cp37-cp37m-manylinux2010_x86_64.whl (3.3MB)\n\u001b[K |████████████████████████████████| 3.3MB 52.3MB/s \n\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.0.12)\nCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/75/ee/67241dc87f266093c533a2d4d3d69438e57d7a90abb216fa076e7d475d4a/sacremoses-0.0.45-py3-none-any.whl (895kB)\n\u001b[K |████████████████████████████████| 901kB 54.9MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers) (20.9)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from transformers) (3.10.1)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.41.1)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2020.12.5)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: typing-extensions>=3.6.4; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.7.4.3)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.4.1)\nInstalling collected packages: tokenizers, sacremoses, transformers\nSuccessfully installed sacremoses-0.0.45 tokenizers-0.10.2 transformers-4.5.1\n"
],
[
"gpu_info = !nvidia-smi\ngpu_info = '\\n'.join(gpu_info)\nif gpu_info.find('failed') >= 0:\n print('Select the Runtime > \"Change runtime type\" menu to enable a GPU accelerator, ')\n print('and then re-execute this cell.')\nelse:\n print(gpu_info)\n\nprint(f'GPU available: {torch.cuda.is_available()}')\nrandom.seed(10)",
"Tue May 4 15:27:55 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 465.19.01 Driver Version: 460.32.03 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |\n| N/A 35C P0 26W / 250W | 0MiB / 16280MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\nGPU available: True\n"
],
[
"print(torch.cuda.is_available())\nif torch.cuda.is_available():\n device = torch.device(\"cuda\")\nelse:\n device = torch.device(\"cpu\")\nprint(\"Using device:\", device)",
"True\nUsing device: cuda\n"
]
],
[
[
"## Vocabulary\nThis is useful only for the decoder; we get the vocab from the complete data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"data.csv\")\ndf = df.sample(frac=1, random_state=100).reset_index(drop=True)\ndf.head()\n# df = df.iloc[0:10,:]\n\ntext = []\nfor i in range(len(df)):\n t = df.loc[i][6]\n text.append((t, df.loc[i][5]))",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"pad_word = \"<pad>\"\nbos_word = \"<s>\"\neos_word = \"</s>\"\nunk_word = \"<unk>\"\npad_id = 0\nbos_id = 1\neos_id = 2\nunk_id = 3\n \ndef normalize_sentence(s):\n s = re.sub(r\"([.!?])\", r\" \\1\", s)\n s = re.sub(r\"[^a-zA-Z.!?]+\", r\" \", s)\n s = re.sub(r\"\\s+\", r\" \", s).strip()\n return s\n\nclass Vocabulary:\n def __init__(self):\n self.word_to_id = {pad_word: pad_id, bos_word: bos_id, eos_word:eos_id, unk_word: unk_id}\n self.word_count = {}\n self.id_to_word = {pad_id: pad_word, bos_id: bos_word, eos_id: eos_word, unk_id: unk_word}\n self.num_words = 4\n \n def get_ids_from_sentence(self, sentence):\n sentence = normalize_sentence(sentence)\n sent_ids = [bos_id] + [self.word_to_id[word] if word in self.word_to_id \\\n else unk_id for word in sentence.split()] + \\\n [eos_id]\n return sent_ids\n \n def tokenized_sentence(self, sentence):\n sent_ids = self.get_ids_from_sentence(sentence)\n return [self.id_to_word[word_id] for word_id in sent_ids]\n\n def decode_sentence_from_ids(self, sent_ids):\n words = list()\n for i, word_id in enumerate(sent_ids):\n if word_id in [bos_id, eos_id, pad_id]:\n # Skip these words\n continue\n else:\n words.append(self.id_to_word[word_id])\n return ' '.join(words)\n\n def add_words_from_sentence(self, sentence):\n sentence = normalize_sentence(sentence)\n for word in sentence.split():\n if word not in self.word_to_id:\n # add this word to the vocabulary\n self.word_to_id[word] = self.num_words\n self.id_to_word[self.num_words] = word\n self.word_count[word] = 1\n self.num_words += 1\n else:\n # update the word count\n self.word_count[word] += 1\n\nvocab = Vocabulary()\nfor src, tgt in text:\n vocab.add_words_from_sentence(src)\n vocab.add_words_from_sentence(tgt)\nprint(f\"Total words in the vocabulary = {vocab.num_words}\")",
"Total words in the vocabulary = 56347\n"
]
],
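[
[
"# Optional usage sketch (added): round-trip one target label through the vocabulary to sanity-check\n# encoding and decoding. Uses the (src, tgt) pairs and `vocab` built in the cells above.\nexample_src, example_tgt = text[0]\nexample_ids = vocab.get_ids_from_sentence(example_tgt)\nprint(example_ids)\nprint(vocab.decode_sentence_from_ids(example_ids))",
"_____no_output_____"
]
],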
[
[
"## Create chunks for each publication",
"_____no_output_____"
]
],
[
[
"# Every publication input will be mapped into a variable numbers of chunks (split by sentence) that are less than chunk_max_len\n# These can then be batched by encoding strings, then padding them\nchunk_max_len = 512\npublication_ids = df['Id']\ndataset_label = df['cleaned_label']\nchunked_text = [[]] * len(df.index) # publication id x chunks - left in string format for flexibility in encoding\nchunk_labels = [[]] * len(df.index) # publication id x chunk - if label in chunk, True else False\n\nfor i in range(len(df.index)):\n chunked_text[i] = []\n chunk_labels[i] = []\n chunk = ''\n for s in df['text'][i].split('.'):\n # print(s)\n new_chunk = chunk + s.strip() \n if len(s)>0 and s[-1]!='.':\n new_chunk += '. '\n if len(new_chunk.split(' ')) > chunk_max_len:\n # labels_per_chunk[i].append(True if df['dataset_label'][i] in chunk else False)\n chunk_labels[i].append(1 if df['dataset_label'][i] in chunk else 0)\n chunked_text[i].append(chunk)\n chunk = s\n else:\n chunk = new_chunk\n # labels_per_chunk[i].append(True if df['dataset_label'][i] in chunk else False)\n chunk_labels[i].append(1 if df['dataset_label'][i] in chunk else 0)\n chunked_text[i].append(chunk)\n\nprint(len(chunked_text[0]), chunked_text[0])\nprint(dataset_label[0])",
"4 [\"In its original form, the amyloid cascade hypothesis of Alzheimer's disease holds that fibrillar deposits of amyloid are an early, driving force in pathological events leading ultimately to neuronal death. Early clinicopathologic investigations highlighted a number of inconsistencies leading to an updated hypothesis in which amyloid plaques give way to amyloid oligomers as the driving force in pathogenesis. Rather than focusing on the inconsistencies, amyloid imaging studies have tended to highlight the overlap between regions that show early amyloid plaque signal on positron emission tomography and that also happen to be affected early in Alzheimer's disease. Recent imaging studies investigating the regional dependency between metabolism and amyloid plaque deposition have arrived at conflicting results, with some showing regional associations and other not. We extracted multimodal neuroimaging data from the Alzheimer's disease neuroimaging database for 227 healthy controls and 434 subjects with mild cognitive impairment. We analyzed regional patterns of amyloid deposition, regional glucose metabolism and regional atrophy using florbetapir ( 18 F) positron emission tomography, 18 F-fuordeoxyglucose positron emission tomography and T1 weighted magnetic resonance imaging, respectively. Specifically, we derived gray matter density and standardized uptake value ratios for both positron emission tomography tracers in 404 functionally defined regions of interest. We examined the relation between regional glucose metabolism and amyloid plaques using linear models. For each region of interest, correcting for regional gray matter density, age, education and disease status, we tested the association of regional glucose metabolism with (i) cortex-wide florbetapir uptake, (ii) regional (i. e. , in the same region of interest) florbetapir uptake and (iii) regional florbetapir uptake while correcting in addition for cortex-wide florbetapir uptake. P-values for each setting were Bonferroni corrected for 404 tests. Regions showing significant hypometabolism with increasing cortex-wide amyloid burden were classic Alzheimer's disease-related regions: the medial and lateral parietal cortices. The associations between regional amyloid burden and regional metabolism were more heterogeneous: there were significant hypometabolic effects in posterior cingulate, precuneus, and parietal regions but also significant positive associations in bilateral hippocampus and entorhinal cortex. However, after correcting for global amyloid burden, very few of the negative associations remained and the number of positive associations increased. Given the wide-spread distribution of amyloid plaques, if the canonical cascade hypothesis were true, we would expect wide-spread, cortical hypometabolism. Instead, cortical hypometabolism appears to be linked to global amyloid burden. Thus we conclude that regional fibrillar amyloid deposition has little to no association with regional hypometabolism. The amyloid cascade hypothesis of Alzheimer's disease, in its original, unmodified form, posits that the protein amyloid-β is the starting point for a series of pathogenic changes that lead from neuronal dysfunction and synapse loss to cell death (Hardy and Allsop, 1991; Hardy and Higgins, 1992). Particular weight is given, in the unmodified version of the hypothesis, to the large fibrillar aggregates of amyloid-β known as amyloid plaques. The link between amyloid-β, in some form, and Alzheimer's disease is unassailable. 
\", \" Disease-causing mutations in the three genes that lead to autosomal dominant Alzheimer's disease have been shown to promote the formation of the putatively neurotoxic form of amyloid-β, a peptide of 42 amino acids (Suzuki et al, 1994; Scheuner et al. , 1996; Gomez-Isla et al. , 1999). While amyloid-β is, irrefutably, an initiating factor in Alzheimer's disease pathogenesis, the remainder of the amyloid cascade hypothesis is much less firmly established. Amyloid plaques are, along with tau-based neurofibrillary tangles, one of the pathologic hallmarks of Alzheimer's disease (Braak and Braak, 1991). They are large, abundant, and easily seen with basic microscopy stains and, as such, were initially assumed to have a key role in the pathogenic cascade (Hardy and Higgins, 1992). From the earliest days of clinicopathologic investigations, however, a number of glaring inconsistencies arose. Chief among these is the oft-replicated finding that there is little association between where amyloid plaques are found at autopsy and which brain regions were dysfunctional in the patient's clinical course (Price et al. , 1991; Arriagada et al. , 1992; Giannakopoulos et al. , 1997; Hardy and Selkoe, 2002). This discordance is most obvious in the entorhinal cortex and hippocampus. These medial temporal lobe structures, crucial to episodic memory function, are the first to fail clinically and the first to develop neurofibrillary tangle pathology. Amyloid plaque deposition, however, does not occur in these regions until relatively late in the course (Price et al. , 1991; Arriagada et al. , 1992; Giannakopoulos et al. , 1997). Conversely, other regions, like the medial prefrontal cortex, typically show abundant amyloid plaque pathology at autopsy despite being relatively functionally spared clinically (Price et al. , 1991; Arriagada et al. , 1992; Giannakopoulos et al. , 1997). As the field wrestled with these inconsistencies, evidence began to accrue suggesting that Aβ was still the key driver but that its pathogenic properties were related to smaller soluble aggregates of the peptide referred to as oligomers (Lambert et al. , 1998; Hartley et al. , 1999). These findings have allowed for an updated, reconciled version of the amyloid cascade hypothesis in which amyloid plaques give way to amyloid oligomers as the driving force in pathogenesis (Hardy and Selkoe, 2002). The advent of amyloid PET imaging should have reinforced this update to the hypothesis. The correlation between plaque quantity and distribution as measured with PET and plaque quantity and distribution at autopsy is extraordinarily high (Ikonomovic et al. , 2008; Hatsuta et al. , 2015). Unsurprisingly, therefore, imaging studies of Alzheimer's began to show many of the same patterns that the neuropathology literature had been documenting for the last several decades. After age 70, roughly 25% of healthy older controls without cognitive complaints or deficits on testing harbor a large burden of amyloid plaques on PET imaging (Rowe et al. , 2010; Chetelat et al. , 2013; Jack et al. , 2014). \", \" The medial prefrontal cortex is among the first regions to show high signal on amyloid PET scans in healthy older controls despite remaining clinically unaffected even late into the course of Alzheimer's disease (Jack et al, 2008). Conversely, even late into the course of Alzheimer's disease cognitive symptoms, the medial temporal lobes tend to show little to no increased signal on amyloid PET (Jack et al. , 2008). 
Despite its role in re-introducing these decades-old arguments against the primacy of plaques in Alzheimer's disease pathogenesis, amyloid PET imaging has, oddly, seemed to have the opposite effect on the field. Rather than focusing on the inconsistencies, studies have tended to highlight the overlap between regions that show early amyloid plaque signal on PET and that happen to be affected early in Alzheimer's disease (Buckner et al. , 2005; Sperling et al. , 2009; Koch et al. , 2014). The PCC and the IPC are most commonly cited in this regard. The PCC and IPC form the posterior aspect of the brain's DMN, a set of functionally connected regions-that also includes the medial prefrontal cortex and medial temporal lobe structures-that relates to memory function and appears to be targeted early by Alzheimer's disease pathology (Raichle et al. , 2001; Greicius et al. , 2003; Greicius et al. , 2004; Shirer et al. , 2012). One highly cited early study in this vein pointed out the qualitative similarity between a resting-state fMRI map of the DMN, a map of glucose hypometabolism in Alzheimer's disease patients, and a map of amyloid deposition in Alzheimer's disease patients (Buckner et al. , 2005). This led to the oversimplified interpretation that amyloid plaque deposition occurs in the DMN and results in the dysfunction of this network. No attention was given to the findings, evident from the images, that Alzheimer's disease patients typically have normal metabolism in the medial prefrontal cortex despite having abundant amyloid deposition. Similarly, while the medial temporal lobe is a key component of the DMN and its metabolism is already reduced in the earliest clinical stages of Alzheimer's disease, the amyloid map in this study (as in most subsequent amyloid PET studies)\\nshows no uptake in the hippocampus (Buckner et al. , 2005; Kemppainen et al. , 2006; Edison et al. , 2007; Jack et al. , 2008) , though with rare exceptions (Frisoni et al. , 2009; Sepulcre et al. , 2013). A few multimodal imaging studies using FDG PET and amyloid PET approached the question of whether local amyloid plaque deposition is correlated with local levels of glucose metabolism. These studies produced conflicting results with some showing an association between local amyloid plaque deposition and glucose hypometabolism in some brain regions (Engler et al. , 2006; Edison et al. , 2007; Cohen et al. , 2009; Lowe et al. , 2014) and others showing the absence of any correlation (Li et al. , 2008; Rabinovici et al. , 2010; Furst et al. , 2012). \", \" Further work showed that the dependency may be more complex and relationship between plaques and metabolism may change depending on disease stages (Cohen et al, 2009) or brain regions (La Joie et al. , 2012). Discrepancies in the findings may originate from the different subject populations that were studied. For instance, Lowe et al. (2014) studied only healthy controls, while Furst et al. (2012) focused on AD subjects. A second source for the discrepancies may be the limited sample sizes of most studies: with the exception of Lowe et al. (2014) , previous studies comprised fewer than 100 subjects and the specific regional analysis within a single disease group did typically not exceed two dozen subjects (Engler et al. , 2006; Edison et al. , 2007; Li et al. , 2008; Cohen et al. , 2009; La Joie et al. , 2012). 
Moreover, many studies relied on a plain correlation analysis between the regional tracer intensities without correcting for cofounders such as age, sex, education and extent of amyloid pathology. Here we investigated the relationship between regional amyloid plaque deposition and regional glucose hypometabolism, using a large dataset comprising hundreds of subjects (healthy controls and patients with MCI) obtained from the ADNI (Alzheimer's disease neuroimaging initiative) database who were imaged with both amyloid PET ( 18 F-florbetapir PET) and FDG PET. \"]\nadni\n"
]
],
[
[
"## Create dataset \nFor each publication, it will return a tensor with all the chunks inside\nTherefore, each pass of our bi-LSTM will work with one single publication (with all the chunks inside that publication)",
"_____no_output_____"
]
],
[
[
"from transformers import BertModel, BertTokenizerFast\nbert_model = BertModel.from_pretrained('bert-base-uncased').to(device)\nbert_model.eval()\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\n",
"_____no_output_____"
],
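[
"# Hedged sketch (not part of the original pipeline): what the frozen BERT encoder returns\n# for a small batch of text chunks. Same Hugging Face calls as the cell above, but run on\n# CPU with toy sentences so it is self-contained.\nimport torch\nfrom transformers import BertModel, BertTokenizerFast\n\ntok = BertTokenizerFast.from_pretrained('bert-base-uncased')\nenc = BertModel.from_pretrained('bert-base-uncased')\nenc.eval()\n\nchunks = ['data were obtained from the ADNI database', 'a short second chunk']\nwith torch.no_grad():\n    batch = tok(chunks, padding=True, truncation=True, return_tensors='pt')\n    out = enc(**batch)\n\n# out[0] is the last hidden state with shape (batch_size, seq_len, 768); the collate_fn\n# below permutes it to (seq_len, batch_size, 768) before feeding the GRU encoder.\nprint(out[0].shape, batch['attention_mask'].shape)",
"_____no_output_____"
],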
[
"from torch.nn.utils.rnn import pad_sequence\nfrom torch.utils.data import Dataset, DataLoader\n\nclass ChunkedDataset(Dataset):\n \"\"\"\n @author: Alexander Rodriguez\n \"\"\"\n\n def __init__(self, publication_ids, chunked_text, chunk_labels, dataset_label, device, tokenizer, bert_model):\n \"\"\"\n Args:\n chunked_text: list of str, contains all the chunks\n chunk_labels: list booleans, contain whether or not the label is in the chunks\n dataset_label: string, same label for all chunks in the publication\n device: cpu or cuda\n \"\"\"\n self.publication_ids = publication_ids\n self.chunked_text = chunked_text\n self.chunk_labels = chunk_labels\n self.dataset_label = dataset_label\n self.tokenizer = tokenizer\n self.device = device\n self.bert_model = bert_model\n \n def __len__(self):\n return len(self.publication_ids)\n\n def __getitem__(self, idx):\n if torch.is_tensor(idx):\n idx = idx.tolist()\n\n return {\"publication_ids\":self.publication_ids[idx], \"chunked_text\":self.chunked_text[idx], \n \"chunk_labels\":self.chunk_labels[idx], \"dataset_label\":self.dataset_label[idx]}\n\ndef collate_fn(data):\n \"\"\"Creates mini-batch tensors for several publications\n\n Return: A dictionary for each chunk (read below)\n\n Each training observation will represent one chunk, therefore we have:\n\n input_ids: the word ids from the Bert tokenizer\n tensor shape (max_input_sequence_length,batch_size)\n\n input_tensor: the Bert word embeddings for the sequence (chunk)\n tensor shape (max_input_sequence_length,batch_size,bert_dim)\n\n attention_mask: useful for knowing where the sequence ends\n \n Each chunk has two labels:\n\n chunk_labels: (list of 0/1) whether or not the chunk contains the label\n\n output_ids: the ids that have to be predicted for the target sequence\n tensor shape (max_output_sequence_length,batch_size)\n\n Sequences are padded to the maximum length of mini-batch sequences (dynamic padding).\n \"\"\"\n \n chunked_text = []; chunk_labels = []; dataset_label = []\n for publication in data:\n # for chunk in publication:\n chunked_text += [chunk for chunk in publication[\"chunked_text\"] ]\n chunk_labels += [chunk for chunk in publication[\"chunk_labels\"] ]\n # our dataset_label have to be repeated \n dataset_label += [publication[\"dataset_label\"] for _ in publication[\"chunk_labels\"] ]\n\n with torch.no_grad(): # needed for memory\n\n t = tokenizer(chunked_text, padding=True, truncation=True, return_tensors=\"pt\").to(device)\n outputs = bert_model(**t)\n bert_input_word_embeddings = outputs[0].permute(1,0,2)\n del outputs\n torch.cuda.empty_cache()\n\n input_ids = t['input_ids'].permute(1,0)\n attention_mask = t['attention_mask']\n\n def encode(tgt):\n tgt_ids = vocab.get_ids_from_sentence(tgt)\n return tgt_ids\n \n # We will pre-tokenize the dataset labels (output) and save in id lists for later use\n output_ids = [encode(tgt) for tgt in dataset_label]\n output_ids = [torch.LongTensor(e) for e in output_ids]\n output_ids = pad_sequence(output_ids,padding_value=pad_id).to(device)\n\n # \"chunked_text\":chunked_text,\n # \"dataset_label\":dataset_label,\n return {\"input_ids\":input_ids, \"chunk_labels\":chunk_labels, \\\n \"output_ids\":output_ids, \"input_tensor\":bert_input_word_embeddings, \\\n 'attention_mask':attention_mask}",
"_____no_output_____"
],
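[
"# Hedged sketch (toy data, not the real targets): the dynamic padding that collate_fn\n# applies to the target id sequences via pad_sequence. PAD_ID and toy_targets are made up\n# for illustration only.\nimport torch\nfrom torch.nn.utils.rnn import pad_sequence\n\nPAD_ID = 0  # assumed padding id for this toy example\ntoy_targets = [torch.LongTensor([1, 32, 33, 2]),\n               torch.LongTensor([1, 180, 2]),\n               torch.LongTensor([1, 304, 305, 2073, 2])]\n\n# pad_sequence stacks the sequences into shape (max_len, batch_size), filling with PAD_ID\npadded = pad_sequence(toy_targets, padding_value=PAD_ID)\nprint(padded.shape)  # torch.Size([5, 3])\nprint(padded)",
"_____no_output_____"
],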
[
"# do not use, this is only for debugging\n# data = pd.read_csv(\"data.csv\")\n# with torch.no_grad():\n# t = tokenizer(data['text'].tolist()[0:16], padding=True, truncation=True, return_tensors=\"pt\").to(device)\n# outputs = bert_model(**t)\n# encoded_layers = outputs[0]\n# del outputs\n# torch.cuda.empty_cache()\n",
"_____no_output_____"
]
],
[
[
"## Seq2seq model \nUses Bert word embeddings\nMakes two predictions for each chunk",
"_____no_output_____"
]
],
[
[
"\nimport torch.nn as nn\nclass Seq2seq(nn.Module):\n def __init__(self, vocab, bert_dim = 300, emb_dim = 300, hidden_dim = 300, num_layers = 2, dropout=0.1):\n super().__init__()\n \"\"\"\n @author: Alexander Rodriguez\n \n bert_dim: dimension of Bert embeddings\n emb_dim: dimension of our word embedding (used in decoder)\n hidden_dim: dimension of our GRU hidden states\n \"\"\"\n \n self.bert_dim = bert_dim\n self.num_words = vocab.num_words\n self.emb_dim = emb_dim\n self.hidden_dim = hidden_dim\n self.num_layers = num_layers\n\n # neural layers\n self.embedding_layer = nn.Linear(1,self.emb_dim)\n self.encoder = nn.GRU(\n self.bert_dim,self.hidden_dim,self.num_layers,bidirectional=True,dropout=dropout\n )\n self.linear_hidden = nn.Linear(self.hidden_dim,self.hidden_dim)\n self.decoder = nn.GRU(\n self.emb_dim,self.hidden_dim,self.num_layers,bidirectional=False,dropout=dropout\n )\n self.output_layer = nn.Linear(self.hidden_dim,self.num_words)\n self.classifier = nn.Linear(self.hidden_dim, 1)\n self.attn_softmax = nn.Softmax(1) \n\n def encode(self, input_embeddings, attention_mask):\n \"\"\"Encode the source batch using a bidirectional GRU encoder.\n\n Args:\n input_embeddings: Bert embeddings with shape (max_input_sequence_length,\n batch_size,bert_dim), e.g. torch.Size([512, 16, 768])\n \n attention_mask: attention mask obtained from Bert tokenizer\n\n Returns:\n A tuple with three elements:\n encoder_output: The output hidden representation of the encoder \n with shape (max_input_sequence_length, batch_size, hidden_size).\n Can be obtained by adding the hidden representations of both \n directions of the encoder bidirectional GRU. \n encoder_mask: A boolean tensor with shape (max_input_sequence_length,\n batch_size) indicating which encoder outputs correspond to padding\n tokens. Its elements should be True at positions corresponding to\n padding tokens and False elsewhere.\n encoder_hidden: The final hidden states of the bidirectional GRU \n (after a suitable projection) that will be used to initialize \n the decoder. This should be a tensor h_n with shape \n (num_layers, batch_size, hidden_size). Note that the hidden \n state returned by the bi-GRU cannot be used directly. Its \n initial dimension is twice the required size because it \n contains state from two directions.\n \"\"\"\n\n batch_size = input_embeddings.shape[1]\n dtype = torch.float\n \n # gru pass\n encoder_output, encoder_hidden = self.encoder(input_embeddings) # seq_len first \n\n # sum embeddings from the two GRUs\n encoder_output = encoder_output[:,:,:self.hidden_dim] + encoder_output[:,:,self.hidden_dim:] \n\n # hidden embedding\n encoder_hidden = encoder_hidden.view(self.num_layers, 2, batch_size, self.hidden_dim)\n encoder_hidden = encoder_hidden.sum(1) # sum over bi-directional, keep number of layers\n encoder_hidden = self.linear_hidden(encoder_hidden)\n\n encoder_mask = attention_mask.permute(1,0)\n\n return encoder_output, encoder_mask, encoder_hidden\n\n\n\n def decode(self, decoder_input, last_hidden, encoder_output, encoder_mask, use_classifier=False):\n \"\"\"Run the decoder GRU for one decoding step from the last hidden state.\n\n Args:\n decoder_input: An integer tensor with shape (1, batch_size) containing \n the subword indices for the current decoder input.\n last_hidden: A pair of tensors h_{t-1} representing the last hidden\n state of the decoder, each with shape (num_layers, batch_size,\n hidden_size). 
For the first decoding step the last_hidden will be \n encoder's final hidden representation.\n encoder_output: The output of the encoder with shape\n (max_src_sequence_length, batch_size, hidden_size).\n encoder_mask: The output mask from the encoder with shape\n (max_src_sequence_length, batch_size). Encoder outputs at positions\n with a True value correspond to padding tokens and should be ignored.\n use_classifier: (boolean) Whether or not we should classify\n\n Returns:\n A tuple with three elements:\n logits: A tensor with shape (batch_size,\n vocab_size) containing unnormalized scores for the next-word\n predictions at each position.\n decoder_hidden: tensor h_n with the same shape as last_hidden \n representing the updated decoder state after processing the \n decoder input.\n attention_weights: This will be implemented later in the attention\n model, but in order to maintain compatible type signatures, we also\n include it here. This can be None or any other placeholder value.\n \"\"\"\n # shared layer\n dtype = torch.float\n input = decoder_input.type(dtype)\n input = self.embedding_layer(input.permute(1,0).unsqueeze(2))\n\n # attention weights\n max_src_sequence_length = encoder_output.shape[0]\n batch_size = encoder_output.shape[1]\n decoder_output, decoder_hidden = self.decoder(input.permute(1,0,2),last_hidden) \n # use the decoder output to get attention weights via dot-product\n attention_weights = torch.empty((batch_size,max_src_sequence_length),device=device,dtype=dtype)\n # function for batch dot product taken from https://discuss.pytorch.org/t/dot-product-batch-wise/9746/12\n def bdot(a, b):\n B = a.shape[0]\n S = a.shape[1]\n return torch.bmm(a.view(B, 1, S), b.view(B, S, 1)).reshape(-1)\n for i in range(max_src_sequence_length):\n attention_weights[:,i] = bdot(decoder_output.squeeze(0),encoder_output[i,:,:])\n # softmax\n attention_weights = self.attn_softmax(attention_weights)\n\n # get context vector\n context = torch.mul(encoder_output.permute(1,0,2), attention_weights.unsqueeze(2))\n context = context.sum(1)\n\n decoder_output = decoder_output.squeeze(0) + context\n # gru pass\n logits = self.output_layer(decoder_output)\n\n # use the attention context as input to the classifier along with\n # hidden states from encoder\n if use_classifier:\n out_classifier = self.classifier(last_hidden[0] + last_hidden[1] + context)\n else:\n out_classifier = torch.tensor(0.).to(device)\n \n return logits, decoder_hidden, attention_weights, out_classifier\n\n\n def compute_loss(self, input_tensor, attention_mask, target_seq, target_binary):\n \"\"\"Run the model on the source and compute the loss on the target.\n\n Args:\n input_tensor & attention_mask: \n Coming from Bert, directly go to encoder\n See encoder documentation for details\n\n target_seq: An integer tensor with shape (max_target_sequence_length,\n batch_size) containing subword indices for the target sentences.\n\n target_binary: Binary indicator for the chunk, indicates if\n the label is in that chunk (it's a list)\n NOTE: this is used as a mask for the sequence loss\n\n Returns:\n A scalar float tensor representing cross-entropy loss on the current batch\n divided by the number of target tokens in the batch.\n Many of the target tokens will be pad tokens. 
You should mask the loss \n from these tokens using appropriate mask on the target tokens loss.\n \"\"\"\n\n # loss criterion, ignoring pad id tokens\n criterion = nn.CrossEntropyLoss(ignore_index=pad_id,reduction='none')\n criterion_classification = nn.BCEWithLogitsLoss(reduction='sum')\n \n # call encoder\n encoder_output, encoder_mask, encoder_hidden = self.encode(input_tensor, attention_mask)\n\n # decoder\n max_target_sequence_length = target_seq.shape[0]\n last_hidden = encoder_hidden\n total_loss = torch.tensor(0.).to(device)\n target_binary = torch.tensor(target_binary,dtype=torch.float).to(device)\n for i in range(max_target_sequence_length-1):\n decoder_input = target_seq[[i],]\n # do a forward pass over classifier only for the first \n use_classifier = True if i==0 else False \n logits, decoder_hidden, attention_weights, out_classifier = self.decode(decoder_input, last_hidden, encoder_output, encoder_mask, use_classifier)\n # target_binary serves as a mask for the loss\n # we only care about the predicted sequence when we should\n total_loss += (criterion(logits,target_seq[i+1,]) * target_binary).sum() \n # get classification loss only for the first one (which is where out_classifier is meaningful)\n if use_classifier:\n class_loss = criterion_classification(out_classifier.view(-1),target_binary)\n # now we have to make last_hidden to be hidden embedding of gru\n last_hidden = decoder_hidden\n # denominator of loss\n total_target_tokens = torch.sum(target_seq != pad_id).cpu()\n return total_loss/total_target_tokens + class_loss\n",
"_____no_output_____"
],
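[
"# Hedged sketch: a self-contained illustration of the dot-product attention step inside\n# Seq2seq.decode() above. All tensors are random toy values; only torch.bmm and softmax\n# are relied on.\nimport torch\n\nsrc_len, batch_size, hidden = 7, 2, 4\nencoder_output = torch.randn(src_len, batch_size, hidden)  # (src_len, batch, hidden)\ndecoder_state = torch.randn(batch_size, hidden)            # current decoder output, (batch, hidden)\n\n# score every source position with a batched dot product against the decoder state:\n# (batch, src_len, hidden) x (batch, hidden, 1) -> (batch, src_len)\nscores = torch.bmm(encoder_output.permute(1, 0, 2), decoder_state.unsqueeze(2)).squeeze(2)\nweights = torch.softmax(scores, dim=1)  # attention weights sum to 1 over src_len\n\n# context vector: attention-weighted sum of encoder outputs, shape (batch, hidden)\ncontext = (encoder_output.permute(1, 0, 2) * weights.unsqueeze(2)).sum(dim=1)\nprint(weights.shape, context.shape)  # torch.Size([2, 7]) torch.Size([2, 4])",
"_____no_output_____"
],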
[
"import tqdm\ndef train(model, data_loader, num_epochs, model_file, learning_rate=0.0001):\n \"\"\"Train the model for given number of epochs and save the trained model in \n the final model_file.\n \"\"\"\n\n decoder_learning_ratio = 5.0\n \n encoder_parameter_names = ['embedding_layer','encoder','linear_hidden'] \n \n encoder_named_params = list(filter(lambda kv: any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))\n decoder_named_params = list(filter(lambda kv: not any(key in kv[0] for key in encoder_parameter_names), model.named_parameters()))\n encoder_params = [e[1] for e in encoder_named_params]\n decoder_params = [e[1] for e in decoder_named_params]\n optimizer = torch.optim.AdamW([{'params': encoder_params},\n {'params': decoder_params, 'lr': learning_rate * decoder_learning_ratio}], lr=learning_rate)\n \n clip = 50.0\n for epoch in tqdm.notebook.trange(num_epochs, desc=\"training\", unit=\"epoch\"):\n # print(f\"Total training instances = {len(train_dataset)}\")\n # print(f\"train_data_loader = {len(train_data_loader)} {1180 > len(train_data_loader)/20}\")\n with tqdm.notebook.tqdm(\n data_loader,\n desc=\"epoch {}\".format(epoch + 1),\n unit=\"batch\",\n total=len(data_loader)) as batch_iterator:\n model.train()\n total_loss = 0.0\n for i, batch_data in enumerate(batch_iterator, start=1):\n input_tensor = batch_data[\"input_tensor\"]\n attention_mask = batch_data[\"attention_mask\"]\n output_ids = batch_data[\"output_ids\"]\n target_binary = batch_data[\"chunk_labels\"]\n optimizer.zero_grad()\n loss = model.compute_loss(input_tensor, attention_mask, output_ids,target_binary)\n total_loss += loss.item()\n loss.backward()\n # Gradient clipping before taking the step\n _ = nn.utils.clip_grad_norm_(model.parameters(), clip)\n optimizer.step()\n\n batch_iterator.set_postfix(mean_loss=total_loss / i, current_loss=loss.item())\n # Save the model after training \n torch.save(model.state_dict(), model_file)",
"_____no_output_____"
],
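[
"# Hedged sketch: the two-parameter-group AdamW pattern used in train() above, where the\n# decoder gets a learning rate 5x larger than the encoder. The tiny ModuleDict is a\n# stand-in for the real Seq2seq model, not the author's code.\nimport torch\nimport torch.nn as nn\n\ntoy = nn.ModuleDict({'encoder': nn.Linear(8, 8), 'decoder': nn.Linear(8, 8)})\nenc_params = list(toy['encoder'].parameters())\ndec_params = list(toy['decoder'].parameters())\n\nbase_lr, decoder_ratio = 1e-4, 5.0\noptimizer = torch.optim.AdamW([\n    {'params': enc_params},                                 # uses the default lr below\n    {'params': dec_params, 'lr': base_lr * decoder_ratio}   # decoder learns faster\n], lr=base_lr)\nprint([group['lr'] for group in optimizer.param_groups])    # expected: 0.0001 and 0.0005",
"_____no_output_____"
],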
[
"# Create the DataLoader for all publications\ndataset = ChunkedDataset(publication_ids[0:2000], chunked_text[0:2000], chunk_labels[0:2000], dataset_label[0:2000], device, tokenizer, bert_model)\nbatch_size = 4 # this means it's 4 publications per batch ---too large may not fit in GPU memory\ndata_loader = DataLoader(dataset=dataset, batch_size=batch_size, \n shuffle=True, collate_fn=collate_fn)",
"_____no_output_____"
],
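[
"# Hedged sketch: a toy Dataset/DataLoader pair showing how a batch of 'publications'\n# (each a list of chunks) is flattened by a custom collate_fn, mirroring the behaviour of\n# ChunkedDataset/collate_fn above. All data here is synthetic.\nfrom torch.utils.data import Dataset, DataLoader\n\nclass ToyPublications(Dataset):\n    def __init__(self):\n        self.pubs = [['pub0-chunk0', 'pub0-chunk1'],\n                     ['pub1-chunk0'],\n                     ['pub2-chunk0', 'pub2-chunk1', 'pub2-chunk2']]\n    def __len__(self):\n        return len(self.pubs)\n    def __getitem__(self, idx):\n        return self.pubs[idx]\n\ndef flatten_collate(batch):\n    # one training example per chunk, however many publications the batch contains\n    return [chunk for publication in batch for chunk in publication]\n\nloader = DataLoader(ToyPublications(), batch_size=2, shuffle=False, collate_fn=flatten_collate)\nfor batch in loader:\n    print(batch)\n# ['pub0-chunk0', 'pub0-chunk1', 'pub1-chunk0']\n# ['pub2-chunk0', 'pub2-chunk1', 'pub2-chunk2']",
"_____no_output_____"
],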
[
"# You are welcome to adjust these parameters based on your model implementation.\nnum_epochs = 10\nmodel = Seq2seq(vocab,bert_dim=768,emb_dim=256,hidden_dim=256,num_layers=2).to(device)\ntrain(model, data_loader, num_epochs, \"bert_word_seq2seq_model_2.pt\")\n# Download the trained model to local for future use\n",
"_____no_output_____"
],
[
"x = next(iter(data_loader))\nprint(x[\"output_ids\"])\n",
"tensor([[ 1, 1, 1, 1, 1, 1, 1, 1],\n [ 32, 32, 180, 180, 304, 304, 3313, 3313],\n [ 33, 33, 2, 2, 305, 305, 73, 73],\n [ 42, 42, 0, 0, 2073, 2073, 2708, 2708],\n [ 43, 43, 0, 0, 2074, 2074, 3314, 3314],\n [ 44, 44, 0, 0, 2075, 2075, 31, 31],\n [ 180, 180, 0, 0, 2, 2, 3315, 3315],\n [ 2, 2, 0, 0, 0, 0, 489, 489],\n [ 0, 0, 0, 0, 0, 0, 2, 2]], device='cuda:0')\n"
]
],
[
[
"## Evaluation\nThis come is from Alex Wang, I haven't checked it.",
"_____no_output_____"
],
[
"Load model",
"_____no_output_____"
]
],
[
[
"model = Seq2seq(vocab,bert_dim=768,emb_dim=256,hidden_dim=256,num_layers=2).to(device)\nmodel.load_state_dict(torch.load(\"bert_word_seq2seq_model_2.pt\"))",
"_____no_output_____"
],
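[
"# Hedged sketch: loading the same checkpoint on a machine without a GPU. map_location\n# moves the saved CUDA tensors to CPU; the file name matches the one written by train().\nimport torch\n\nstate = torch.load('bert_word_seq2seq_model_2.pt', map_location=torch.device('cpu'))\nprint(type(state), len(state))  # an OrderedDict of parameter tensors\n# model.load_state_dict(state) would then restore the weights, as in the cell above.",
"_____no_output_____"
],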
[
"print(chunked_text[0])",
"[\"Introduction: The heterogeneity of behavioral variant frontotemporal dementia (bvFTD) calls for multivariate imaging biomarkers. Methods: We studied a total of 148 dementia patients from the Feinstein Institute (Center-A: 25 bvFTD and 10 Alzheimer's disease), Technical University of Munich (Center-B: 44 bvFTD and 29 FTD language variants), and Alzheimer's Disease Neuroimaging Initiative (40 Alzheimer's disease subjects). To identify the covariance pattern of bvFTD (behavioral variant frontotemporal dementiarelated pattern [bFDRP]), we applied principal component analysis to combined 18F-fluorodeoxyglucose-positron emission tomography scans from bvFTD and healthy subjects. The phenotypic specificity and clinical correlates of bFDRP expression were assessed in independent testing sets. The bFDRP was identified in Center-A data (24. 1% of subject ! voxel variance; P ,. 001), reproduced in Center-B data (P ,. 001), and independently validated using combined testing data (receiver operating characteristics-area under the curve 5 0. 97; P ,. 0001). The expression of bFDRP was specifically elevated in bvFTD patients (P ,. 001) and was significantly higher at more advanced disease stages (P 5. 035:duration; P ,. 01:severity). Discussion: The bFDRP can be used as a quantitative imaging marker to gauge the underlying disease process and aid in the differential diagnosis of bvFTD. Behavioral variant frontotemporal dementia; Spatial covariance pattern; Differential diagnosis; Quantitative imaging biomarker; FDG PET Dr. Eidelberg serves on the scientific advisory board and has received honoraria from The Michael J. Fox Foundation for Parkinson's Research; is listed as coinventor of patents, re: Markers for use in screening patients for nervous system dysfunction and a method and apparatus for using same, without financial gain; and has received research support from the NIH (NINDS, NIDCD, and NIAID) and the Dana Foundation. All other authors have declared that no conflict of interest exists. 1 Some of the data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni. lo ni. usc. edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at Behavioral variant frontotemporal dementia (bvFTD) is the most common clinical phenotype of frontotemporal lobar degeneration (FTLD), a leading cause of dementia in midlife [1]. This syndrome is characterized by progressive impairment of personal and social behavior, as well as emotional, language, and executive functions [1]. However, similar symptoms are also seen in various other psychiatric and neurodegenerative disorders, particularly Alzheimer's disease (AD), making accurate diagnosis of bvFTD challenging [1] , especially at early stages of the disease [2]. Overall, the accuracy of clinical diagnosis of dementia has been improved with the study of 18 F-fluorodeoxyglucose (FDG) positron emission tomography (PET) brain scans [3] , as suggested by the diagnostic criteria for bvFTD [4] and AD [5]. 
\", ' However, the considerable individual variability in neuroanatomical involvement seen in bvFTD patients [6] [7] [8] restricts the use of regional and univariate analytical approaches for early and accurate detection of this disorder [2, 7, 9] , calling for the identification and standardization of multivariate quantitative imaging biomarkers [10, 11] for this dementia syndrome [12] A multivariate brain mapping approach, based on principal component analysis (PCA), has been applied to FDG PET data for several neurodegenerative disorders to identify disease-related spatial covariance patterns [13] [14] [15]. The expression of such metabolic signatures [10, 13] can be quantified in the scan data of prospective individual subjects [14, 15] and thus has been used to aid in early differential diagnosis, predict disease progression, and track response to therapy [13]. Nonetheless, to date, a metabolic covariance pattern has not been determined for bvFTD. The main objective of this study was to identify and characterize the bvFTD metabolic covariance pattern (bvFTD-related pattern [bFDRP] ) and assess its performance as an imaging marker for bvFTD. Our basic hypothesis was that bFDRP can classify independent bvFTD patients from healthy controls. Specifically, we identified bFDRP in a North American sample, cross-validated its reproducibility in a pathologyconfirmed European sample, and assessed its clinical correlates and classification performance for early-stage dementia. ']\n"
],
[
"print(sent)",
"National Education Longitudinal Study\n"
],
[
"def predict_greedy(model, sentence, max_length=100):\n \"\"\"Make predictions for the given input using greedy inference.\n \n Args:\n model: A sequence-to-sequence model.\n sentence: A input string.\n max_length: The maximum length at which to truncate outputs in order to\n avoid non-terminating inference.\n \n Returns:\n Model's predicted greedy response for the input, represented as string.\n \"\"\"\n\n # You should make only one call to model.encode() at the start of the function, \n # and make only one call to model.decode() per inference step.\n with torch.no_grad(): # needed for memory\n\n t = tokenizer(sentence, padding=True, truncation=True, return_tensors=\"pt\").to(device)\n outputs = bert_model(**t)\n bert_input_word_embeddings = outputs[0].permute(1,0,2)\n del outputs\n torch.cuda.empty_cache()\n\n input_ids = t['input_ids'].permute(1,0)\n attention_mask = t['attention_mask']\n \n \n model.eval()\n model.encode(bert_input_word_embeddings,attention_mask)\n encoder_output, encoder_mask, encoder_hidden = model.encode(bert_input_word_embeddings, attention_mask)\n\n last_hidden = encoder_hidden\n\n start = bos_id\n sent = [start]\n i = 0\n while start != eos_id and i < 100:\n use_classifier = True if i==0 else False\n start = torch.unsqueeze(torch.tensor(start).cuda(), 0)\n logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), last_hidden, encoder_output, encoder_mask, use_classifier)\n start = torch.argmax(logits[0], 0)\n last_hidden = decoder_hidden\n sent.append(start.item())\n i += 1\n if use_classifier:\n if out_classifier < -1:\n return False\n\n sent = vocab.decode_sentence_from_ids(sent)\n return sent\n\n#predictions = []\n#for i in range(100):\n# temp = []\n# for j in range(len(chunked_text[i])):\n# a = predict_greedy(model, chunked_text[i][j])\n# temp.append(a)\n# predictions.append(temp)\n# print(dataset_label[i])\n# print(temp)\nscore = 0\ndef jaccard(str1, str2): \n a = set(str1.lower().split()) \n b = set(str2.lower().split())\n c = a.intersection(b)\n return float(len(c)) / (len(a) + len(b) - len(c))\n\n",
"_____no_output_____"
],
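[
"# Hedged sketch: the token-level Jaccard similarity (same definition as above) applied to\n# one prediction/ground-truth pair; the 0.5 threshold mirrors the matching rule used in\n# the scoring cells further down.\ndef jaccard(str1, str2):\n    a = set(str1.lower().split())\n    b = set(str2.lower().split())\n    c = a.intersection(b)\n    return float(len(c)) / (len(a) + len(b) - len(c))\n\npred = 'baltimore longitudinal study of aging'\ngold = 'baltimore longitudinal study of aging blsa'\nsim = jaccard(pred, gold)  # 5 shared tokens out of 6 distinct tokens\nprint(round(sim, 3), sim >= 0.5)  # 0.833 True",
"_____no_output_____"
],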
[
"predictions[40]",
"_____no_output_____"
],
[
"def predict_beam(model, sentence, k=3, max_length=100, thresh=-9999):\n \"\"\"Make predictions for the given inputs using beam search.\n \n Args:\n model: A sequence-to-sequence model.\n sentence: An input sentence, represented as string.\n k: The size of the beam.\n max_length: The maximum length at which to truncate outputs in order to\n avoid non-terminating inference.\n \n Returns:\n A list of k beam predictions. Each element in the list should be a string\n corresponding to one of the top k predictions for the corresponding input,\n sorted in descending order by its final score.\n \"\"\"\n\n # Implementation tip: once an eos_token has been generated for any beam, \n # remove its subsequent predictions from that beam by adding a small negative \n # number like -1e9 to the appropriate logits. This will ensure that the \n # candidates are removed from the beam, as its probability will be very close\n # to 0. Using this method, uou will be able to reuse the beam of an already \n # finished candidate\n\n # Implementation tip: while you are encouraged to keep your tensor dimensions\n # constant for simplicity (aside from the sequence length), some special care\n # will need to be taken on the first iteration to ensure that your beam\n # doesn't fill up with k identical copies of the same candidate.\n \n # You are welcome to tweak alpha\n alpha = 0.9\n with torch.no_grad(): # needed for memory\n\n t = tokenizer(sentence, padding=True, truncation=True, return_tensors=\"pt\").to(device)\n outputs = bert_model(**t)\n bert_input_word_embeddings = outputs[0].permute(1,0,2)\n del outputs\n torch.cuda.empty_cache()\n\n input_ids = t['input_ids'].permute(1,0)\n attention_mask = t['attention_mask']\n model.eval()\n model.encode(bert_input_word_embeddings,attention_mask)\n encoder_output, encoder_mask, encoder_hidden = model.encode(bert_input_word_embeddings, attention_mask)\n\n last_hidden = encoder_hidden\n\n start = bos_id\n sent = [start]\n i = 0\n start = bos_id \n beams = []\n start = torch.unsqueeze(torch.tensor(start).cuda(), 0)\n logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), last_hidden, encoder_output, encoder_mask, 1)\n\n if out_classifier < -2:\n return False\n\n out = torch.log_softmax(logits[0], 0)\n values, start = torch.topk(out, k, 0)\n for i in range(len(values)):\n # Each beam contains the log probs at its first index and the hidden states at its last index\n beams.append([values[i], start[i].item(), decoder_hidden])\n generation = []\n i = 0\n while i < k:\n curr = []\n for j in beams:\n start = torch.unsqueeze(torch.tensor(j[-2]).cuda(), 0)\n logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), j[-1], encoder_output, encoder_mask, 0)\n \n out = torch.log_softmax(logits[0], 0)\n\n values, start = torch.topk(out, k, 0)\n for z in range(len(values)):\n temp = j.copy()\n temp[0] = values[z] + temp[0]\n temp.insert(-1, start[z].item())\n temp[-1] = decoder_hidden\n curr.append(temp)\n curr = sorted(curr,reverse=True, key=lambda x: x[0])\n curr = curr[0:k - i]\n beams = []\n for j in curr:\n if j[-2] == eos_id or len(j) > 20:\n generation.append(j[:-1])\n i +=1\n else:\n beams.append(j)\n final = []\n generation = sorted(generation, reverse=True, key=lambda x: x[0]/(len(x)-1)**alpha)\n #for i in generation:\n\n # if i[0].item() > thresh:\n final.append(vocab.decode_sentence_from_ids(generation[0][1:]).lower())\n return final\n",
"_____no_output_____"
],
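[
"# Hedged sketch: the length normalization used when ranking finished beams in\n# predict_beam() above, i.e. summed log-probability divided by length**alpha. The scores\n# below are toy values, not model output.\nalpha = 0.9\nfinished = [\n    (-1.2, ['adni']),                          # short candidate, higher total log-prob\n    (-3.0, ['census', 'of', 'agriculture']),   # longer candidate, lower total log-prob\n]\n\ndef normalized(score, tokens):\n    # dividing by length**alpha keeps longer sequences from being unfairly penalized\n    return score / (len(tokens) ** alpha)\n\nranked = sorted(finished, key=lambda sb: normalized(*sb), reverse=True)\nfor score, tokens in ranked:\n    print(round(normalized(score, tokens), 3), ' '.join(tokens))\n# -1.116 census of agriculture\n# -1.2 adni",
"_____no_output_____"
],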
[
"predictions = []\nfor i in range(2000):\n temp = []\n for j in chunked_text[i]:\n x = predict_beam(model, j)\n if x:\n temp.append(x[0])\n \n predictions.append(temp)",
"_____no_output_____"
],
[
"print(len(predictions))",
"2000\n"
],
[
"score = 0\nfor i in range(2000):\n for j in predictions[i]:\n found = False\n if jaccard(df.loc[i][5], j) > 0.5:\n score += 1\n found = True\n break\n \n\nprint(\"max accuracy\")\nprint(score/2000)",
"max accuracy\n0.7275\n"
],
[
"print(df.loc[5][5])",
"adni\n"
],
[
"testing = {}\nfor i in range(0, len(predictions)):\n if publication_ids[i] not in testing.keys():\n pred = predictions[i]\n \n \n testing[publication_ids[i]] = (pred, [df.loc[i][5]])\n else:\n testing[publication_ids[i]][1].append(df.loc[i][5])",
"_____no_output_____"
],
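[
"# Hedged sketch: the same per-publication grouping as above, written with a defaultdict.\n# Toy ids/labels stand in for publication_ids and df.loc[i][5]; note that, unlike the cell\n# above (which keeps only the first row's predictions per publication), this variant\n# collects predictions from every row.\nfrom collections import defaultdict\n\ntoy_pub_ids = ['p1', 'p1', 'p2']\ntoy_preds = [['adni'], [], ['census of agriculture']]\ntoy_labels = ['adni', 'alzheimer s disease neuroimaging initiative adni', 'census of agriculture']\n\ngrouped = defaultdict(lambda: ([], []))\nfor pub_id, preds, label in zip(toy_pub_ids, toy_preds, toy_labels):\n    grouped[pub_id][0].extend(preds)   # predictions for that publication\n    grouped[pub_id][1].append(label)   # ground-truth labels for that publication\nprint(dict(grouped))",
"_____no_output_____"
],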
[
"print(len(testing.keys()))",
"1761\n"
],
[
"tp = 0\nfp = 0\nfn = 0\nfor i in testing.values():\n prediction = set(i[0])\n cop = prediction.copy()\n true_pred = i[1].copy()\n check = False\n #check exact match first\n for j in prediction:\n if j in true_pred:\n tp += 1\n true_pred.remove(j)\n cop.remove(j)\n #then check rest for jaccard score\n for j in cop:\n found = False\n removal = 0\n for k in true_pred:\n if jaccard(j, k) >= 0.5:\n found = True\n removal = k\n break\n if found:\n tp += 1\n true_pred.remove(removal)\n else:\n fp += 1\n fn += len(true_pred)",
"_____no_output_____"
]
],
[
[
"TRAINING PERFORMANCE",
"_____no_output_____"
]
],
[
[
"print(\"training performance\")\nprint(\"micro F score\")\nprint(fp)\nprint(fn)\nprint(tp/(tp + 1/2*(fp+fn)))\nprint(\"accuracy\")\nprint(tp/(tp+fn))",
"training performance\nmicro F score\n383\n567\n0.7510482180293501\naccuracy\n0.7165\n"
],
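[
"# Hedged sketch: relating the micro-averaged F-score printed above to precision and\n# recall. tp is not printed directly; 1433 is inferred from the reported accuracy\n# (tp / (tp + fn) = 0.7165 with fn = 567), so treat it as an assumption.\ntp, fp, fn = 1433, 383, 567\nprecision = tp / (tp + fp)\nrecall = tp / (tp + fn)\nf1 = 2 * precision * recall / (precision + recall)\nprint(f'precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}')\n# precision=0.7891 recall=0.7165 f1=0.7510 -- f1 matches the micro F score printed above",
"_____no_output_____"
],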
[
"print(len(df))",
"3284\n"
],
[
"predictions = []\nfor i in range(2000, 3000):\n temp = []\n for j in chunked_text[i]:\n x = predict_beam(model, j)\n if x:\n temp.append(x[0])\n \n predictions.append(temp)",
"_____no_output_____"
],
[
"print(predictions)",
"[['adni'], ['adni'], ['adni', 'adni'], ['trends in international mathematics and science study'], ['adni'], [], ['adni'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['baltimore longitudinal study of aging'], ['coastal change analysis program'], ['census change agriculture', 'agricultural resource management survey'], ['adni', 'adni'], ['genome sequence of sars cov', 'covid open study of'], ['early childhood longitudinal study'], ['adni'], [], ['adni'], ['adni'], ['agricultural resource management survey', 'census of agriculture'], ['adni s disease neuroimaging initiative adni'], ['early childhood longitudinal study'], ['adni s disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging', 'early childhood longitudinal study'], ['national education longitudinal'], ['adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['covid open research dataset'], ['early childhood longitudinal study'], ['adni', 'adni'], ['adni'], ['adni'], ['agricultural resource management survey'], ['national education longitudinal study', 'national education longitudinal'], ['coastal change analysis program'], [], ['early childhood longitudinal study'], ['adni'], ['baccalaureate and beyond study and'], ['adni', 'adni'], ['adni'], ['covid open study mathematics'], ['baltimore of study of aging blsa'], ['adni', 'adni'], ['adni', 'adni'], ['trends in international mathematics and science study'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['genome'], ['adni'], [], ['census of agriculture', 'census of agriculture'], [], ['adni'], ['adni', 'adni'], [], [], ['adni s disease neuroimaging initiative adni'], [], ['adni'], ['adni', 'adni'], [], ['adni'], ['adni'], ['adni'], ['trends in international mathematics and science study'], ['adni', 'adni', 'adni'], [], ['adni', 'adni'], ['adni', 'adni'], ['adni'], ['north american breeding bird survey'], ['adni'], ['adni', 'adni'], ['adni'], ['adni', 'adni'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni'], ['adni'], ['baltimore longitudinal study of aging'], ['adni s disease neuroimaging initiative adni', 'adni'], ['trends in international mathematics and science study'], ['adni'], ['adni'], ['adni', 'adni'], ['adni', 'adni s disease neuroimaging initiative adni'], ['adni'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['agricultural resource management survey'], ['trends for international mathematics'], ['adni'], ['adni'], ['baltimore'], ['early childhood longitudinal study'], ['trends in international mathematics and science study'], ['national education longitudinal study'], ['trends in international mathematics and science study'], ['adni'], ['adni', 'adni'], ['genome sequence of sars'], ['adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['north american breeding bird survey', 'north resource breeding bird survey'], ['adni', 'adni'], ['adni', 'adni'], ['adni'], ['census of agriculture'], ['adni', 'adni'], ['early childhood longitudinal study'], ['trends in international mathematics and science study'], ['early childhood longitudinal study'], ['adni'], ['adni'], ['agricultural resource management survey'], ['national education longitudinal study'], ['adni', 'adni'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['covid open research dataset'], [], ['baccalaureate and beyond study and'], ['early childhood longitudinal study'], ['adni'], 
['adni', 'adni'], ['adni'], ['adni'], [], ['adni'], [], ['adni'], ['trends in international mathematics and science adult competencies'], [], ['adni'], ['adni', 'adni'], ['baltimore longitudinal study of aging blsa'], ['north american breeding bird survey'], ['north american breeding bird survey', 'north american breeding bird survey'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['early childhood longitudinal study'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['genome sequence of sars cov'], ['national childhood longitudinal'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], [], ['national longitudinal longitudinal study'], ['adni'], ['early childhood longitudinal study'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni'], ['baltimore longitudinal study of aging blsa'], ['trends in international mathematics and science study'], ['adni'], ['trends in international mathematics and science study'], ['adni'], ['education education longitudinal study'], ['adni', 'adni'], ['adni', 'adni'], ['trends and international mathematics and science study'], ['trends in international mathematics and science study'], ['adni'], ['adni', 'early childhood study of aging adni'], ['adni'], ['north american breeding bird survey'], ['adni'], ['adni'], [], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], [], ['adni s disease neuroimaging initiative adni'], ['coastal of of'], ['census of agriculture'], ['baccalaureate and beyond study and'], [], ['adni'], ['adni'], ['early childhood longitudinal study'], ['beginning postsecondary students study and'], ['adni'], ['adni'], [], ['baccalaureate and beyond'], ['program for the of assessment'], ['adni'], ['north american breeding bird survey'], ['survey of doctorate recipients'], ['trends in international mathematics and science study', 'trends in international mathematics and science study', 'trends in international mathematics and science study'], ['national education longitudinal study'], ['trends in international mathematics and science'], ['coastal change analysis program'], ['adni', 'adni'], [], ['national education longitudinal study'], ['trends in international mathematics and science study'], ['adni'], ['adni', 'adni'], ['adni'], ['adni', 'adni'], ['adni longitudinal study neuroimaging aging adni'], ['adni s disease neuroimaging initiative adni', 'adni s disease neuroimaging initiative adni'], ['early childhood longitudinal study'], ['baltimore longitudinal study of aging blsa'], ['early childhood longitudinal study', 'early childhood longitudinal study'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['survey of doctorate recipients'], ['beginning postsecondary students'], ['trends in international mathematics and science study'], ['adni'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['genome'], ['baltimore longitudinal study of aging blsa'], ['beginning postsecondary students study and'], ['adni'], ['national education longitudinal study'], ['adni'], ['adni', 'adni'], ['adni'], ['beginning postsecondary students study and'], ['adni', 'adni'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['ibtracs'], ['alzheimer in disease neuroimaging initiative adni', 'adni'], ['agricultural resource management survey'], ['adni', 'adni'], [], ['adni'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni'], ['early childhood longitudinal study', 'early childhood longitudinal 
study'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['adni'], ['trends of international mathematics'], ['adni', 'coastal'], ['trends in international mathematics and science study'], ['adni'], ['national'], ['adni'], ['trends in international mathematics and science study'], ['covid open research dataset'], ['adni'], ['trends in international mathematics and science study'], ['genome world of sars'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni s disease neuroimaging initiative adni'], ['adni'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['adni'], [], ['survey of doctorate recipients'], ['survey of doctorate recipients'], ['baltimore', 'adni'], ['agricultural resource management survey'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging'], ['trends in international mathematics and science study'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging'], ['survey and on survey and science', 'national in international'], ['adni s disease neuroimaging initiative adni'], ['genome sequence of sars'], ['trends in international mathematics and science study'], ['survey of doctorate recipients'], ['adni', 'adni'], ['baltimore longitudinal study of aging'], ['adni'], ['adni', 'adni'], ['adni', 'adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging'], ['early childhood longitudinal study'], ['trends in international mathematics and science study'], ['adni'], ['adni'], ['trends for international mathematics assessment science adult competencies'], ['adni'], ['adni'], ['covid open research dataset'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['adni'], ['survey of doctorate recipients'], ['adni'], ['baltimore'], ['survey of doctorate recipients', 'survey of doctorate recipients'], ['early childhood longitudinal study'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni'], ['adni'], ['adni', 'adni', 'adni of disease of initiative adni'], ['agricultural resource management survey'], ['adni'], ['beginning postsecondary students', 'survey of doctorate recipients', 'baccalaureate childhood students', 'beginning postsecondary students study'], ['early childhood longitudinal study'], ['trends in international mathematics and science study', 'early childhood longitudinal study'], ['early childhood longitudinal study'], ['early childhood longitudinal study'], ['early childhood longitudinal study'], ['agricultural resource management survey'], ['adni', 'adni', 'adni'], ['survey of doctorate recipients'], ['adni', 'adni', 'adni'], ['baltimore longitudinal study of aging blsa'], ['adni longitudinal disease neuroimaging initiative adni'], ['survey of earned doctorates'], ['adni'], ['survey of doctorate recipients'], [], [], ['adni', 'adni'], ['adni', 'adni'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni'], ['early childhood longitudinal study'], ['trends in international mathematics and science study', 'trends in international mathematics and science study'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['adni'], ['adni'], ['adni'], ['adni'], ['baccalaureate and beyond study and'], ['trends in international mathematics and science study'], ['adni'], ['census of agriculture'], ['our'], ['baltimore longitudinal study of aging blsa'], [], ['genome sequence of 
sars cov'], ['adni'], ['adni'], ['adni'], ['adni'], ['adni', 'adni'], ['baltimore longitudinal study of initiative', 'adni'], ['adni', 'adni'], [], ['national education students'], ['adni'], [], ['covid open research dataset'], ['agricultural resource management survey'], ['early childhood longitudinal study'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['census of agriculture'], ['baltimore longitudinal study of aging blsa'], ['world ocean database'], ['adni'], ['adni', 'adni'], ['national education longitudinal study'], ['adni', 'adni'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging blsa'], ['adni', 'adni'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['adni'], ['adni'], [], [], [], ['adni'], [], ['adni s disease neuroimaging initiative adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['national education students study'], [], ['north american breeding bird survey'], ['national education longitudinal study'], ['baltimore'], ['early childhood longitudinal study'], ['adni'], ['adni'], ['adni', 'adni'], ['adni', 'adni s disease neuroimaging initiative adni'], [], ['adni', 'adni'], ['baltimore longitudinal disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging blsa'], [], ['adni'], ['agricultural resource management survey', 'agricultural resource management survey', 'agricultural resource management survey'], [], [], ['baltimore longitudinal study of aging'], ['genome sequence of sars cov adni'], ['trends in international mathematics assessment science study competencies', 'program for international mathematics assessment science adult competencies'], ['national education longitudinal', 'beginning childhood longitudinal'], ['baccalaureate postsecondary beyond study and'], ['coastal change analysis survey'], ['baltimore longitudinal study of aging'], ['trends in international mathematics and science study', 'trends in international mathematics and science study'], ['adni', 'adni'], ['baltimore longitudinal study study aging'], ['trends in international mathematics and science study'], ['genome open of sars'], ['trends in international mathematics and science study'], ['genome sequence of sars cov'], ['adni'], ['trends in international mathematics and science study', 'trends in international mathematics and science study'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['adni'], ['adni'], ['baltimore longitudinal disease neuroimaging initiative adni'], ['national education longitudinal'], ['early childhood longitudinal study'], ['baltimore longitudinal study of aging blsa'], [], ['baltimore longitudinal study of aging'], ['adni'], ['trends in international mathematics and science study'], ['adni'], ['north american breeding bird survey'], ['trends in international mathematics and science study'], ['adni'], ['adni'], ['national of doctorate recipients'], ['early childhood longitudinal study'], ['adni s disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['adni', 'adni'], ['adni'], [], ['genome sequence of sars'], ['adni'], [], [], ['adni'], ['adni of of', 'early childhood study study'], ['adni'], ['census of agriculture', 'agricultural resource management survey'], ['north american breeding bird survey'], [], ['adni'], ['adni', 'adni'], ['trends in international mathematics and science study'], ['adni', 
'adni'], ['adni s disease neuroimaging initiative adni'], ['adni', 'ibtracs'], ['ibtracs'], ['adni', 'adni'], [], ['baltimore longitudinal study of aging blsa'], ['census of agriculture', 'agricultural resource management survey'], ['adni', 'adni'], ['adni'], ['adni'], ['baltimore longitudinal study bird'], ['survey of doctorate recipients', 'survey of doctorate recipients'], ['adni'], ['adni'], [], ['national education study'], ['adni'], [], ['adni'], ['adni'], ['beginning postsecondary students study and'], ['north american breeding bird survey'], ['adni', 'adni'], ['adni'], [], ['our open international sars'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni'], ['adni'], ['adni'], ['baltimore'], ['adni', 'adni'], ['adni'], ['genome cov of sars'], ['adni', 'adni'], ['adni'], ['adni'], ['adni', 'adni', 'adni'], ['beginning postsecondary students study'], ['baltimore longitudinal study of aging'], ['adni', 'adni', 'adni', 'adni', 'adni', 'adni'], ['adni'], ['adni'], ['adni'], ['adni'], ['slosh for international dataset'], ['adni'], ['trends in international mathematics and science study'], ['adni'], ['census of agriculture of'], ['adni', 'adni'], ['slosh', 'coastal', 'slosh model international program', 'coastal model analysis program'], ['ibtracs for track archive', 'ibtracs change analysis archive'], [], ['adni'], ['adni'], ['agricultural resource management survey'], ['adni'], ['adni'], ['trends in international mathematics and science study', 'trends in international mathematics and science study'], ['adni'], ['survey and on survey and science'], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['early childhood longitudinal study'], ['trends in international mathematics and science study'], ['baltimore longitudinal study of aging blsa'], ['agricultural resource management survey'], ['adni'], ['trends in international mathematics and science study'], ['baltimore longitudinal study of aging blsa'], ['baltimore longitudinal study of aging'], ['early childhood longitudinal study'], ['early childhood longitudinal study'], ['national education longitudinal study'], ['trends in international mathematics and science study'], ['adni'], ['early childhood longitudinal study'], ['adni'], ['adni'], ['adni'], ['baccalaureate and beyond study and', 'baccalaureate and beyond study and'], ['adni', 'adni'], ['adni'], ['early childhood longitudinal study', 'early childhood longitudinal study', 'early childhood longitudinal study'], ['baccalaureate and beyond study and'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['education education longitudinal study'], ['genome sequence genome sars'], ['baltimore longitudinal study of aging blsa'], ['survey of doctorate recipients', 'survey of doctorate survey and science study'], ['baltimore longitudinal study of aging blsa'], ['trends in international mathematics and science study'], ['adni'], ['trends in international mathematics and science study'], [], ['survey of doctorate recipients'], [], ['trends in international mathematics and science study'], ['adni', 'adni'], ['beginning postsecondary students study'], ['baltimore longitudinal study of aging blsa'], ['national education longitudinal study'], [], [], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['baltimore longitudinal study of aging'], ['adni'], ['adni'], ['adni', 'adni'], ['early childhood longitudinal study'], ['adni'], ['national education longitudinal study'], ['adni'], ['baltimore', 'adni'], ['adni'], ['adni', 
'adni'], ['early childhood longitudinal study'], [], ['trends in international mathematics and science study', 'trends in international mathematics and science study'], ['adni'], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['trends in international mathematics and science study'], ['beginning postsecondary students study'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['beginning postsecondary students study and'], ['early childhood longitudinal study'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni'], ['adni'], ['national education longitudinal'], ['adni'], ['trends in international mathematics and science study'], ['adni'], ['trends in international mathematics and science study'], ['trends in international mathematics and science study'], ['adni'], ['national education longitudinal study'], ['adni', 'adni'], ['adni'], ['adni'], [], ['adni'], [], ['adni', 'adni'], ['early childhood longitudinal study', 'early childhood longitudinal study'], ['adni'], ['adni', 'adni'], ['adni'], ['adni', 'adni'], [], ['agricultural resource management survey'], ['adni'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni', 'adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['adni'], ['adni', 'adni'], ['trends in international mathematics and science study'], [], ['baltimore longitudinal study of aging'], ['adni'], ['adni'], ['adni'], ['adni'], ['trends in international mathematics and science study'], ['baltimore longitudinal study of aging blsa'], ['agricultural resource management survey'], ['adni', 'adni'], [], ['adni'], ['genome sequence of sars cov'], ['education education longitudinal study', 'national education longitudinal'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['adni s disease neuroimaging initiative adni', 'adni'], ['adni', 'adni'], [], ['adni'], [], ['adni', 'adni'], ['adni'], ['adni', 'adni s disease neuroimaging initiative adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['adni'], ['adni'], ['survey of doctorate recipients'], ['trends in international mathematics and science study'], ['agricultural resource management survey'], ['adni s disease neuroimaging initiative adni'], ['north american breeding bird survey'], ['adni'], ['genome sequence of sars'], ['baltimore longitudinal study of aging'], ['adni', 'adni'], ['adni'], [], ['baltimore longitudinal study of aging blsa'], ['adni'], [], ['trends in international mathematics and science study'], ['adni s disease neuroimaging initiative adni'], ['early childhood longitudinal study'], ['adni'], ['early childhood longitudinal study'], ['genome sequence of sars cov', 'adni'], ['national education longitudinal study'], ['adni', 'adni'], [], [], ['genome sequence of sars'], ['adni', 'adni'], ['national education international mathematics assessment'], ['covid open research dataset'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['agricultural resource management survey'], ['adni'], ['adni'], ['adni', 'adni'], ['national education study study', 'trends in international mathematics and science study'], ['baltimore'], ['adni'], ['world ocean database'], ['adni', 'adni'], ['trends in international mathematics and science study'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['early childhood longitudinal 
study'], ['adni'], ['adni', 'adni'], ['baltimore longitudinal study of aging blsa'], ['national education longitudinal study'], ['beginning postsecondary students study'], ['adni', 'adni', 'adni'], [], ['adni'], ['north american breeding bird survey'], ['adni'], ['baltimore longitudinal study of aging blsa'], ['census of agriculture', 'census resource management survey'], ['adni'], ['adni'], ['adni'], ['early childhood longitudinal study', 'early childhood longitudinal study'], ['adni'], ['adni'], ['trends in international mathematics and science study'], ['north american breeding bird survey', 'north american breeding bird'], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['covid open research dataset'], ['adni'], ['adni'], ['our open study sars'], ['north american breeding bird survey'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['early childhood study study'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['trends in international mathematics and science study', 'early longitudinal study study'], ['baltimore longitudinal study of aging blsa'], ['trends in international mathematics'], ['adni'], ['adni'], ['education education longitudinal'], ['genome sequence genome sars', 'genome sequence genome sars'], ['adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['genome sequence of sars'], ['adni'], ['adni'], ['adni'], ['agricultural resource management survey'], ['adni'], ['adni'], [], ['beginning postsecondary students study and'], ['adni', 'adni'], ['adni', 'adni'], ['adni', 'adni'], ['adni'], ['genome sequence of sars cov'], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['adni'], ['adni', 'adni'], ['genome sequence genome sars cov'], ['adni'], ['national education longitudinal'], ['adni', 'adni'], ['early childhood longitudinal study'], ['genome world of sars'], ['adni'], ['baltimore longitudinal study of aging'], [], ['baltimore longitudinal study of aging'], ['adni', 'adni'], ['genome sequence genome sars'], ['baltimore longitudinal study of aging'], ['baltimore longitudinal study of aging'], ['adni'], ['early childhood longitudinal study'], ['adni'], ['early childhood longitudinal study'], ['baltimore longitudinal study of aging blsa'], ['adni'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['north american breeding bird survey'], ['adni', 'adni'], ['adni'], ['early childhood longitudinal study', 'early childhood longitudinal study'], ['genome sequence of sars', 'genome sequence research sars'], ['trends in international mathematics and science study'], ['world ocean database'], ['adni'], ['genome sequence of sars cov'], ['adni'], ['north american breeding bird survey', 'north resource breeding bird survey'], ['north american breeding bird survey'], ['trends in international mathematics and science study'], [], ['coastal change analysis program'], ['adni'], ['adni', 'adni'], ['adni', 'adni'], [], ['baltimore longitudinal study neuroimaging aging adni', 'baltimore longitudinal study of aging'], ['baltimore longitudinal study of aging'], ['adni', 'adni'], ['adni'], ['adni'], ['coastal change analysis program'], ['north american breeding bird survey'], ['adni'], ['adni'], ['adni'], ['genome sequence of sars cov'], ['adni', 'adni s disease neuroimaging initiative adni'], ['ibtracs'], ['adni s disease neuroimaging initiative adni'], ['adni'], ['agricultural resource 
management survey'], ['adni', 'baltimore longitudinal study of aging'], ['early childhood longitudinal study'], ['adni', 'adni'], ['adni s disease neuroimaging initiative adni'], [], ['early childhood longitudinal study'], ['adni', 'adni'], ['baltimore'], ['agricultural resource management survey'], ['north american breeding bird survey'], [], ['adni'], ['adni'], ['adni'], ['national education longitudinal'], ['north american breeding bird survey', 'north american breeding bird survey'], [], ['adni'], ['covid open research dataset'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['adni'], ['covid open international dataset'], ['adni', 'adni'], [], ['north american breeding bird survey'], ['adni'], ['trends in international mathematics and science study'], ['baltimore longitudinal study of aging'], ['adni'], ['trends in international mathematics and science study'], [], ['agricultural resource management survey', 'agricultural resource management survey'], ['national education longitudinal study'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['adni'], ['coastal change analysis program'], ['adni'], [], ['adni'], ['adni'], ['adni'], ['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa'], ['trends in international mathematics and science study'], ['covid open research'], [], ['agricultural resource management survey'], ['national education longitudinal study', 'national education students'], ['baccalaureate and beyond study and'], ['trends in international mathematics and science study'], ['north american breeding bird survey'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni'], [], ['adni'], ['adni'], ['adni', 'adni'], ['adni'], ['beginning postsecondary students study and'], ['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa'], ['agricultural resource management survey'], ['adni'], [], ['trends in international mathematics and science study'], [], [], ['covid open research dataset'], [], ['adni'], ['adni'], ['adni'], ['adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['adni'], ['adni'], ['census of agriculture'], ['adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['trends in international mathematics and science study'], ['adni'], ['beginning postsecondary students study'], ['adni'], ['adni', 'adni'], ['adni'], [], ['baltimore longitudinal study of aging', 'baltimore longitudinal study study aging blsa study'], ['adni in international of initiative adni', 'adni'], ['adni', 'adni'], ['coastal change management program', 'agricultural resource management survey survey'], ['adni', 'adni'], ['adni'], ['adni'], ['world ocean database bird'], ['education education longitudinal study'], ['adni'], ['early childhood longitudinal study'], ['survey of doctorate recipients'], ['early childhood longitudinal study'], ['genome'], ['ibtracs model international', 'ibtracs'], ['census of agriculture'], ['baltimore longitudinal study of aging blsa'], [], ['baltimore longitudinal study of aging blsa'], ['trends in international mathematics and science study'], ['adni'], ['beginning postsecondary students study and'], ['adni', 'adni'], ['adni'], ['adni'], ['adni'], ['adni'], ['genome sequence of sars'], [], ['adni'], ['trends in international mathematics and science study'], ['survey of agriculture', 'census resource agriculture', 'trends in international 
mathematics', 'census of agriculture'], ['adni'], ['north american breeding bird survey'], ['adni'], ['covid open research dataset'], ['national education longitudinal study'], ['adni'], ['agricultural resource management survey survey'], ['adni'], ['agricultural resource management survey'], ['adni'], ['adni s disease neuroimaging initiative adni'], ['adni', 'adni'], ['adni'], ['trends in international mathematics and science study'], ['early childhood longitudinal study'], ['adni', 'adni s disease neuroimaging initiative adni'], ['baltimore longitudinal study of aging'], ['baltimore longitudinal study of aging blsa'], ['genome sequence of sars cov'], ['adni'], ['adni'], ['north american breeding bird survey'], ['adni model research dataset'], ['adni'], ['adni'], ['trends in international mathematics and science study'], ['coastal change analysis'], ['national education longitudinal'], ['baltimore longitudinal study of aging blsa'], ['north american breeding bird survey'], ['ibtracs', 'adni', 'adni'], ['trends in international mathematics and science study'], ['adni', 'adni'], ['adni', 'adni s disease neuroimaging initiative adni', 'adni'], ['adni'], ['adni'], ['national of doctorate recipients aging']]\n"
]
],
[
[
"Checking Classifer Accuracy",
"_____no_output_____"
]
],
[
[
"len(chunked_text)",
"_____no_output_____"
],
[
"count = 0\nfor i in predictions:\n if not i:\n count += 1\nprint(count)",
"82\n"
],
[
"testing = {}\nfor i in range(0, len(predictions)):\n if publication_ids[2000+i] not in testing.keys():\n pred = predictions[i]\n print(pred)\n print(df.loc[2000+i][5])\n \n testing[publication_ids[2000+i]] = (pred, [df.loc[2000+i][5]])\n else:\n testing[publication_ids[2000+i]][1].append(df.loc[2000+i][5])",
"['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n[]\nnoaa c cap\n['adni']\nadni\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['coastal change analysis program']\nslosh model\n['census change agriculture', 'agricultural resource management survey']\ncensus of agriculture\n['adni', 'adni']\nadni\n['genome sequence of sars cov', 'covid open study of']\ngenome sequence of sars cov 2\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n[]\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['agricultural resource management survey', 'census of agriculture']\ncensus of agriculture\n['adni s disease neuroimaging initiative adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['national education longitudinal']\nnational education longitudinal study\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['covid open research dataset']\ncovid 19 open research dataset cord 19 \n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['national education longitudinal study', 'national education longitudinal']\neducation longitudinal study\n['coastal change analysis program']\nnoaa c cap\n[]\nworld ocean database\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baccalaureate and beyond study and']\nbaccalaureate and beyond longitudinal study\n['adni', 'adni']\nadni\n['adni']\nadni\n['covid open study mathematics']\neducation longitudinal study\n['baltimore of study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['genome']\ngenome sequence of sars cov 2\n['adni']\nadni\n[]\nnational assessment of education progress\n['census of agriculture', 'census of agriculture']\ncensus of agriculture\n[]\nnational assessment of education progress\n['adni']\nadni\n[]\nrural urban continuum codes\n[]\nnational teacher and principal survey\n['adni s disease neuroimaging initiative adni']\nadni\n[]\nslosh model\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n[]\nnational teacher and principal survey\n['adni']\nalzheimer s 
disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n[]\nworld ocean database\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey bbs \n['adni']\nsurvey of industrial research and development\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nbaltimore longitudinal study of aging\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni s disease neuroimaging initiative adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni s disease neuroimaging initiative adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['agricultural resource management survey']\nagricultural resource management survey\n['trends for international mathematics']\nbeginning postsecondary student\n['adni']\nadni\n['adni']\nadni\n['baltimore']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['national education longitudinal study']\nbeginning postsecondary students longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['genome sequence of sars']\ngenome sequence of sars cov 2\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['north american breeding bird survey', 'north resource breeding bird survey']\nnorth american breeding bird survey bbs \n['adni', 'adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nadni\n['census of agriculture']\ncensus of agriculture\n['adni', 'adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['national education longitudinal study']\nnational education longitudinal study\n['adni', 'adni']\nadni\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal 
study of aging blsa \n['covid open research dataset']\ncovid 19 open research dataset\n[]\ncoastal change analysis program\n['baccalaureate and beyond study and']\nbaccalaureate and beyond\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n[]\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n[]\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science adult competencies']\ntrends in international mathematics and science study\n[]\nadni\n['adni', 'adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['north american breeding bird survey']\nnorth american breeding bird survey\n['north american breeding bird survey', 'north american breeding bird survey']\nnorth american breeding bird survey\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['genome sequence of sars cov']\nsars cov 2 genome sequences\n['national childhood longitudinal']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n[]\nworld ocean database\n['national longitudinal longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['education education longitudinal study']\neducation longitudinal study\n['adni', 'adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['trends and international mathematics and science study']\ntrends in international mathematics and science study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimers disease neuroimaging initiative\n['adni', 'early childhood study of aging adni']\nadni\n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey bbs \n['adni']\nadni\n['adni']\nadni\n[]\nbeginning postsecondary students longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n[]\nibtracs\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['coastal of of']\nslosh model\n['census of agriculture']\ncensus of 
agriculture\n['baccalaureate and beyond study and']\nbaccalaureate and beyond longitudinal study\n[]\ncommon core of data\n['adni']\nadni\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['beginning postsecondary students study and']\nbeginning postsecondary students\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nbaltimore longitudinal study of aging\n[]\ncommon core of data\n['baccalaureate and beyond']\nbaccalaureate and beyond longitudinal study\n['program for the of assessment']\nprogram for the international assessment of adult competencies\n['adni']\nadni\n['survey of doctorate recipients']\nsurvey of earned doctorates\n['trends in international mathematics and science study', 'trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['national education longitudinal study']\neducation longitudinal study\n['trends in international mathematics and science']\ntrends in international mathematics and science study\n['coastal change analysis program']\nnational water level observation network\n['adni', 'adni']\nadni\n[]\nbaltimore longitudinal study of aging blsa \n['national education longitudinal study']\neducation longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni longitudinal study neuroimaging aging adni']\nbaltimore longitudinal study of aging\n['adni s disease neuroimaging initiative adni', 'adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood longitudinal study']\nearly childhood longitudinal study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['early childhood longitudinal study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['survey of doctorate recipients']\nsurvey of earned doctorates\n['beginning postsecondary students']\nbeginning postsecondary students longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['genome']\nsars cov 2 genome sequence\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni']\nadni\n['national education longitudinal study']\nnational education longitudinal study\n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['beginning postsecondary students study and']\nbeginning postsecondary student\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni s disease neuroimaging initiative adni']\nadni\n['ibtracs']\ninternational best track archive for climate stewardship\n['alzheimer in disease neuroimaging initiative adni', 'adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['adni', 'adni']\nalzheimer s 
disease neuroimaging initiative adni \n[]\nibtracs\n['adni']\nadni\n['adni', 'adni']\nadni\n['early childhood longitudinal study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['trends of international mathematics']\ncensus of agriculture\n['adni', 'coastal']\nslosh model\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['national']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['genome world of sars']\ngenome sequences of sars cov 2\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni s disease neuroimaging initiative adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni']\nadni\n[]\nearly childhood longitudinal study\n['survey of doctorate recipients']\nnational science foundation survey of earned doctorates\n['survey of doctorate recipients']\nsurvey of earned doctorates\n['baltimore', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['agricultural resource management survey']\nagricultural resource management survey\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni s disease neuroimaging initiative adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['survey and on survey and science', 'national in international']\nschool survey on crime and safety\n['adni s disease neuroimaging initiative adni']\nadni\n['genome sequence of sars']\ngenome sequences of sars cov 2\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['survey of doctorate recipients']\nsurvey of doctorate recipients\n['adni', 'adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni']\nadni\n['trends for international mathematics assessment science adult competencies']\nprogram for the international assessment of adult 
competencies\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['covid open research dataset']\ncovid 19 open research dataset\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['survey of doctorate recipients']\nsurvey of doctorate recipients\n['adni']\nadni\n['baltimore']\nalzheimer s disease neuroimaging initiative adni \n['survey of doctorate recipients', 'survey of doctorate recipients']\nsurvey of doctorate recipients\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni', 'adni', 'adni of disease of initiative adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n['beginning postsecondary students', 'survey of doctorate recipients', 'baccalaureate childhood students', 'beginning postsecondary students study']\nbeginning postsecondary students\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['early childhood longitudinal study']\neducation longitudinal study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['agricultural resource management survey']\nagricultural resource management survey\n['adni', 'adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['survey of doctorate recipients']\nsurvey of earned doctorates\n['adni', 'adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni longitudinal disease neuroimaging initiative adni']\nadni\n['survey of earned doctorates']\nsurvey of earned doctorates\n['adni']\nadni\n['survey of doctorate recipients']\nnational science foundation survey of earned doctorates\n[]\ntrends in international mathematics and science study\n[]\nnational teacher and principal survey\n['adni', 'adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni s disease neuroimaging initiative adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimers disease neuroimaging initiative\n['census of agriculture']\ncensus of agriculture\n['our']\nour world in data\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n[]\nbaltimore longitudinal study of aging\n['genome sequence of sars cov']\ncovid 19 open research dataset\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni 
\n['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of initiative', 'adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n[]\ntrends in international mathematics and science study\n['national education students']\nsurvey of earned doctorates\n['adni']\nadni\n[]\ntrends in international mathematics and science study\n['covid open research dataset']\ncovid 19 open research dataset\n['agricultural resource management survey']\nagricultural resource management survey\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['census of agriculture']\ncensus of agriculture\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['world ocean database']\nworld ocean database\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['national education longitudinal study']\neducation longitudinal study\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['adni']\nadni\n[]\nslosh model\n[]\nworld ocean database\n[]\nearly childhood longitudinal study\n['adni']\nadni\n[]\ngenome sequence of sars cov 2\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nadni\n['national education students study']\nbaccalaureate and beyond\n[]\nearly childhood longitudinal study\n['north american breeding bird survey']\nnorth american breeding bird survey\n['national education longitudinal study']\nnational education longitudinal study\n['baltimore']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni', 'adni']\nadni\n['adni', 'adni s disease neuroimaging initiative adni']\nadni\n[]\nrural urban continuum codes\n['baltimore longitudinal disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n[]\ntrends in international mathematics and science study\n['adni']\nadni\n['agricultural resource management survey', 'agricultural resource management survey', 'agricultural resource management survey']\nagricultural resource management survey\n[]\nsurvey of doctorate recipients\n[]\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['genome sequence of sars cov adni']\nsars cov 2 genome sequences\n['trends in international mathematics assessment science study competencies', 'program for international mathematics assessment science adult competencies']\nprogram for the international assessment of adult competencies\n['national education longitudinal', 'beginning 
childhood longitudinal']\nhigh school longitudinal study\n['baccalaureate postsecondary beyond study and']\nbaccalaureate and beyond longitudinal study\n['coastal change analysis survey']\ncoastal change analysis program\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nadni\n['baltimore longitudinal study study aging']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['genome open of sars']\nour world in data\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['genome sequence of sars cov']\nsars cov 2 genome sequence\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni s disease neuroimaging initiative adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal disease neuroimaging initiative adni']\nadni\n['national education longitudinal']\nnational education longitudinal study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n[]\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['north american breeding bird survey']\nnorth american breeding bird survey\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['national of doctorate recipients']\nsurvey of doctorate recipients\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni s disease neuroimaging initiative adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni', 'adni']\nadni\n['adni']\nadni\n['genome sequence of sars']\ngenome sequence of sars cov 2\n['adni']\nalzheimer s disease neuroimaging initiative adni \n[]\nslosh model\n[]\ncensus of agriculture\n['adni']\nadni\n['adni of of', 'early childhood study study']\nearly childhood longitudinal study\n['adni']\nadni\n['census of agriculture', 'agricultural resource management survey']\nagricultural resource management survey\n['north american breeding bird survey']\nnorth american breeding bird survey\n[]\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'ibtracs']\nibtracs\n['ibtracs']\ninternational best track archive for climate stewardship\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of 
aging\n['census of agriculture', 'agricultural resource management survey']\nagricultural resource management survey\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study bird']\nbaltimore longitudinal study of aging\n['survey of doctorate recipients', 'survey of doctorate recipients']\nsurvey of earned doctorates\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n[]\ngenome sequence of sars cov 2\n['national education study']\neducation longitudinal study\n['adni']\nalzheimers disease neuroimaging initiative\n[]\nour world in data\n['adni']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['beginning postsecondary students study and']\nbeginning postsecondary students\n['north american breeding bird survey']\nnorth american breeding bird survey\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n[]\nearly childhood longitudinal study\n['our open international sars']\ngenome sequence of sars cov 2\n['adni s disease neuroimaging initiative adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore']\nbaltimore longitudinal study of aging blsa \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['genome cov of sars']\ngenome sequence of sars cov 2\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni', 'adni', 'adni']\nadni\n['beginning postsecondary students study']\nbeginning postsecondary students\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni', 'adni', 'adni', 'adni', 'adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['slosh for international dataset']\nslosh model\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['census of agriculture of']\ncensus of agriculture\n['adni', 'adni']\nadni\n['slosh', 'coastal', 'slosh model international program', 'coastal model analysis program']\nnoaa tidal station\n['ibtracs for track archive', 'ibtracs change analysis archive']\nibtracs\n[]\nbaccalaureate and beyond\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nsars cov 2 genome sequence\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['survey and on survey and science']\nschool survey on crime and safety\n['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nalzheimer s disease neuroimaging initiative adni 
\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['national education longitudinal study']\nnational education longitudinal study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baccalaureate and beyond study and', 'baccalaureate and beyond study and']\nbaccalaureate and beyond longitudinal study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['early childhood longitudinal study', 'early childhood longitudinal study', 'early childhood longitudinal study']\neducation longitudinal study\n['baccalaureate and beyond study and']\nbaccalaureate and beyond\n['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['education education longitudinal study']\nnational education longitudinal study\n['genome sequence genome sars']\ngenome sequences of sars cov 2\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['survey of doctorate recipients', 'survey of doctorate survey and science study']\nsurvey of graduate students and postdoctorates in science and engineering\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n[]\ngenome sequences of sars cov 2\n['survey of doctorate recipients']\nsurvey of earned doctorates\n[]\nnational teacher and principal survey\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nadni\n['beginning postsecondary students study']\nbeginning postsecondary students\n['national education longitudinal study']\nbeginning postsecondary student\n[]\nnorth american breeding bird survey\n[]\nnorth american breeding bird survey\n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['national education longitudinal study']\neducation longitudinal study\n['adni']\nadni\n['baltimore', 'adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood longitudinal study']\nearly childhood longitudinal study\n[]\nadni\n['trends in international mathematics and science study', 'trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease 
neuroimaging initiative adni \n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['beginning postsecondary students study']\nbeginning postsecondary students\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['beginning postsecondary students study and']\nbeginning postsecondary students\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni s disease neuroimaging initiative adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['national education longitudinal']\nnational education longitudinal study\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['national education longitudinal study']\nnational education longitudinal study\n['adni', 'adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n[]\nnational teacher and principal survey\n['adni']\nadni\n[]\ngenome sequence of sars cov 2\n['adni', 'adni']\nadni\n['early childhood longitudinal study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nadni\n[]\nalzheimer s disease neuroimaging initiative adni \n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n['adni s disease neuroimaging initiative adni', 'adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n[]\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['agricultural resource management survey']\nagricultural resource management survey\n['adni', 'adni']\nadni\n[]\nsars cov 2 genome sequence\n['adni']\nadni\n['genome sequence of sars cov']\ncovid 19 image data collection\n['education education longitudinal study', 'national education longitudinal']\neducation longitudinal study\n['adni', 'adni']\nadni\n[]\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n[]\nbaccalaureate and beyond\n['adni']\nadni\n['adni', 'adni s disease neuroimaging initiative adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of 
aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['adni']\nadni\n['survey of doctorate recipients']\nsurvey of doctorate recipients\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['agricultural resource management survey']\nagricultural resource management survey\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['north american breeding bird survey']\nnorth american breeding bird survey bbs \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['genome sequence of sars']\ngenome sequence of 2019 ncov\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n[]\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n[]\nibtracs\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni s disease neuroimaging initiative adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['genome sequence of sars cov', 'adni']\ngenome sequence of sars cov 2\n['national education longitudinal study']\neducation longitudinal study\n['adni', 'adni']\nadni\n[]\nsurvey of earned doctorates\n[]\nagricultural resource management survey\n['genome sequence of sars']\nsars cov 2 genome sequences\n['adni', 'adni']\nadni\n['national education international mathematics assessment']\nnational assessment of education progress\n['covid open research dataset']\ncovid 19 open research dataset\n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n['adni', 'adni']\nadni\n['national education study study', 'trends in international mathematics and science study']\neducation longitudinal study\n['baltimore']\nbaltimore longitudinal study of aging blsa \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['world ocean database']\nworld ocean database\n['adni', 'adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni']\nadni\n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['national education longitudinal study']\nnational education longitudinal study\n['beginning postsecondary students study']\nbeginning postsecondary student\n['adni', 'adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n[]\ncensus of agriculture\n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey bbs \n['adni']\nadni\n['census of agriculture', 'census resource management survey']\ncensus of agriculture\n['adni']\nalzheimer s disease neuroimaging initiative adni 
\n['adni']\nadni\n['early childhood longitudinal study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['north american breeding bird survey', 'north american breeding bird']\nnorth american breeding bird survey\n['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['covid open research dataset']\ncovid 19 open research dataset cord 19 \n['adni']\nadni\n['adni']\nadni\n['our open study sars']\ngenome sequence of sars cov 2\n['north american breeding bird survey']\nnorth american breeding bird survey\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood study study']\nearly childhood longitudinal study\n['adni', 'adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['trends in international mathematics and science study', 'early longitudinal study study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['trends in international mathematics']\nour world in data\n['adni']\nadni\n['education education longitudinal']\nnational education longitudinal study\n['genome sequence genome sars', 'genome sequence genome sars']\ngenome sequence of 2019 ncov\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['genome sequence of sars']\ngenome sequences of sars cov 2\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n[]\ncensus of agriculture\n['beginning postsecondary students study and']\nbeginning postsecondary students longitudinal study\n['adni', 'adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['genome sequence of sars cov']\nsars cov 2 genome sequences\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['genome sequence genome sars cov']\ngenome sequences of sars cov 2\n['adni']\nadni\n['national education longitudinal']\neducation longitudinal study\n['adni', 'adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['genome world of sars']\ngenome sequence of sars cov 2\n['adni']\nadni\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n[]\nnational assessment of education progress\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['genome sequence genome sars']\ngenome sequences of sars cov 2\n['baltimore longitudinal study of 
aging']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['adni']\nadni\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nnational assessment of education progress\n['adni', 'adni']\nadni\n['adni']\nadni\n['early childhood longitudinal study', 'early childhood longitudinal study']\nearly childhood longitudinal study\n['genome sequence of sars', 'genome sequence research sars']\ngenome sequence of sars cov 2\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['world ocean database']\nworld ocean database\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['genome sequence of sars cov']\nsars cov 2 genome sequence\n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n[]\nalzheimer s disease neuroimaging initiative adni \n['coastal change analysis program']\nnational water level observation network\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n['adni', 'adni']\nadni\n[]\nour world in data\n['baltimore longitudinal study neuroimaging aging adni', 'baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['coastal change analysis program']\ncoastal change analysis program\n['north american breeding bird survey']\ncoastal change analysis program\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['genome sequence of sars cov']\nsars cov 2 genome sequence\n['adni', 'adni s disease neuroimaging initiative adni']\nadni\n['adni s disease neuroimaging initiative adni']\nadni\n['adni']\nadni\n['agricultural resource management survey']\nagricultural resources management survey\n['adni', 'baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni', 'adni']\nadni\n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni', 'adni']\nadni\n['baltimore']\nsurvey of industrial research and development\n['agricultural resource management survey']\ncensus of agriculture\n[]\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['national education longitudinal']\neducation longitudinal study\n['north american breeding bird survey', 'north american breeding bird survey']\nnorth american breeding bird survey\n[]\nsars cov 2 genome sequence\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['covid open research dataset']\ncovid 19 open research dataset cord 19 
\n['adni']\nadni\n['covid open international dataset']\nsars cov 2 genome sequence\n['adni', 'adni']\nadni\n[]\nsars cov 2 genome sequence\n['north american breeding bird survey']\nnorth american breeding bird survey\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n[]\nadni\n['agricultural resource management survey', 'agricultural resource management survey']\nagricultural resource management survey\n['national education longitudinal study']\nhigh school longitudinal study\n['baltimore longitudinal study of aging', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['coastal change analysis program']\ncoastal change analysis program\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging blsa', 'baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['covid open research']\ncovid 19 open research dataset cord 19 \n[]\nadni\n['agricultural resource management survey']\nagricultural resource management survey\n['national education longitudinal study', 'national education students']\nnational education longitudinal study\n['baccalaureate and beyond study and']\nbaccalaureate and beyond\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['north american breeding bird survey']\nnorth american breeding bird survey bbs \n['adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['adni', 'adni']\nadni\n[]\nadni\n['adni']\nadni\n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nalzheimers disease neuroimaging initiative\n['beginning postsecondary students study and']\nbeginning postsecondary students\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n[]\ncensus of agriculture\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n[]\nsars cov 2 genome sequence\n[]\nadni\n['covid open research dataset']\ncovid 19 open research dataset cord 19 \n[]\nour world in data\n['adni']\nadni\n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['census of agriculture']\ncensus of agriculture\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni', 'adni']\nadni\n['adni']\nalzheimer s 
disease neuroimaging initiative adni \n[]\nagricultural resource management survey\n['baltimore longitudinal study of aging', 'baltimore longitudinal study study aging blsa study']\nbaltimore longitudinal study of aging blsa \n['adni in international of initiative adni', 'adni']\nadni\n['coastal change management program', 'agricultural resource management survey survey']\nagricultural resource management survey\n['adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['world ocean database bird']\nworld ocean database\n['education education longitudinal study']\nnational education longitudinal study\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['early childhood longitudinal study']\nearly childhood longitudinal study\n['survey of doctorate recipients']\nsurvey of doctorate recipients\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['genome']\nsars cov 2 genome sequences\n['ibtracs model international', 'ibtracs']\nibtracs\n['census of agriculture']\ncensus of agriculture\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging blsa \n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni']\nadni\n['beginning postsecondary students study and']\nbeginning postsecondary students\n['adni', 'adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['adni']\nadni\n['genome sequence of sars']\ngenome sequence of sars cov 2\n[]\nibtracs\n['adni']\nbaltimore longitudinal study of aging\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['survey of agriculture', 'census resource agriculture', 'trends in international mathematics', 'census of agriculture']\ncensus of agriculture\n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey\n['adni']\nadni\n['covid open research dataset']\ncovid 19 open research dataset\n['adni']\nadni\n['agricultural resource management survey survey']\nagricultural resource management survey\n['agricultural resource management survey']\nagricultural resource management survey\n['adni']\nadni\n['adni', 'adni']\nadni\n['trends in international mathematics and science study']\nadni\n['early childhood longitudinal study']\nearly childhood longitudinal study\n['adni', 'adni s disease neuroimaging initiative adni']\nalzheimer s disease neuroimaging initiative adni \n['baltimore longitudinal study of aging']\nbaltimore longitudinal study of aging\n['baltimore longitudinal study of aging blsa']\nbaltimore longitudinal study of aging\n['genome sequence of sars cov']\ngenome sequence of sars cov 2\n['adni']\nadni\n['north american breeding bird survey']\nnorth american breeding bird survey\n['adni model research dataset']\nslosh model\n['adni']\nadni\n['adni']\nadni\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['coastal change analysis']\ncoastal change analysis program\n['national education longitudinal']\ncommon core of data\n['north american breeding bird survey']\nnorth american breeding bird survey\n['ibtracs', 'adni', 'adni']\nslosh model\n['trends in international mathematics and science study']\ntrends in international mathematics and science study\n['adni', 'adni s disease neuroimaging 
initiative adni', 'adni']\nalzheimer s disease neuroimaging initiative adni \n['adni']\nadni\n['adni']\nalzheimer s disease neuroimaging initiative adni \n['national of doctorate recipients aging']\nsurvey of earned doctorates\n"
],
[
"tp = 0\nfp = 0\nfn = 0\nfor i in testing.values():\n prediction = i[0]\n cop = set(prediction.copy())\n true_pred = i[1].copy()\n check = False\n #check exact match first\n for j in prediction:\n if j in true_pred:\n tp += 1\n true_pred.remove(j)\n cop.remove(j)\n #then check rest for jaccard score\n for j in cop:\n found = False\n removal = 0\n for k in true_pred:\n if jaccard(j, k) >= 0.5:\n found = True\n removal = k\n break\n if found:\n tp += 1\n true_pred.remove(removal)\n else:\n fp += 1\n fn += len(true_pred)",
"_____no_output_____"
]
],
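The evaluation cell above checks exact matches first and then applies a `jaccard(j, k) >= 0.5` test; the `jaccard` helper itself is defined earlier in the notebook and is not shown in this excerpt. A minimal token-level version that is consistent with how it is called here would be the following sketch (this assumes token-set Jaccard over whitespace-split mention strings, which may differ from the notebook's actual definition):

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two mention strings (assumed helper)."""
    tokens_a, tokens_b = set(a.split()), set(b.split())
    if not tokens_a and not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)
```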
[
[
"Testing Performance",
"_____no_output_____"
]
],
[
[
"print(\"testing performance\")\nprint(\"micro F score\")\nprint(fp)\nprint(fn)\nprint(tp/(tp + 1/2*(fp+fn)))\nprint(\"accuracy\")\nprint(tp/(tp+fn))",
"testing performance\nmicro F score\n291\n356\n0.6656330749354005\naccuracy\n0.644\n"
]
]
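The printed figures above (fp = 291, fn = 356, micro F1 ≈ 0.6656, tp/(tp+fn) = 0.644) are enough to recover the true-positive count; a quick arithmetic check is shown below. The value tp = 644 is inferred from those outputs, it is not printed by the notebook, and it suggests roughly 1,000 gold mentions were scored in this split.

```python
# Consistency check of the printed evaluation figures (tp = 644 is inferred, not printed).
tp, fp, fn = 644, 291, 356
print(tp / (tp + 0.5 * (fp + fn)))  # 0.6656330749354005  (micro F1)
print(tp / (tp + fn))               # 0.644               (recall over gold mentions)
```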
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c8e9ea78d06ca417c0f36a5d70e8d1a386094d | 154,154 | ipynb | Jupyter Notebook | CSCI6040 Project 1 Phase 4.ipynb | KelleClark/CSCI6040Project1 | ef517aca9c13b489f886fcca938f1b2d813e94dd | [
"MIT",
"Unlicense"
] | null | null | null | CSCI6040 Project 1 Phase 4.ipynb | KelleClark/CSCI6040Project1 | ef517aca9c13b489f886fcca938f1b2d813e94dd | [
"MIT",
"Unlicense"
] | null | null | null | CSCI6040 Project 1 Phase 4.ipynb | KelleClark/CSCI6040Project1 | ef517aca9c13b489f886fcca938f1b2d813e94dd | [
"MIT",
"Unlicense"
] | null | null | null | 94.457108 | 26,788 | 0.734227 | [
[
[
"#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n#@author: Kelle Clark, Andrew Florian, Xinyu Xiong\n#Created on Tue Feb 4 10:05:49 2020\n#CSCI 6040 Project 1 Text Generation\n#PHASE 3: Smoothing the Language Models for the Corpus\n\n#Various folders of .txt files were created in the CSCI6040 Team Project 1 folder\n#to be used for testing our application during develpment\n#/Short Test Data\n# has 3 .txt files each about 4KB\n#/Med test Data \n# has 2 .txt files one of 119KB (Tragedy of Macbeth) and 6.5MB (big)\n#/Grande test Data (the 18-document-gutenburg-copus but with 19? files cleaned using the \n#boilerplate.ipynb -author Andrew Florian and resulting files \n#shared on Canvas in Project 1 discussion forum)\n# has 19 .txt files with a total of 11.8MB",
"_____no_output_____"
],
[
"#we needed the help of a few packages...import all those at once\nimport langid\nimport itertools \nimport mmap\nimport nltk\nimport numpy\nimport os\nimport pandas\nimport random\nimport re\nimport string\nimport sys\nfrom collections import Counter\nfrom math import log10\nfrom matplotlib.pyplot import yscale, xscale, title, plot\nfrom nltk.tokenize import word_tokenize, sent_tokenize\n\nfrom nltk.tokenize import RegexpTokenizer\nfrom nltk.corpus import stopwords\n\n#from keras.models import Sequential\n#from keras.layers import Dense, Dropout, LSTM\n#from keras.utils import np_utils\n#from keras.callbacks import ModelCheckpoint\n",
"_____no_output_____"
],
[
"#**** from phase 1 reading in the tokenized corpus\n\ndef tokensByFiles(folderpath):\n textfiles = [f for f in os.listdir(folderpath) if '.txt' in f]\n tokenfilelist =[]\n \n for f in textfiles:\n rawcorpus = []\n substring = ''\n file = open(folderpath+\"/\"+f,'rt', encoding='utf-8', errors='replace') \n print (f\" Reading from: '{f}' . . .\")\n rawcorpus.append(file.read()\n .replace('. . .','.')\n .replace('!',' .') # substitue space period for ! mark to have a simple token to end a sentence \n .replace('\"',' ')\n .replace('#',' ') \n .replace('$',' ')\n .replace('%',' ')\n .replace('&',' ')\n .replace('\\\\',' ') \n .replace('\\' ',' ') # only remove ' if it has a space before or after meaning it is used as a quote\n .replace(' \\'',' ') # but leave it in if it is inside a word as a contraction\n .replace('\\- ',' ') # only remove - if it has a space before or after meaning it is to be left in the \n .replace(' \\-',' ') # word e.g. C-A-T\n .replace('(',' ')\n .replace('\\n', ' ') \n .replace(')',' ')\n .replace('*',' ')\n .replace('+',' ')\n .replace(',',' ')\n .replace('. ',' ') \n .replace('/',' ') \n .replace(':',' ')\n .replace(';',' ')\n .replace('<',' ')\n .replace('=',' ')\n .replace('>',' ')\n .replace('?',' .') # substitue space period for ? mark to have a simple token to end a sentence\n .replace('@',' ')\n .replace('[',' ')\n .replace('\\\\',' ')\n .replace(']',' ')\n .replace('^',' ')\n .replace('_',' ') # remove all unwanted punctuation\n .replace('`',' ')\n .replace('{',' ')\n .replace('|',' ')\n .replace('}',' ')\n .replace('~',' ')\n .replace('0',' ') # remove all digits\n .replace('1',' ')\n .replace('2',' ')\n .replace('3',' ')\n .replace('4',' ')\n .replace('5',' ') \n .replace('6',' ')\n .replace('7',' ')\n .replace('8',' ')\n .replace('9',' ')) \n file.close()\n \n substring = substring + rawcorpus[0]\n #print(f\"the language of file \"+f+\" is {nltk.language(substring)}\")\n print(f\"the estimated language of the file {f} is {langid.classify(substring)}\")\n \n #tokens=substring.split()\n tokens = word_tokenize(substring)\n tokens = [w.lower() for w in tokens]\n tokenfilelist.append(tokens)\n \n return tokenfilelist\n\n\n#we have the different files tokenized, in the variable tokenfilelist\n#method below creates one corpus from the string of tokens in each file \ndef createOneCorpus(inlist):\n temp = \" \"\n for i in range(len(inlist)):\n for w in inlist[i]:\n temp = temp + w + \" \"\n return temp\n\ndef printcorpus(instring):\n if len(instring) > 500: \n print(f\"The first & last 50 tokens of this corpus are:\\n {instring[:50]} \\t ... {instring[-50:]}\\n\")\n else:\n print(f\"The tokens in the corpus are: \\n {instring} \\n\")\n\n#ngrams returns a dictionary\n# enumerate ngrams code copied from Eisentein and CSCI6040 ipynb\n# returns the ngram from instring and n\ndef ngrams(instring, n):\n outset = {}\n for i in range(len(instring) - n + 1):\n g = ' '.join(instring[i:i+n])\n outset.setdefault(g, 0)\n outset[g] += 1\n return outset \n",
"_____no_output_____"
],
[
"#**** from phase 1 reading in the .txt files and creating the tokenized corpus\npathname = 'Test Data/short test data'\n#pathname = 'your choice of path here'\n\n#read in the corups file by file\ntokenfilelist = tokensByFiles(pathname)\n#print(tokenfilelist)\n\ntokencorpus = createOneCorpus(tokenfilelist)\n#printcorpus(tokencorpus)\ntokens = tokencorpus.split()",
" Reading from: 'Testset1.txt' . . .\nthe estimated language of the file Testset1.txt is ('en', -112.47618627548218)\n Reading from: 'Testset2.txt' . . .\nthe estimated language of the file Testset2.txt is ('en', -757.8414204120636)\n Reading from: 'Testset3.txt' . . .\nthe estimated language of the file Testset3.txt is ('en', -295.17291164398193)\n"
],
[
"#**** from phase 2 creating the four different language models using ngrams:\n#unigram prob. model using prob(x) = (frequency of x in corpus)/(total in corpus)\ndef createUnigramModel(instring):\n n = 1\n outset = word_tokenize(instring)\n\n totalpossible = len(outset)\n sumofprob = 0\n \n anoutcome = ngrams(outset,n)\n probmodel = anoutcome\n \n for keyword in anoutcome:\n probmodel[keyword] = (anoutcome[keyword]) / totalpossible\n sumofprob = sumofprob + probmodel[keyword]\n \n print(f\"The sum of all the probabiities of unigrams needs to be 1 and it is {sumofprob}\\n\")\n return probmodel\n \n#create the unigram model \nunigrammodel = createUnigramModel(tokencorpus)\n\n\npandas.set_option(\"display.max_rows\", 10)\nunidataframe = pandas.DataFrame.from_dict(unigrammodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Unigram Prob. Model : ', len(unidataframe.index))\nprint(unidataframe)\n\n#Attempt to try and plot the unigram language model using first a Counter object\nCOUNT = Counter(unigrammodel)\ngreatestprob = 0\nbigword = ''\nfor w in COUNT.keys():\n if COUNT[w] >= greatestprob:\n bigword = w\n greatestprob = COUNT[w]\n \nprint(f\"the unigram of greatest freq is: {bigword} \\n\")\nM = COUNT[bigword]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent word and 1/n line.')\n##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS...\n##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR\n##UNIGRAMS, WE COULD USE PROB. M/i for the ith rankend term and M is the frequency of the\n##MOST COMMON UNIGRAM\nplot([c for (w,c) in COUNT.most_common()])\nplot([M/i for i in range(1, len(COUNT)+1)]);\n\n#method to create the bigram model\ndef createBigramModel(instring):\n n = 2\n outset = word_tokenize(instring)\n totalpossible = len(outset)\n \n anoutcome = ngrams(outset,n)\n previousoutcome = ngrams(outset,n-1)\n sumofprob = 0\n \n probmodel = anoutcome\n \n for keyword in anoutcome:\n listword = keyword.split()\n prob1 = (previousoutcome[listword[0]]) / totalpossible\n probmodel[keyword] = prob1 * ((probmodel[keyword]) / (previousoutcome[listword[0]]))\n sumofprob = sumofprob + probmodel[keyword]\n \n print(f\"The sum of all the probabiities for bigrams needs to be 1 and it is {sumofprob}\")\n return probmodel \n\n\n#create the bigram model\nbigrammodel = createBigramModel(tokencorpus)\n\n\npandas.set_option(\"display.max_rows\", 10)\nbidataframe = pandas.DataFrame.from_dict(bigrammodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Bigram Prob. Model : ', len(bidataframe.index))\nprint(bidataframe)\n\n#Attempt to try and plot the bigram language model using first a Counter object\nCOUNT2 = Counter(bigrammodel)\ngreatestprob2 = 0\nbigword2 = ''\nfor w in COUNT2.keys():\n if COUNT2[w] >= greatestprob2:\n bigword2 = w\n greatestprob2 = COUNT[w]\n \nprint(f\"the bigram of greatest freq is: {bigword2} \\n\")\nM2 = COUNT2[bigword2]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset and 1/n line.')\n##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS...\n##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR\n##BIGRAMS, WE COULD USE PROB. 
M/i for the ith rankend term and M is the frequency of the\n##MOST COMMON BIGRAM\nplot([c for (w,c) in COUNT2.most_common()])\nplot([(M2)/i for i in range(1, len(COUNT2)+1)]);\n\n\n#create the trigram model\ndef createTrigramModel(instring):\n n = 3\n outset = word_tokenize(instring)\n totalpossible = len(outset)\n \n anoutcome = ngrams(outset,3)\n probmodel = anoutcome\n sumofprob = 0\n \n previous1outcome = ngrams(outset,n-2)\n previous2outcome = ngrams(outset,n-1)\n \n for keyword in anoutcome: \n listword = keyword.split()\n wordofinterest = listword[0]\n prob1 = previous1outcome[wordofinterest]/ totalpossible\n \n wordofinterest = listword[0] + \" \" + listword[1]\n prob2 = previous2outcome[wordofinterest]/previous1outcome[listword[0]] \n \n wordofinterest = keyword\n probmodel[keyword] = prob1 * prob2 * anoutcome[wordofinterest]/ previous2outcome[listword[0]+ \" \" + listword[1]]\n sumofprob = sumofprob + probmodel[keyword]\n \n print(f\"The sum of all the probabiities for trigrams needs to be 1 and it is {sumofprob}\")\n return probmodel \n\n#create the trigram model\ntrigrammodel = createTrigramModel(tokencorpus)\n\n\npandas.set_option(\"display.max_rows\", 10)\ntridataframe = pandas.DataFrame.from_dict(trigrammodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Trigram Prob. Model : ', len(tridataframe.index))\nprint(tridataframe)\n\n#Attempt to plot the trigram language model using first a Counter object\nCOUNT3 = Counter(trigrammodel)\ngreatestprob3 = 0\nbigword3 = ''\nfor w in COUNT3.keys():\n if COUNT3[w] >= greatestprob3:\n bigword3 = w\n greatestprob3 = COUNT3[w]\n \nprint(f\"the trigram of greatest freq is: {bigword3} \\n\")\nM3 = COUNT3[bigword3]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset and 1/n line.')\n##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS...\n##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR\n##TRIGRAMS, WE COULD USE PROB. 
M3/i for the ith rankend term and M3 is the frequency of the\n##MOST COMMON TRIGRAM\nplot([c for (w,c) in COUNT3.most_common()])\nplot([(M3)/i for i in range(1, len(COUNT3)+1)]);\n\n#create the quadgram model\ndef createQuadgramModel(instring):\n n = 4\n outset = word_tokenize(instring)\n totalpossible = len(outset)\n \n anoutcome = ngrams(outset,n)\n probmodel = anoutcome \n sumofprob = 0\n \n previous1outcome = ngrams(outset,n-3)\n previous2outcome = ngrams(outset,n-2)\n previous3outcome = ngrams(outset,n-1)\n \n for keyword in anoutcome: \n listword = keyword.split()\n wordofinterest = listword[0]\n prob1 = previous1outcome[wordofinterest]/ totalpossible\n \n wordofinterest = listword[0] + \" \" + listword[1]\n prob2 = previous2outcome[wordofinterest]/previous1outcome[listword[0]] \n \n wordofinterest = listword[0]+ \" \" + listword[1] + \" \" + listword[2]\n prob3 = previous3outcome[wordofinterest]/previous2outcome[listword[0] + \" \" + listword[1]]\n \n wordofinterest = keyword\n probmodel[keyword] = prob1 * prob2 * prob3 * anoutcome[wordofinterest]/ previous3outcome[listword[0]+ \" \" + listword[1] + \" \"+ listword[2]]\n sumofprob = sumofprob + probmodel[keyword]\n \n print(f\"The sum of all the probabiities of quadgrams needs to be 1 and it is {sumofprob}\")\n return probmodel \n\n#create the quadgram model\nquadgrammodel = createQuadgramModel(tokencorpus)\n\n\npandas.set_option(\"display.max_rows\", 10)\nquaddataframe = pandas.DataFrame.from_dict(quadgrammodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Quadgram Prob. Model : ', len(quaddataframe.index))\nprint(quaddataframe)\n\n#Attempt to plot the trigram language model using first a Counter object\nCOUNT4 = Counter(quadgrammodel)\ngreatestprob4 = 0\nbigword4 = ''\nfor w in COUNT4.keys():\n if COUNT4[w] >= greatestprob4:\n bigword4 = w\n greatestprob4 = COUNT4[w]\n \nprint(f\"the quadgram of greatest freq is: {bigword4} \\n\")\nM4 = COUNT4[bigword4]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset and 1/n line.')\n##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS...\n##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR\n##QUADGRAMS, WE COULD USE PROB. M4/i for the ith rankend term and M3 is the frequency of the\n##MOST COMMON TRIGRAM\nplot([c for (w,c) in COUNT4.most_common()])\nplot([(M4)/i for i in range(1, len(COUNT4)+1)]);",
"The sum of all the probabiities of unigrams needs to be 1 and it is 1.0000000000000009\n\nNumber of rows in Unigram Prob. Model : 63\n prob.\nthe 0.052632\ncat 0.021053\nnamed 0.010526\nbob 0.021053\nis 0.052632\n... ...\ncats 0.010526\nshould 0.010526\nnot 0.010526\nexist 0.010526\ndogs 0.010526\n\n[63 rows x 1 columns]\nthe unigram of greatest freq is: do \n\nThe sum of all the probabiities for bigrams needs to be 1 and it is 0.9894736842105274\nNumber of rows in Bigram Prob. Model : 87\n prob.\nthe cat 0.010526\ncat named 0.010526\nnamed bob 0.010526\nbob is 0.021053\nis damn 0.010526\n... ...\nnot exist 0.010526\nexist on 0.010526\nearth dogs 0.010526\ndogs are 0.010526\nare the 0.010526\n\n[87 rows x 1 columns]\nthe bigram of greatest freq is: are the \n\nThe sum of all the probabiities for trigrams needs to be 1 and it is 0.9789473684210537\nNumber of rows in Trigram Prob. Model : 91\n prob.\nthe cat named 0.010526\ncat named bob 0.010526\nnamed bob is 0.010526\nbob is damn 0.010526\nis damn good 0.010526\n... ...\nexist on earth 0.010526\non earth dogs 0.010526\nearth dogs are 0.010526\ndogs are the 0.010526\nare the best 0.010526\n\n[91 rows x 1 columns]\nthe trigram of greatest freq is: what do we \n\nThe sum of all the probabiities of quadgrams needs to be 1 and it is 0.96842105263158\nNumber of rows in Quadgram Prob. Model : 91\n prob.\nthe cat named bob 0.010526\ncat named bob is 0.010526\nnamed bob is damn 0.010526\nbob is damn good 0.010526\nis damn good he 0.010526\n... ...\nnot exist on earth 0.010526\nexist on earth dogs 0.010526\non earth dogs are 0.010526\nearth dogs are the 0.010526\ndogs are the best 0.010526\n\n[91 rows x 1 columns]\nthe quadgram of greatest freq is: what do we do \n\n"
],
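In the cell above, each n-gram is assigned its joint, chain-rule probability (which reduces to the n-gram count divided by the token total), and that is why each model's values sum to roughly one over all n-gram tokens, as the printed sums confirm. For contrast, a conditional bigram estimate P(w2 | w1), the quantity a sampler would normally draw from, can be written as this minimal sketch; the function and variable names are illustrative and not taken from the notebook:

```python
from collections import Counter

def conditional_bigram_probs(tokens):
    """Maximum-likelihood estimate of P(w2 | w1) from a list of tokens."""
    unigram_counts = Counter(tokens)
    bigram_counts = Counter(zip(tokens, tokens[1:]))
    # P(w2 | w1) = count(w1 w2) / count(w1)
    return {(w1, w2): count / unigram_counts[w1]
            for (w1, w2), count in bigram_counts.items()}
```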
[
"####****KEPT IN PHASE 4 TO PROVIDE COMPARISON IN EVALUATION TEXT GENERATION IN PHASE 5..\n####****from phase 3 were we create new models of the language using the linear smoothing and weightings lambda\n####****the linear smoothing quadgram model has minor error in indexing and should be updated.\n#smoothing the ngramModel using a linear function of the kgrams for k = 1 to n\ndef ngramModel_LinearSmooth(inlist, n):\n #generate ngrams\n total = len(inlist)\n anoutcome = []\n for i in range(1,n+1):\n anoutcome.append(ngrams(inlist, i))\n #print(\"outcome: \")\n #print(anoutcome[i-1])\n \n #generate lamd coefficients for terms in model\n k = 1\n lamd = []\n last_lamd = 0\n for i in range(1,n):\n lamd.append(random.uniform(0,k))\n k = k-lamd[i -1] \n lamd.append(k)\n print(\"lamd: \", lamd)\n #generate smooth model\n smooth_model = {}\n for keyword in anoutcome[n-1]:\n grams = keyword.split(' ')\n #print(\"grams:\")\n #print(grams)\n smooth_model.setdefault(keyword, lamd[0]*anoutcome[0][grams[0]]/total)\n for i in range(1,len(grams) - 2):\n sub_string = ' '.join(grams[0:i])\n sub_sub_string = ' '.join(input[0:i -1])\n # print(sub_string)\n smooth_model[keyword] = smooth_model[keyword] + lamd[i] * (anoutcome[i][sub_string]/anoutcome[i-1][keyword])\n #print(keyword + \":\")\n #print(smooth_model[keyword])\n #print(\"smooth_model:\")\n #print(smooth_model)\n return smooth_model\n\nlinearsmoothunimodel = ngramModel_LinearSmooth(tokens, 1)\n\npandas.set_option(\"display.max_rows\", 10)\nlinearsmoothunidataframe = pandas.DataFrame.from_dict(linearsmoothunimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Linear Smoothed Unigram Prob. Model : ', len(linearsmoothunidataframe.index))\nprint(linearsmoothunidataframe)\n\n#Attempt to plot the unigram language model using first a Counter object\nCOUNTLSMOOTH1 = Counter(linearsmoothunimodel)\ngreatestlinearsmoothprob1 = 0\nbiglinearsmoothword1 = ''\nfor w in COUNTLSMOOTH1.keys():\n if COUNTLSMOOTH1[w] >= greatestlinearsmoothprob1:\n biglinearsmoothword1 = w\n greatestlinearsmoothprob1 = COUNTLSMOOTH1[w]\n \nprint(f\"the unigram of greatest freq in the smoothed unigram model is: {biglinearsmoothword1} \\n\")\nMLS1 = COUNTLSMOOTH1[biglinearsmoothword1]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 1-itemset in linear smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLSMOOTH1.most_common()])\nplot([(MLS1)/i for i in range(1, len(COUNTLSMOOTH1)+1)]);\n\nlinearsmoothbimodel = ngramModel_LinearSmooth(tokens, 2)\n\npandas.set_option(\"display.max_rows\", 10)\nlinearsmoothbidataframe = pandas.DataFrame.from_dict(linearsmoothbimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Linear Smoothed Bigram Prob. 
Model : ', len(linearsmoothbidataframe.index))\nprint(linearsmoothbidataframe)\n\n#Attempt to plot the bigram language model using first a Counter object\nCOUNTLSMOOTH2 = Counter(linearsmoothbimodel)\ngreatestlinearsmoothprob2 = 0\nbiglinearsmoothword2 = ''\nfor w in COUNTLSMOOTH2.keys():\n if COUNTLSMOOTH2[w] >= greatestlinearsmoothprob2:\n biglinearsmoothword2 = w\n greatestlinearsmoothprob2 = COUNTLSMOOTH2[w]\n \nprint(f\"the bigram of greatest freq in the linear smoothed bigram model is: {biglinearsmoothword2} \\n\")\nMLS2 = COUNTLSMOOTH2[biglinearsmoothword2]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset in linear smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLSMOOTH2.most_common()])\nplot([(MLS2)/i for i in range(1, len(COUNTLSMOOTH1)+1)]);\n\nlinearsmoothtrimodel = ngramModel_LinearSmooth(tokens, 3)\n\npandas.set_option(\"display.max_rows\", 10)\nlinearsmoothtridataframe = pandas.DataFrame.from_dict(linearsmoothtrimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Linear Smoothed Trigram Prob. Model : ', len(linearsmoothtridataframe.index))\nprint(linearsmoothtridataframe)\n\n#Attempt to plot the trigram language model using first a Counter object\nCOUNTLSMOOTH3 = Counter(linearsmoothtrimodel)\ngreatestlinearsmoothprob3 = 0\nbiglinearsmoothword3 = ''\nfor w in COUNTLSMOOTH3.keys():\n if COUNTLSMOOTH3[w] >= greatestlinearsmoothprob3:\n biglinearsmoothword3 = w\n greatestlinearsmoothprob3 = COUNTLSMOOTH3[w]\n \nprint(f\"the trigram of greatest freq in the smoothed trigram model is: {biglinearsmoothword3} \\n\")\nMLS3 = COUNTLSMOOTH3[biglinearsmoothword3]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset in linear smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLSMOOTH3.most_common()])\nplot([(MLS3)/i for i in range(1, len(COUNTLSMOOTH3)+1)]);\n\n#linearsmoothquadmodel = ngramModel_LinearSmooth(tokens, 4)\n\n#pandas.set_option(\"display.max_rows\", 10)\n#linearsmoothquaddf = pandas.DataFrame.from_dict(linearsmoothquadmodel, orient = 'index', columns = ['prob.'])\n#print('Number of rows in Linear Smoothed Quadgram Prob. Model : ', len(linearsmoothquaddf.index))\n#print(linearsmoothquaddf)\n\n##Attempt to plot the quadgram language model using first a Counter object\n#COUNTLSMOOTH4 = Counter(linearsmoothquadmodel)\n#greatestlinearsmoothprob4 = 0\n#biglinearsmoothword4 = ''\n#for w in COUNTLSMOOTH4.keys():\n# if COUNTLSMOOTH4[w] >= greatestlinearsmoothprob4:\n# biglinearsmoothword4 = w\n# greatestlinearsmoothprob4 = COUNTLSMOOTH4[w]\n \n#print(f\"the quadgram of greatest freq in the smoothed quadgram model is: {biglinearsmoothword4} \\n\")\n#MLS4 = COUNTLSMOOTH4[biglinearsmoothword4]\n#yscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset in linear smoothed model and 1/n line.')\n\n#plot([c for (w,c) in COUNTLSMOOTH4.most_common()])\n#plot([(MLS4)/i for i in range(1, len(COUNTLSMOOTH4)+1)]);",
"lamd: [1]\nNumber of rows in Linear Smoothed Unigram Prob. Model : 63\n prob.\nthe 0.052632\ncat 0.021053\nnamed 0.010526\nbob 0.021053\nis 0.052632\n... ...\ncats 0.010526\nshould 0.010526\nnot 0.010526\nexist 0.010526\ndogs 0.010526\n\n[63 rows x 1 columns]\nthe unigram of greatest freq in the smoothed unigram model is: do \n\nlamd: [0.5994833201512387, 0.40051667984876127]\nNumber of rows in Linear Smoothed Bigram Prob. Model : 87\n prob.\nthe cat 0.031552\ncat named 0.012621\nnamed bob 0.006310\nbob is 0.012621\nis damn 0.031552\n... ...\nnot exist 0.006310\nexist on 0.006310\nearth dogs 0.012621\ndogs are 0.006310\nare the 0.012621\n\n[87 rows x 1 columns]\nthe bigram of greatest freq in the linear smoothed bigram model is: is better \n\nlamd: [0.989084279141172, 0.006890054191926328, 0.004025666666901643]\nNumber of rows in Linear Smoothed Trigram Prob. Model : 91\n prob.\nthe cat named 0.052057\ncat named bob 0.020823\nnamed bob is 0.010411\nbob is damn 0.020823\nis damn good 0.052057\n... ...\nexist on earth 0.010411\non earth dogs 0.020823\nearth dogs are 0.020823\ndogs are the 0.010411\nare the best 0.020823\n\n[91 rows x 1 columns]\nthe trigram of greatest freq in the smoothed trigram model is: is better cats \n\n"
],
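The interpolation cell above draws random λ weights and, as its own comment notes, the quadgram variant still has an indexing error. For reference, a linearly interpolated trigram estimate with fixed weights can be sketched as below; the λ values are illustrative placeholders (they would normally be tuned on held-out data rather than chosen at random), and the count dictionaries are assumed to be keyed by space-joined n-grams as produced by the ngrams() function earlier in this notebook:

```python
def interpolated_trigram_prob(w1, w2, w3, uni, bi, tri, total, lambdas=(0.1, 0.3, 0.6)):
    """P(w3 | w1 w2) ~ l1*P(w3) + l2*P(w3 | w2) + l3*P(w3 | w1 w2).

    uni, bi, tri map space-joined n-grams to counts; total is the corpus token count."""
    l1, l2, l3 = lambdas
    p1 = uni.get(w3, 0) / total
    p2 = bi.get(f"{w2} {w3}", 0) / uni[w2] if uni.get(w2) else 0.0
    p3 = tri.get(f"{w1} {w2} {w3}", 0) / bi[f"{w1} {w2}"] if bi.get(f"{w1} {w2}") else 0.0
    return l1 * p1 + l2 * p2 + l3 * p3
```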
[
"####****from phase 3, the next cell below uses in the smoothing of the language models with Laplace...\n#In case we want to take into consideration of the file size when smoothing\n#the language models... we created a Counter object for each file to seperate\n#the unigrams, bigrams, trigrams and quadgrams in each file and their fruency in the file...\n#the createListDoc_Foo_Counters below take in a list of strings, one fore each incoming file, which we\n#created when we read in the files ....the smoothing in the Laplace smoothing below do not weight\n#the files by size but do use these counters to tally up the total freqeuencies of ngrams and token count\ndef createListDocUniCounter(inlist):\n docfreqlist = []\n for i in range(len(inlist)):\n counter = Counter(newngram(inlist[i],1))\n docfreqlist.append(counter)\n return docfreqlist\n\ndfforuniperfile = createListDocUniCounter(tokenfilelist)\nfirstunifile = dfforuniperfile[0]\n#print(dfforuniperfile)\n#print(firstunifile)\n\ndef createListDocBiCounter(inlist):\n df = []\n for i in range(len(inlist)):\n #words = re.findall(\"\\w+\",inlist[i])\n counter = Counter(newngram(inlist[i],2))\n df.append(counter)\n return df\n\ndfforbiperfile = createListDocBiCounter(tokenfilelist)\nfirstbifile = dfforbiperfile[0]\n#print(firstbifile)\n#print(dfforbiperfile)\n\ndef createListDocTriCounter(inlist):\n df = []\n for i in range(len(inlist)):\n #words = re.findall(\"\\w+\",inlist[i])\n counter = Counter(newngram(inlist[i],3))\n df.append(counter)\n return df\n\ndffortriperfile = createListDocTriCounter(tokenfilelist)\nfirsttrifile = dffortriperfile[0]\n#print(firsttrifile)\n#print(dffortriperfile)\n\ndef createListDocQuadCounter(inlist):\n df = []\n for i in range(len(inlist)):\n #words = re.findall(\"\\w+\",inlist[i])\n counter = Counter(newngram(inlist[i],4))\n df.append(counter)\n return df\n\ndfforquadperfile = createListDocQuadCounter(tokenfilelist)\nfirstquadfile = dfforquadperfile[0]\n#print(firstquadfile)\n#print(dfforquadperfile)",
"_____no_output_____"
],
[
"###****From Phase 3 of the project the Laplace smoothed unigram, bigram, trigram and quadgram models\n###****using the chosen training data test folder....relies on computation of the above module for the dataframes\n###****per file\n#Laplace smoothed unigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus)\ndef createLeplaceSmoothedUnigramModel(outset, dfperfilelist):\n n = 1 \n anoutcome = ngrams(outset,n)\n sumoflaplaceprob = 0\n \n laplaceprobmodel = anoutcome\n \n for w in laplaceprobmodel:\n laplaceprobmodel[w] = 0\n \n filecount = 0\n for temp in anoutcome:\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n filecount = filecount + count[temp] + 1\n \n for keyword in anoutcome:\n #print(keyword)\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) \n sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword]\n \n \n #print(f\"The laplaceprobmodel is \\n {laplaceprobmodel}\")\n print(f\"The sum of all the unigram probabiities in the laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}\")\n return laplaceprobmodel \n\nlaplacesmoothunimodel = createLeplaceSmoothedUnigramModel(tokens, dfforuniperfile)\n\npandas.set_option(\"display.max_rows\", 10)\nlaplacesmoothunidf = pandas.DataFrame.from_dict(laplacesmoothunimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Laplace Smoothed Unigram Prob. Model : ', len(laplacesmoothunidf.index))\nprint(laplacesmoothunidf)\n\n#Attempt to plot the unigram language model using first a Counter object\nCOUNTLapSMOOTH1 = Counter(laplacesmoothunimodel)\ngreatestlaplacesmoothprob1 = 0\nbiglaplacesmoothword1 = ''\nfor w in COUNTLapSMOOTH1.keys():\n if COUNTLapSMOOTH1[w] >= greatestlaplacesmoothprob1:\n biglaplacesmoothword1 = w\n greatestlaplacesmoothprob1 = COUNTLapSMOOTH1[w]\n \nprint(f\"the unigram of greatest freq in the Laplace smoothed unigram model is: {biglaplacesmoothword1} \\n\")\nMLapS1 = COUNTLapSMOOTH1[biglaplacesmoothword1]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 1-itemset in laplace smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLapSMOOTH1.most_common()])\nplot([(MLapS1)/i for i in range(1, len(COUNTLapSMOOTH1)+1)]);\n\n#Laplace smoothed bigram prob. 
model using prob(x) = (1 + frequency of x in corpus)/(total in corpus)\ndef createLeplaceSmoothedBigramModel(outset, dfperfilelist):\n n = 2 \n anoutcome = ngrams(outset,n)\n sumoflaplaceprob = 0\n \n laplaceprobmodel = anoutcome\n \n for w in laplaceprobmodel:\n laplaceprobmodel[w] = 0\n \n filecount = 0\n for temp in anoutcome:\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n filecount = filecount + count[temp] + 1\n \n for keyword in anoutcome:\n #print(keyword)\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n #print(keyword, count[keyword], filecount)\n laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) \n #print(laplaceprobmodel[keyword], keyword)\n sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword]\n #print(sumoflaplaceprob)\n \n #print(f\"The laplaeprobmodel is \\n {laplaceprobmodel}\")\n #print(f\"The sum of all the probabiities needs to be 1 and it is {sumoflaplaceprob}\")\n return laplaceprobmodel \n\nlaplacesmoothbimodel = createLeplaceSmoothedBigramModel(tokens, dfforbiperfile)\n\npandas.set_option(\"display.max_rows\", 10)\nlaplacesmoothbidf = pandas.DataFrame.from_dict(laplacesmoothbimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Laplace Smoothed Bigram Prob. Model : ', len(laplacesmoothbidf.index))\nprint(laplacesmoothbidf)\n\n#Attempt to plot the bigram language model using first a Counter object\nCOUNTLapSMOOTH2 = Counter(laplacesmoothbimodel)\ngreatestlaplacesmoothprob2 = 0\nbiglaplacesmoothword2 = ''\nfor w in COUNTLapSMOOTH2.keys():\n if COUNTLapSMOOTH2[w] >= greatestlaplacesmoothprob2:\n biglaplacesmoothword2 = w\n greatestlaplacesmoothprob2 = COUNTLapSMOOTH2[w]\n \nprint(f\"the bigram of greatest freq in the Laplace smoothed bigram model is: {biglaplacesmoothword2} \\n\")\nMLapS2 = COUNTLapSMOOTH2[biglaplacesmoothword2]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset in laplace smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLapSMOOTH2.most_common()])\nplot([(MLapS2)/i for i in range(1, len(COUNTLapSMOOTH2)+1)]);\n\n#Laplace smoothed trigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus)\ndef createLeplaceSmoothedTrigramModel(outset, dfperfilelist):\n n = 3 \n anoutcome = ngrams(outset,3)\n sumoflaplaceprob = 0\n \n laplaceprobmodel = anoutcome\n \n for w in laplaceprobmodel:\n laplaceprobmodel[w] = 0\n \n filecount = 0\n for temp in anoutcome:\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n filecount = filecount + count[temp] + 1\n \n for keyword in anoutcome:\n #print(keyword)\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n #print(keyword, count[keyword], filecount)\n laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) \n #print(laplaceprobmodel[keyword], keyword)\n sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword]\n #print(sumoflaplaceprob)\n \n #print(f\"The laplaeprobmodel is \\n {laplaceprobmodel}\")\n #print(f\"The sum of all the trigram probabiities in the Laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}\")\n return laplaceprobmodel \n\nlaplacesmoothtrimodel = createLeplaceSmoothedTrigramModel(tokens, dffortriperfile)\n\npandas.set_option(\"display.max_rows\", 10)\nlaplacesmoothtridf = pandas.DataFrame.from_dict(laplacesmoothtrimodel, orient = 'index', columns = ['prob.'])\nprint('Number of rows in Laplace Smoothed Trigram Prob. 
Model : ', len(laplacesmoothtridf.index))\nprint(laplacesmoothtridf)\n\n#Attempt to plot the trigram language model using first a Counter object\nCOUNTLapSMOOTH3 = Counter(laplacesmoothtrimodel)\ngreatestlaplacesmoothprob3 = 0\nbiglaplacesmoothword3 = ''\nfor w in COUNTLapSMOOTH3.keys():\n if COUNTLapSMOOTH3[w] >= greatestlaplacesmoothprob3:\n biglaplacesmoothword3 = w\n greatestlaplacesmoothprob3 = COUNTLapSMOOTH3[w]\n \nprint(f\"the trigram of greatest freq in the Laplace smoothed trigram model is: {biglaplacesmoothword3} \\n\")\nMLapS3 = COUNTLapSMOOTH3[biglaplacesmoothword3]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset in laplace smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLapSMOOTH3.most_common()])\nplot([(MLapS3)/i for i in range(1, len(COUNTLapSMOOTH3)+1)]);\n\n#Laplace smoothed trigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus)\ndef createLeplaceSmoothedQuadgramModel(outset, dfperfilelist):\n n = 4 \n anoutcome = ngrams(outset,4)\n sumoflaplaceprob = 0\n \n laplaceprobmodel = anoutcome\n \n for w in laplaceprobmodel:\n laplaceprobmodel[w] = 0\n \n filecount = 0\n for temp in anoutcome:\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n filecount = filecount + count[temp] + 1\n \n for keyword in anoutcome:\n #print(keyword)\n for i in range(len(dfperfilelist)):\n count = dfperfilelist[i]\n #print(keyword, count[keyword], filecount)\n laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) \n #print(laplaceprobmodel[keyword], keyword)\n sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword]\n #print(sumoflaplaceprob)\n \n #print(f\"The laplaeprobmodel is \\n {laplaceprobmodel}\")\n #print(f\"The sum of all the quadgram probabiities in the Laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}\")\n return laplaceprobmodel \n\nlaplacesmoothquadmodel = createLeplaceSmoothedQuadgramModel(tokens, dfforquadperfile)\n\npandas.set_option(\"display.max_rows\", 10)\nlaplacesmoothquaddf = pandas.DataFrame.from_dict(laplacesmoothquadmodel, orient = 'index', columns = ['prob.'])\n\nprint('Number of rows in Laplace Smoothed Quadgram Prob. Model : ', len(laplacesmoothquaddf.index))\nprint(laplacesmoothquaddf)\n\n#Attempt to plot the quadgram language model using first a Counter object\nCOUNTLapSMOOTH4 = Counter(laplacesmoothquadmodel)\ngreatestlaplacesmoothprob4 = 0\nbiglaplacesmoothword4 = ''\n\nfor w in COUNTLapSMOOTH4.keys():\n if COUNTLapSMOOTH4[w] >= greatestlaplacesmoothprob4:\n biglaplacesmoothword4 = w\n greatestlaplacesmoothprob4 = COUNTLapSMOOTH4[w]\n \nprint(f\"the quadgram of greatest freq in the Laplace smoothed quadgram model is: {biglaplacesmoothword4} \\n\")\nMLapS4 = COUNTLapSMOOTH4[biglaplacesmoothword4]\nyscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset in laplace smoothed model and 1/n line.')\n\nplot([c for (w,c) in COUNTLapSMOOTH4.most_common()])\nplot([(MLapS4)/i for i in range(1, len(COUNTLapSMOOTH4)+1)]);\n\n",
"The sum of all the unigram probabiities in the laplace smoothed model needs to be 1 and it is 0.9999999999999991\nNumber of rows in Laplace Smoothed Unigram Prob. Model : 63\n prob.\nthe 0.028169\ncat 0.017606\nnamed 0.014085\nbob 0.017606\nis 0.028169\n... ...\ncats 0.014085\nshould 0.014085\nnot 0.014085\nexist 0.014085\ndogs 0.014085\n\n[63 rows x 1 columns]\nthe unigram of greatest freq in the Laplace smoothed unigram model is: is \n\nNumber of rows in Laplace Smoothed Bigram Prob. Model : 87\n prob.\nthe cat 0.011331\ncat named 0.011331\nnamed bob 0.011331\nbob is 0.014164\nis damn 0.011331\n... ...\nnot exist 0.011331\nexist on 0.011331\nearth dogs 0.011331\ndogs are 0.011331\nare the 0.011331\n\n[87 rows x 1 columns]\nthe bigram of greatest freq in the Laplace smoothed bigram model is: do we \n\nNumber of rows in Laplace Smoothed Trigram Prob. Model : 91\n prob.\nthe cat named 0.01105\ncat named bob 0.01105\nnamed bob is 0.01105\nbob is damn 0.01105\nis damn good 0.01105\n... ...\nexist on earth 0.01105\non earth dogs 0.01105\nearth dogs are 0.01105\ndogs are the 0.01105\nare the best 0.01105\n\n[91 rows x 1 columns]\nthe trigram of greatest freq in the Laplace smoothed trigram model is: do we do \n\nNumber of rows in Laplace Smoothed Quadgram Prob. Model : 91\n prob.\nthe cat named bob 0.011142\ncat named bob is 0.011142\nnamed bob is damn 0.011142\nbob is damn good 0.011142\nis damn good he 0.011142\n... ...\nnot exist on earth 0.011142\nexist on earth dogs 0.011142\non earth dogs are 0.011142\nearth dogs are the 0.011142\ndogs are the best 0.011142\n\n[91 rows x 1 columns]\nthe quadgram of greatest freq in the Laplace smoothed quadgram model is: what do we do \n\n"
],
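The Laplace cells above implement add-one smoothing by adding one to the per-file counts and normalising over an aggregate taken across files. For comparison, the textbook add-one (Laplace) estimate for a bigram conditions on the preceding word and the vocabulary size V; a minimal sketch follows, with the count dictionaries again assumed to be keyed by space-joined n-grams as in ngrams():

```python
def laplace_bigram_prob(w1, w2, unigram_counts, bigram_counts):
    """Add-one (Laplace) estimate of P(w2 | w1): (count(w1 w2) + 1) / (count(w1) + V)."""
    vocab_size = len(unigram_counts)
    return (bigram_counts.get(f"{w1} {w2}", 0) + 1) / (unigram_counts.get(w1, 0) + vocab_size)
```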
[
"ranint = random.randint(0,len(laplacesmoothunimodel)-1)\nprint(ranint)",
"23\n"
],
[
"####***** if you are feeling like generating a random seed for the text:\ni = 0;\nlapcounter = Counter(laplacesmoothunimodel)\nranint = random.randint(0, len(laplacesmoothunimodel)-1)\nfor w in laplacesmoothunimodel.keys():\n if (i == ranint):\n seedword = w\n i = i + 1\nprint(seedword)\n\n####**** to set the seed to one of the most common 10 unigrams:\nseedpossibilities = lapcounter.most_common(10)\nranint = random.randint(0,9)\nseedtuple = seedpossibilities[ranint]\nseedword = seedtuple[0]\nprint(seedword)",
"do\nwords\n"
],
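The cell above selects the seed word by stepping through the Counter keys with an index; drawing uniformly from the ten most frequent unigrams can also be done in one line, reusing the `lapcounter` name from that cell (shown only as an equivalent shorthand):

```python
import random

# Uniform draw from the 10 most frequent unigrams of the Laplace-smoothed model
seedword = random.choice([w for w, _ in lapcounter.most_common(10)])
```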
[
"#### from phase 2, The team kept both ngrams method and newngram method for computing the \n###unigrams, bigrams, trigrams and quadgrams smoothed models....\n###output of newngram is a Counter obj and output of ngrams is a dictionary object...\n\n#newngram outputs to files:\n#the most common unigrams are set to unigramfile.dat\n#the most common bigrams are set to bigramfile.dat\n#the most common trigrams are set to trigramfile.dat\n#the most common quadgrams are set to quadgramfile.dat\n\n#!!!newngram again returns a Counter object \ndef newngram(toks, n):\n output = {} \n for i in range(len(toks) - n + 1):\n g = ' '.join(toks[i:i+n])\n output.setdefault(g, 0)\n output[g] += 1\n COUNTS = Counter(output)\n outputstring = ''\n outputstring = outputstring + str(COUNTS.most_common(3000)) + \" \"\n if n == 1:\n #print(f\"\\n The most common unigrams are: {(COUNTS.most_common(10))}\")\n f=open(\"unigramfile.dat\",\"w+\", encoding='utf-8', errors='replace')\n f.write(str(sum(COUNTS.values())))\n f.write(str(COUNTS.most_common(3000))) #trying to keep file size at about 50 k for this sample\n outputstring = outputstring + str(COUNTS.most_common(3000)) + \" \"\n f.close()\n if n == 2:\n #print(f\"\\n The most common bigrams are: {(COUNTS.most_common(10))}\")\n f=open(\"bigramfile.dat\",\"w+\", encoding='utf-8', errors='replace')\n f.write(str(sum(COUNTS.values())))\n f.write(str(COUNTS.most_common(2700))) #trying to keep file size at about 50 k for this sample\n outputstring = outputstring + str(COUNTS.most_common(2700)) + \" \"\n f.close()\n if n == 3:\n #print(f\"\\n The most common trigrams are: {(COUNTS.most_common(10))}\")\n f=open(\"trigramfile.dat\",\"w+\", encoding='utf-8', errors='replace')\n f.write(str(sum(COUNTS.values())))\n f.write(str(COUNTS.most_common(2300))) #trying to keep file size at about 50 k for this sample\n outputstring = outputstring + str(COUNTS.most_common(2300)) + \" \"\n f.close()\n if n == 4:\n #print(f\"\\n The most common quadgrams are: {(COUNTS.most_common(10))}\")\n f=open(\"quadgramfile.dat\",\"w+\", encoding='utf-8', errors='replace')\n f.write(str(sum(COUNTS.values())))\n f.write(str(COUNTS.most_common(2100))) #trying to keep file size at about 50 k for this sample\n outputstring = outputstring + str(COUNTS.most_common(2100)) + \" \"\n f.close()\n \n return output\n\n###!!!! THESE COUNTS WILL BE USED IN THE TEXT GENERATION METHOD BELOW...THE PHASE 3 LAPLACE SMOOTH\n###!!! 
MODELS ARE CREATED IMPLICITLY WITHIN THE GENERATING TEXT MODULE\nnewunigrams = newngram(tokens, 1)\nprint(unigrams)\nnewbigrams = newngram(tokens, 2)\n#print(bigrams)\nnewtrigrams = newngram(tokens, 3)\n#print(trigrams)\nnewquadgrams = newngram(tokens, 4)\n#print(quadgrams)\n\n####piecing together the development from when we posted the most common unigrams, bigrams\n####trigrams and quadgrams to the files..now we are able to use them in this application \n####to generate text....\n\nunigramfile=\"unigramfile.dat\"\nbigramfile=\"bigramfile.dat\"\ntrigramfile=\"trigramfile.dat\"\nquadgramfile=\"quadgramfile.dat\"\n\nwith open(unigramfile, 'rb', 0) as file, \\\n mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text:\n unigramtotal= text.read(text.find(b'[')).decode('utf-8')\n unigrams= text.read(text.find(b']')).decode('utf-8')\n\nwith open(bigramfile, 'rb', 0) as file, \\\n mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text:\n bigramtotal= text.read(text.find(b'[')).decode('utf-8')\n bigrams= text.read(text.find(b']')).decode('utf-8')\n\nwith open(trigramfile, 'rb', 0) as file, \\\n mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text:\n trigramtotal= text.read(text.find(b'[')).decode('utf-8')\n trigrams= text.read(text.find(b']')).decode('utf-8')\n \nwith open(quadgramfile, 'rb', 0) as file, \\\n mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text:\n quadgramtotal= text.read(text.find(b'[')).decode('utf-8')\n quadgrams= text.read(text.find(b']')).decode('utf-8')\n\nwords=unigrams.replace('[','').replace(']','').replace('(','').replace(')','').split(',')\nunigramrange=0\nfor w in range(1,len(words)-1,2):\n unigramrange +=int(words[w])",
"_____no_output_____"
],
[
"####****Finally Phase 4! Here is the geneartion method. In the cell above,\n####****you created a seedword for generating the text...either from the most frequent 10\n####****unigramsstring or a random unigram...\n\ndef generate(seedtext, length):\n \n if length==0:\n output=''\n print(\"Scroll down for the final result. Here is the process used:\\n\")\n for gword in range(1,length+1):\n if gword > 3: # use quadgram model\n\n \n print(f\"Searching quadigrams for '{currenttrigram}'.\") \n quadgramoccurance = []\n quadgramoccurance.append(quadgrams.find(\"'\" + currenttrigram + ' '))\n if quadgramoccurance[0] > -1:\n possiblequadgram = []\n possiblequadfrequency = []\n possiblequadtotalfreq = 0\n n = 0\n possiblequadgram.append(quadgrams[quadgramoccurance[n]+len(currenttrigram)+2:quadgrams.find(\"'\",quadgramoccurance[n]+len(currenttrigram)+2)])\n try:\n possiblequadfrequency.append(int(quadgrams[quadgrams.find(\"', \",quadgramoccurance[n])+3:quadgrams.find(\")\",quadgramoccurance[n])]))\n possiblequadtotalfreq += possiblequadfrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n quadgramoccurance.append(quadgrams.find(\"'\" + currenttrigram + ' ', quadgramoccurance[n-1]+1))\n \n if quadgramoccurance[n] == -1: break\n possiblequadgram.append(quadgrams[quadgramoccurance[n]+len(currenttrigram)+2:quadgrams.find(\"'\",quadgramoccurance[n]+len(currenttrigram)+2)])\n try:\n possiblequadfrequency.append(int(quadgrams[quadgrams.find(\"', \",quadgramoccurance[n])+3:quadgrams.find(\")\",quadgramoccurance[n])]))\n possiblequadtotalfreq += possiblequadfrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possiblequadtotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possiblequadfrequency[w]\n if look < 0:\n nextword = possiblequadgram[w]\n break\n print(f\" Out of {possiblequadtotalfreq} occurances in the quadgram model the following word:\")\n for w in range(0,n):\n print(f\" '{possiblequadgram[w]}' appeared {possiblequadfrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n print(f\" Not found. 
Searching trigrams for '{currentbigram}'.\") \n trigramoccurance = []\n trigramoccurance.append(trigrams.find(\"'\" + currentbigram + ' '))\n if trigramoccurance[0] > -1:\n possibletrigram = []\n possibletrifrequency = []\n possibletritotalfreq = 0\n n = 0\n possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find(\"'\",trigramoccurance[n]+len(currentbigram)+2)])\n try:\n possibletrifrequency.append(int(trigrams[trigrams.find(\"', \",trigramoccurance[n])+3:trigrams.find(\")\",trigramoccurance[n])]))\n possibletritotalfreq += possibletrifrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n trigramoccurance.append(trigrams.find(\"'\" + currentbigram + ' ', trigramoccurance[n-1]+1))\n if trigramoccurance[n] == -1: break\n possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find(\"'\",trigramoccurance[n]+len(currentbigram)+2)])\n try:\n possibletrifrequency.append(int(trigrams[trigrams.find(\"', \",trigramoccurance[n])+3:trigrams.find(\")\",trigramoccurance[n])]))\n possibletritotalfreq += possibletrifrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possibletritotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possibletrifrequency[w]\n if look < 0:\n nextword = possibletrigram[w]\n break\n print(f\" Out of {possibletritotalfreq} occurances in the trigram model the following word:\")\n for w in range(0,n):\n print(f\" '{possibletrigram[w]}' appeared {possibletrifrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n print(f\" Not found. Searching bigrams for '{currentword}'.\") \n bigramoccurance = []\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' '))\n if bigramoccurance[0] > -1:\n possiblebigram = []\n possiblebifrequency = []\n possiblebitotalfreq = 0\n n = 0\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' ', bigramoccurance[n-1]+1))\n nextword = bigrams[bigrams.find(\"'\" + currentword + ' ')+len(currentword)+2:bigrams.find(\"'\",bigrams.find(currentword + ' '))]\n if bigramoccurance[n] == -1: break\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possiblebitotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possiblebifrequency[w]\n if look < 0:\n nextword = possiblebigram[w]\n break\n print(f\" Out of {possiblebitotalfreq} occurances in the bigram model the following word:\")\n for w in range(0,n):\n print(f\" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n rand=random.randint(0,unigramrange)\n look = rand\n for w in range(1,len(words)-1,2):\n look = look - int(words[w])\n if look < 0:\n nextword = words[w-1][2:-1]\n break\n print(f\" Not found. 
We randomly choose '{nextword}'.\")\n pastword =currentword\n currentword=nextword\n currentquadgram=currenttrigram + ' ' + currentword\n currenttrigram=currentbigram + ' ' + currentword\n currentbigram=pastword + ' ' + currentword\n output+=' ' + currentword \n \n \n else:\n if gword == 3: # use trigram model\n print(f\" Searching trigrams for '{currentbigram}'.\") \n trigramoccurance = []\n trigramoccurance.append(trigrams.find(\"'\" + currentbigram + ' '))\n if trigramoccurance[0] > -1:\n possibletrigram = []\n possibletrifrequency = []\n possibletritotalfreq = 0\n n = 0\n possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find(\"'\",trigramoccurance[n]+len(currentbigram)+2)])\n try:\n possibletrifrequency.append(int(trigrams[trigrams.find(\"', \",trigramoccurance[n])+3:trigrams.find(\")\",trigramoccurance[n])]))\n possibletritotalfreq += possibletrifrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n trigramoccurance.append(trigrams.find(\"'\" + currentbigram + ' ', trigramoccurance[n-1]+1))\n if trigramoccurance[n] == -1: break\n possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find(\"'\",trigramoccurance[n]+len(currentbigram)+2)])\n try:\n possibletrifrequency.append(int(trigrams[trigrams.find(\"', \",trigramoccurance[n])+3:trigrams.find(\")\",trigramoccurance[n])]))\n possibletritotalfreq += possibletrifrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possibletritotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possibletrifrequency[w]\n if look < 0:\n nextword = possibletrigram[w]\n break\n print(f\" Out of {possibletritotalfreq} occurances in the trigram model the following word:\")\n for w in range(0,n):\n print(f\" '{possibletrigram[w]}' appeared {possibletrifrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n print(f\" Not found. 
Searching bigrams for '{currentword}'.\") \n bigramoccurance = []\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' '))\n if bigramoccurance[0] > -1:\n possiblebigram = []\n possiblebifrequency = []\n possiblebitotalfreq = 0\n n = 0\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' ', bigramoccurance[n-1]+1))\n \n if bigramoccurance[n] == -1: break\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possiblebitotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possiblebifrequency[w]\n if look < 0:\n nextword = possiblebigram[w]\n break\n print(f\" Out of {possiblebitotalfreq} occurances in the bigram model the following word:\")\n for w in range(0,n):\n print(f\" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n rand=random.randint(0,unigramrange)\n look = rand\n for w in range(1,len(words)-1,2):\n look = look - int(words[w])\n if look < 0:\n nextword = words[w-1][2:-1]\n break\n print(f\" Not found. We randomly choose '{nextword}'.\")\n pastword =currentword\n currentword=nextword\n currenttrigram=currentbigram + ' ' + currentword\n currentbigram=pastword + ' ' + currentword\n output+=' ' + currentword \n elif gword == 2: # use bigram model\n print(f\" Searching bigrams for '{currentword}'.\") \n bigramoccurance = []\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' '))\n if bigramoccurance[0] > -1:\n possiblebigram = []\n possiblebifrequency = []\n possiblebitotalfreq = 0\n n = 0\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n n += 1\n while True:\n bigramoccurance.append(bigrams.find(\"'\" + currentword + ' ', bigramoccurance[n-1]+1))\n if bigramoccurance[n] == -1: break\n possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find(\"'\",bigramoccurance[n]+len(currentword)+2)])\n try:\n possiblebifrequency.append(int(bigrams[bigrams.find(\"', \",bigramoccurance[n])+3:bigrams.find(\")\",bigramoccurance[n])]))\n possiblebitotalfreq += possiblebifrequency[n]\n except:\n print(\"Error\")\n break\n n += 1\n rand=random.randint(0,possiblebitotalfreq)\n look = rand\n for w in range(0,n):\n look = look - possiblebifrequency[w]\n if look < 0:\n nextword = possiblebigram[w]\n break\n print(f\" Out of {possiblebitotalfreq} occurances in the bigram model the following word:\")\n for w in range(0,n):\n print(f\" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,\")\n print(f\" From the {n} possibilities, we randomly chose '{nextword}'.\") \n else:\n 
rand=random.randint(0,unigramrange)\n look = rand\n for w in range(1,len(words)-1,2):\n look = look - int(words[w])\n if look < 0:\n nextword = words[w-1][2:-1]\n break\n print(f\" Not found. We randomly choose '{nextword}'.\")\n pastword =currentword\n currentword=nextword\n currentbigram=pastword + ' ' + currentword\n output+=' ' + currentword\n\n elif gword == 1: # check seedtext\n for char in range(0,len(seedtext)):\n maybe = seedtext[:len(seedtext)-char]\n print(f\" Searching unigrams for '{maybe}'.\") \n if unigrams.find(maybe) > -1:\n if unigrams[unigrams.find(maybe)-1] == \"'\":\n print(\" Found the word (or a word that starts with it).\")\n currentword = unigrams[unigrams.find(maybe):unigrams.find(\"'\",unigrams.find(maybe))]\n break\n print(\" Not found. Dropping a letter\")\n currentword = \"\" \n if currentword == \"\":\n rand=random.randint(0,unigramrange)\n look = rand\n for w in range(1,len(words)-1,2):\n look = look - int(words[w])\n if look < 0:\n currentword = words[w-1][2:-1] \n break\n print(f\" We randomly choose '{currentword}'.\")\n output=currentword\n \n print(f\"\\n\\n Given '{seedtext}', our initial model generates the following {length} words:\\n\\n{output.replace(' .','.')}\")\n",
"_____no_output_____"
],
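The `generate` function above backs off from quadgrams to trigrams to bigrams to unigrams by substring-searching the serialized .dat strings. The same backoff idea expressed directly over the in-memory dictionaries returned by `newngram` is sketched below; the function name and the use of raw counts as sampling weights are illustrative choices, not the notebook's exact procedure:

```python
import random
from collections import Counter

def backoff_next_word(history, quad, tri, bi, uni):
    """Sample the next word, falling back from longer to shorter contexts."""
    for order, counts in ((3, quad), (2, tri), (1, bi)):
        if len(history) < order:
            continue
        context = ' '.join(history[-order:]) + ' '
        candidates = Counter()
        for gram, count in counts.items():
            if gram.startswith(context):
                candidates[gram.split()[-1]] += count
        if candidates:
            words, weights = zip(*candidates.items())
            return random.choices(words, weights=weights)[0]
    # No longer context matched: draw from the unigram distribution.
    words, weights = zip(*uni.items())
    return random.choices(words, weights=weights)[0]
```

With `history` holding the last few generated words, `backoff_next_word(history, newquadgrams, newtrigrams, newbigrams, newunigrams)` would return one sampled continuation per call.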
[
"generate(seedword, 100)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c8f6346436bee3f21f91f0e1135543993fa8ad | 7,123 | ipynb | Jupyter Notebook | samples/notebooks/fsharp/Docs/Object Formatters.ipynb | haraldsteinlechner/interactive | 0fb8d88fc6a400e0f0507629067a32c8d3724a8d | [
"MIT"
] | 2 | 2020-07-25T20:10:29.000Z | 2020-07-26T18:23:30.000Z | samples/notebooks/fsharp/Docs/Object Formatters.ipynb | Keboo/interactive | fb89048f73d2cb66505b090c8f55bb8b97b863b3 | [
"MIT"
] | null | null | null | samples/notebooks/fsharp/Docs/Object Formatters.ipynb | Keboo/interactive | fb89048f73d2cb66505b090c8f55bb8b97b863b3 | [
"MIT"
] | null | null | null | 30.835498 | 371 | 0.578689 | [
[
[
"[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/fsharp/Docs)\n\n# Object formatters",
"_____no_output_____"
],
[
"## Default formatting behaviors",
"_____no_output_____"
],
[
"When you return a value or a display a value in a .NET notebook, the default formatting behavior is to try to provide some useful information about the object. If it's an array or other type implementing `IEnumerable`, that might look like this:",
"_____no_output_____"
]
],
[
[
"display [\"hello\"; \"world\"]\n\nEnumerable.Range(1, 5)",
"_____no_output_____"
]
],
[
[
"As you can see, the same basic structure is used whether you pass the object to the `display` method or return it as the cell's value.\n\nSimilarly to the behavior for `IEnumerable` objects, you'll also see table output for dictionaries, but for each value in the dictionary, the key is provided rather than the index within the collection.",
"_____no_output_____"
]
],
[
[
"// Cannot simply use 'dict' here, see https://github.com/dotnet/interactive/issues/12\n\nlet d = dict [(\"zero\", 0); (\"one\", 1); (\"two\", 2)]\nSystem.Collections.Generic.Dictionary<string, int>(d)",
"_____no_output_____"
]
],
[
[
"The default formatting behavior for other types of objects is to produce a table showing their properties and the values of those properties.",
"_____no_output_____"
]
],
[
[
"type Person = { FirstName: string; LastName: string; Age: int }\n\n// Evaluate a new person\n{ FirstName = \"Mitch\"; LastName = \"Buchannon\"; Age = 42 }",
"_____no_output_____"
]
],
[
[
"When you have a collection of such objects, you can see the values listed for each item in the collection:",
"_____no_output_____"
]
],
[
[
"let people =\n [\n { FirstName = \"Mitch\"; LastName = \"Buchannon\"; Age = 42 }\n { FirstName = \"Hobie \"; LastName = \"Buchannon\"; Age = 23 }\n { FirstName = \"Summer\"; LastName = \"Quinn\"; Age = 25 }\n { FirstName = \"C.J.\"; LastName = \"Parker\"; Age = 23 }\n ]\n\npeople",
"_____no_output_____"
]
],
[
[
"Now let's try something a bit more complex. Let's look at a graph of objects. \n\nWe'll redefine the `Person` class to allow a reference to a collection of other `Person` instances.",
"_____no_output_____"
]
],
[
[
"type Person =\n { FirstName: string\n LastName: string\n Age: int\n Friends: ResizeArray<Person> }\n\nlet mitch = { FirstName = \"Mitch\"; LastName = \"Buchannon\"; Age = 42; Friends = ResizeArray() }\nlet hobie = { FirstName = \"Hobie \"; LastName = \"Buchannon\"; Age = 23; Friends = ResizeArray() }\nlet summer = { FirstName = \"Summer\"; LastName = \"Quinn\"; Age = 25; Friends = ResizeArray() }\n\nmitch.Friends.AddRange([ hobie; summer ])\nhobie.Friends.AddRange([ mitch; summer ])\nsummer.Friends.AddRange([ mitch; hobie ])\n\nlet people = [ mitch; hobie; summer ]\ndisplay people",
"_____no_output_____"
]
],
[
[
"That's a bit hard to read, right? \n\nThe defaut formatting behaviors are thorough, but that doesn't always mean they're as useful as they might be. In order to give you more control in these kinds of cases, the object formatters can be customized from within the .NET notebook.",
"_____no_output_____"
],
[
"## Custom formatters",
"_____no_output_____"
],
[
"Let's clean up the output above by customizing the formatter for the `Person.Friends` property, which is creating a lot of noise. \n\nThe way to do this is to use the `Formatter` API. This API lets you customize the formatting for a specific type. Since `Person.Friends` is of type `ResizeArray<Person>`, we can register a custom formatter for that type to change the output. Let's just list their first names:",
"_____no_output_____"
]
],
[
[
"Formatter<ResizeArray<Person>>.Register(\n fun people writer ->\n for person in people do\n writer.Write(\"person\")\n , mimeType = \"text/plain\")\n\npeople",
"_____no_output_____"
]
],
[
[
"You might have noticed that `people` is of type `ResizeArray<Person>`, but the table output still includes columns for `LastName`, `Age`, and `Friends`. What's going on here?\n\nNotice that the custom formatter we just registered was registered for the mime type `\"text/plain\"`. The top-level formatter that's used when we call `display` requests output of mime type `\"text/html\"` and the nested objects are formatted using `\"text/plain\"`. It's the nested objects, not the top-level HTML table, that's using the custom formatter here.\n\nWith that in mind, we can make it even more concise by registering a formatter for `Person`:",
"_____no_output_____"
]
],
[
[
"Formatter<Person>.Register(\n fun person writer ->\n writer.Write(person.FirstName)\n , mimeType = \"text/plain\");\n\npeople",
"_____no_output_____"
]
],
[
[
"Of course, you might not want table output. To replace the default HTML table view, you can register a formatter for the `\"text/html\"` mime type. Let's do that, and write some HTML using PocketView.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c8f8347498f93e0eb22a886e9988e934b0097d | 139,909 | ipynb | Jupyter Notebook | labs/lab_06.ipynb | maryonmorales/MAT-281_MaryonMorales | 2abf33b6592bf77ee3153ef871c46a73a190f710 | [
"MIT"
] | null | null | null | labs/lab_06.ipynb | maryonmorales/MAT-281_MaryonMorales | 2abf33b6592bf77ee3153ef871c46a73a190f710 | [
"MIT"
] | null | null | null | labs/lab_06.ipynb | maryonmorales/MAT-281_MaryonMorales | 2abf33b6592bf77ee3153ef871c46a73a190f710 | [
"MIT"
] | null | null | null | 130.512127 | 36,696 | 0.834778 | [
[
[
"# MAT281 - Laboratorio N°06\n\n",
"_____no_output_____"
],
[
"## Problema 01\n<img src=\"./images/logo_iris.jpg\" width=\"360\" height=\"360\" align=\"center\"/>",
"_____no_output_____"
],
[
"El **Iris dataset** es un conjunto de datos que contine una muestras de tres especies de Iris (Iris setosa, Iris virginica e Iris versicolor). Se midió cuatro rasgos de cada muestra: el largo y ancho del sépalo y pétalo, en centímetros.\n\nLo primero es cargar el conjunto de datos y ver las primeras filas que lo componen:",
"_____no_output_____"
]
],
[
[
"# librerias\n \nimport os\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns \npd.set_option('display.max_columns', 500) # Ver más columnas de los dataframes\n\n\n# Ver gráficos de matplotlib en jupyter notebook/lab\n%matplotlib inline",
"_____no_output_____"
],
[
"# cargar datos\ndf = pd.read_csv(os.path.join(\"data\",\"iris_contaminados.csv\"))\ndf.columns = ['sepalLength',\n 'sepalWidth',\n 'petalLength',\n 'petalWidth',\n 'species']\n\ndf.head() ",
"_____no_output_____"
]
],
[
[
"### Bases del experimento\n\nLo primero es identificar las variables que influyen en el estudio y la naturaleza de esta.\n\n* **species**: \n * Descripción: Nombre de la especie de Iris. \n * Tipo de dato: *string*\n * Limitantes: solo existen tres tipos (setosa, virginia y versicolor).\n* **sepalLength**: \n * Descripción: largo del sépalo. \n * Tipo de dato: *integer*. \n * Limitantes: los valores se encuentran entre 4.0 y 7.0 cm.\n* **sepalWidth**: \n * Descripción: ancho del sépalo. \n * Tipo de dato: *integer*. \n * Limitantes: los valores se encuentran entre 2.0 y 4.5 cm.\n* **petalLength**: \n * Descripción: largo del pétalo. \n * Tipo de dato: *integer*. \n * Limitantes: los valores se encuentran entre 1.0 y 7.0 cm.\n* **petalWidth**: \n * Descripción: ancho del pépalo. \n * Tipo de dato: *integer*. \n * Limitantes: los valores se encuentran entre 0.1 y 2.5 cm.",
"_____no_output_____"
],
[
"Su objetivo es realizar un correcto **E.D.A.**, para esto debe seguir las siguientes intrucciones:",
"_____no_output_____"
],
[
"1. Realizar un conteo de elementos de la columna **species** y corregir según su criterio. Reemplace por \"default\" los valores nan..",
"_____no_output_____"
]
],
[
[
"df['species'].fillna('default',inplace=True)\ndf",
"_____no_output_____"
],
[
"df['species'].value_counts()",
"_____no_output_____"
],
[
"#Veamos los valores que puede tomar species\ndf['species'].value_counts().index",
"_____no_output_____"
],
[
"#Dejamos todo sin espacios ni mayúscula\ndf['species'].astype('str')\ndf['species'] = df['species'].str.lower().str.strip()\ndf['species'].value_counts().index",
"_____no_output_____"
]
],
[
[
"2. Realizar un gráfico de box-plot sobre el largo y ancho de los petalos y sépalos. Reemplace por **0** los valores nan.",
"_____no_output_____"
]
],
[
[
"df['sepalLength'].fillna(0,inplace=True)\ndf['sepalWidth'].fillna(0,inplace=True)\ndf['petalLength'].fillna(0,inplace=True)\ndf['petalWidth'].fillna(0,inplace=True)\n\ndf_1=df.drop(['species'], axis=1)\n\nsns.boxplot(data=df_1)",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"3. Anteriormente se define un rango de valores válidos para los valores del largo y ancho de los petalos y sépalos. Agregue una columna denominada **label** que identifique cuál de estos valores esta fuera del rango de valores válidos.",
"_____no_output_____"
]
],
[
[
"lista_label = []\n\nfor i in range(len(df)):\n if df['sepalLength'][i]<4.0 or df['sepalLength'][i]>7.0:\n lista_label.append(\"sepalLength\")\n \n elif df['sepalWidth'][i]<2.0 or df['sepalWidth'][i]>4.5:\n lista_label.append(\"sepalWidth\")\n \n elif df['petalLength'][i]<1.0 or df['petalLength'][i]>7.0:\n lista_label.append(\"petalLength\")\n \n elif df['petalWidth'][i]<0.1 or df['petalWidth'][i]>2.5:\n lista_label.append(\"petalWidth\")\n \n else: \n lista_label.append('Dentro de los rangos')\n \ndf['label']=lista_label\n\ndf",
"_____no_output_____"
]
],
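[
[
"*(Added note, not part of the original lab.)* The loop above works; a vectorized pandas/numpy alternative is sketched below. It uses the same valid ranges and writes to a new, hypothetical `label_vec` column so the original `label` column stays untouched; the default string 'Dentro de los rangos' is kept to match the original label values.",
"_____no_output_____"
]
],
[
[
"# Hypothetical vectorized alternative (illustration only, not part of the original solution).\nimport numpy as np\n\nconditions = [\n    ~df['sepalLength'].between(4.0, 7.0),\n    ~df['sepalWidth'].between(2.0, 4.5),\n    ~df['petalLength'].between(1.0, 7.0),\n    ~df['petalWidth'].between(0.1, 2.5),\n]\nchoices = ['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth']\n# np.select picks the first condition that is True, mirroring the elif order of the loop above\ndf['label_vec'] = np.select(conditions, choices, default='Dentro de los rangos')\ndf[['label', 'label_vec']].head()",
"_____no_output_____"
]
],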
[
[
"4. Realice un gráfico de *sepalLength* vs *petalLength* y otro de *sepalWidth* vs *petalWidth* categorizados por la etiqueta **label**. Concluya sus resultados.",
"_____no_output_____"
]
],
[
[
"# tamano del grafico\nplt.figure(figsize=(10, 5)) \n\n# graficar\nsns.scatterplot(\n x='petalLength',\n y='sepalLength',\n data=df,\n hue='label',\n \n)",
"_____no_output_____"
],
[
"# tamano del grafico\nplt.figure(figsize=(10, 5)) \n\n# graficar\nsns.scatterplot(\n x='petalWidth',\n y='sepalWidth',\n data=df,\n hue='label',\n \n)",
"_____no_output_____"
]
],
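[
[
"*(Added cross-check, not required by the lab.)* Counting the values of the `label` column quantifies how many rows violate each range, which backs up the visual impression from the two scatter plots above.",
"_____no_output_____"
]
],
[
[
"# count how many rows violate each range; rows inside all ranges are labelled 'Dentro de los rangos'\ndf['label'].value_counts()",
"_____no_output_____"
]
],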
[
[
"Vemos en ambos gráficos que los largos y anchos de los sépalos y los pétalos están en su mayoría dentro de los rangos, además podemos observar que la columna sepalLength es la que mas errada está.",
"_____no_output_____"
],
[
"5. Filtre los datos válidos y realice un gráfico de *sepalLength* vs *petalLength* categorizados por la etiqueta **species**.",
"_____no_output_____"
]
],
[
[
"mask_sl = df['sepalLength']<=7.0\nmask_sl1 = df['sepalLength']>=4.0\n\nmask_sw = df['sepalWidth']<=4.5\nmask_sw1 = df['sepalWidth']>=2.0\n\nmask_pl = df['petalLength']<=7.0\nmask_pl1 = df['petalLength']>=1.0\n\nmask_pw = df['petalWidth']<=2.5\nmask_pw1 = df['petalWidth']>=0.1",
"_____no_output_____"
],
[
"df_filtrado = df[mask_sl & mask_sw & mask_pl & mask_pw & mask_sl1 & mask_sw1 & mask_pl1 & mask_pw1]",
"_____no_output_____"
],
[
"df_filtrado",
"_____no_output_____"
],
[
"# tamano del grafico\nplt.figure(figsize=(10, 5)) \n\n# graficar\nsns.scatterplot(\n x='petalLength',\n y='sepalLength',\n data=df_filtrado,\n hue='species',\n \n)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c8fc1417a0024f510d3231048443a4e1850a0e | 87,953 | ipynb | Jupyter Notebook | src/budget-text-analysis-Durham-County.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | null | null | null | src/budget-text-analysis-Durham-County.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | 13 | 2019-09-24T14:32:26.000Z | 2019-12-12T02:16:03.000Z | src/budget-text-analysis-Durham-County.ipynb | naseebth/Budget_Text_Analysis | cac0210b8b4b998fe798da92a9bbdd10eb1c4773 | [
"MIT"
] | 2 | 2020-01-04T07:32:56.000Z | 2020-09-16T07:20:09.000Z | 81.816744 | 51,294 | 0.561777 | [
[
[
"import pandas as pd\nimport nltk\nimport spacy\nimport gensim\nimport seaborn as sb\nfrom gensim import corpora, models, similarities\nfrom spacy.lang.en import English\nfrom nltk.corpus import wordnet as wn",
"_____no_output_____"
],
[
"data=pd.read_csv(\"DurhamCountyOriginalDataFY20.csv\",)",
"_____no_output_____"
],
[
"data.head()",
"_____no_output_____"
],
[
"data.drop(data.columns[0], axis=1)",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
[
"#cleaning \nspacy.load('en')\nparser=English()\ndef tokenize(text):\n lda_tokens = []\n tokens = parser(text)\n for token in tokens:\n if token.orth_.isspace():\n continue\n elif token.orth_.startswith('http'):\n lda_tokens.append('com')\n else:\n lda_tokens.append(token.lower_)\n return lda_tokens",
"_____no_output_____"
],
[
"nltk.download('wordnet')",
"[nltk_data] Downloading package wordnet to\n[nltk_data] C:\\Users\\messi\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n"
],
[
"def get_lemma(word):\n lemma=wn.morphy(word)\n if lemma is None:\n return word\n else:\n return lemma",
"_____no_output_____"
],
[
"from nltk.stem.wordnet import WordNetLemmatizer\ndef get_lemma2(word):\n return WordNetLemmatizer().lemmatize(word)",
"_____no_output_____"
],
[
"#filtering stopwords\nnltk.download('stopwords')\nstop=set(nltk.corpus.stopwords.words('english'))",
"[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\messi\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
],
[
"#preparing for lda\ndef prepare_lda(text):\n tokens=tokenize(text)\n tokens=[token for token in tokens if len(token)>4]\n tokens=[token for token in tokens if token not in stop]\n tokens=[get_lemma(token) for token in tokens]\n return tokens",
"_____no_output_____"
],
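[
"# Added illustration (not in the original analysis): run the pipeline on a made-up sentence\n# to see what prepare_lda keeps after tokenizing, dropping short words and stop words, and lemmatizing.\n# Assumes the cells above (tokenize, get_lemma, stop, prepare_lda) have been executed.\nexample_text = \"The county commissioners approved additional funding for public safety programs\"\nprepare_lda(example_text)",
"_____no_output_____"
],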
[
"#creating tokens\nimport random\ntext_data=[]\nwith open('DurhamCountyOriginalDataFY20.csv') as f:\n for line in f:\n tokens=prepare_lda(line)\n if random.random()>.99:\n print(tokens)\n text_data.append(tokens)",
"['66\",2,\"jay']\n['224\",3,\"register']\n['373\",4,\"officials']\n['569\",4,\"is']\n['577\",4,\"was']\n['607\",4,\"high']\n['623\",4,\"county']\n['628\",4,\"of']\n['705\",4,\"as']\n['833\",4,\"the']\n['894\",4,\"resources']\n['914\",4,\"for']\n['1158\",6,\"united']\n['1221\",7,\"shows']\n['1242\",7,\"funds']\n['1252\",7,\"is']\n['1272\",7,\"include']\n['1294\",7,\"of']\n['1307\",7,\"a']\n['1328\",7,\"show']\n['1459\",7,\"an']\n['1593\",7,\"adequate']\n['1711\",7,\"employees']\n['1867\",8,\"the']\n['2046\",8,\"capital']\n['2392\",9,\"fee']\n['2567\",10,\"mental']\n['2705\",11,\"fund']\n['2740\",12,\"debt']\n['2769\",12,\"contacts']\n['2838\",13,\"a']\n['2846\",13,\"a']\n['2875\",13,\"our']\n['2911\",14,\"county']\n['3166\",14,\"in']\n['3206\",14,\"developing']\n['3310\",14,\"in']\n['3361\",14,\"matter']\n['3399\",15,\"tax']\n['3427\",15,\"cents']\n['3456\",15,\"is']\n['3485\",15,\"neutral']\n['3533\",15,\"new']\n['3608\",15,\"in']\n['3649\",15,\"challenge']\n['3678\",15,\"consumed']\n['3782\",15,\"county']\n['3887\",16,\"in']\n['3892\",16,\"state']\n['3894\",16,\"this']\n['4153\",16,\"our']\n['4187\",16,\"october']\n['4198\",16,\"of']\n['4318\",16,\"a']\n['4445\",17,\"earlier']\n['4456\",17,\"fy']\n['4457\",17,\"was']\n['4542\",17,\"growth']\n['4823\",18,\"tax']\n['4871\",18,\"while']\n['4896\",18,\"cent']\n['4952\",19,\"fy']\n['4972\",19,\"of']\n['5031\",19,\"of']\n['5127\",19,\"available']\n['5205\",19,\"of']\n['5289\",19,\"durham']\n['5368\",20,\"plan']\n['5391\",20,\"budget']\n['5401\",20,\"by']\n['5455\",20,\"earlier']\n['5508\",20,\"million']\n['5540\",20,\"goal']\n['5560\",20,\"support']\n['5563\",20,\"opening']\n['5807\",21,\"durham']\n['5810\",21,\"college']\n['5869\",21,\"increase']\n['5980\",21,\"with']\n['6183\",21,\"several']\n['6238\",21,\"of']\n['6422\",22,\"of']\n['6494\",22,\"a']\n['6650\",22,\"children']\n['6662\",22,\"in']\n['6768\",22,\"appropriated']\n['6848\",22,\"with']\n['7205\",23,\"i']\n['7212\",23,\"better']\n['7262\",23,\"childhood']\n['7693\",24,\"durham']\n['7794\",24,\"manager']\n['7963\",24,\"alliance']\n['8070\",25,\"center']\n['8201\",25,\"innovative']\n['8462\",25,\"assist']\n['8487\",25,\"safety']\n['8524\",25,\"coverage']\n['8666\",26,\"this']\n['8705\",26,\"in']\n['8835\",26,\"base']\n['9022\",26,\"service']\n['9029\",26,\"teams']\n['9100\",26,\"work']\n['9228\",26,\"the']\n['9349\",27,\"safety']\n['9436\",27,\"continued']\n['9445\",27,\"utility']\n['9655\",27,\"with']\n['9787\",27,\"government']\n['9817\",27,\"operational']\n['10032\",28,\"for']\n['10066\",28,\"government']\n['10397\",28,\"driver']\n['10406\",28,\"this']\n['10520\",29,\"and']\n['10546\",29,\"and']\n['10603\",29,\"by']\n['10746\",29,\"moved']\n['10794\",29,\"influence']\n['10972\",30,\"our']\n['10998\",30,\"is']\n"
],
[
"#creating dictionary\nfrom gensim import corpora\ndictionary=corpora.Dictionary(text_data)\ncorpus= [dictionary.doc2bow(text) for text in text_data]\nimport pickle \npickle.dump(corpus, open('corpus.pkl', 'wb'))",
"_____no_output_____"
],
[
"#example\nldamodel=gensim.models.ldamodel.LdaModel(corpus, num_topics=10, id2word=dictionary, passes=6)\nfor idx, topic in ldamodel.print_topics(-1):\n print('Topic: {} word: {}'.format(idx, topic))",
"Topic: 0 word: 0.052*\"9655\",27,\"with\" + 0.052*\"7963\",24,\"alliance\" + 0.052*\"5807\",21,\"durham\" + 0.052*\"833\",4,\"the\" + 0.052*\"2392\",9,\"fee\" + 0.052*\"1242\",7,\"funds\" + 0.052*\"6848\",22,\"with\" + 0.052*\"6650\",22,\"children\" + 0.052*\"7212\",23,\"better\" + 0.005*\"4187\",16,\"october\"\nTopic: 1 word: 0.050*\"3782\",15,\"county\" + 0.050*\"10603\",29,\"by\" + 0.050*\"2567\",10,\"mental\" + 0.050*\"9817\",27,\"operational\" + 0.050*\"4823\",18,\"tax\" + 0.050*\"5540\",20,\"goal\" + 0.050*\"2838\",13,\"a\" + 0.050*\"5560\",20,\"support\" + 0.050*\"569\",4,\"is\" + 0.050*\"1593\",7,\"adequate\"\nTopic: 2 word: 0.040*\"7794\",24,\"manager\" + 0.040*\"5031\",19,\"of\" + 0.040*\"1221\",7,\"shows\" + 0.040*\"2911\",14,\"county\" + 0.040*\"10546\",29,\"and\" + 0.040*\"5368\",20,\"plan\" + 0.040*\"2046\",8,\"capital\" + 0.040*\"8070\",25,\"center\" + 0.040*\"705\",4,\"as\" + 0.040*\"6494\",22,\"a\"\nTopic: 3 word: 0.040*\"10998\",30,\"is\" + 0.040*\"5455\",20,\"earlier\" + 0.040*\"224\",3,\"register\" + 0.040*\"1459\",7,\"an\" + 0.040*\"9787\",27,\"government\" + 0.040*\"5205\",19,\"of\" + 0.040*\"4972\",19,\"of\" + 0.040*\"1711\",7,\"employees\" + 0.040*\"1328\",7,\"show\" + 0.040*\"3166\",14,\"in\"\nTopic: 4 word: 0.044*\"7205\",23,\"i\" + 0.044*\"9436\",27,\"continued\" + 0.044*\"3678\",15,\"consumed\" + 0.044*\"8201\",25,\"innovative\" + 0.044*\"9100\",26,\"work\" + 0.044*\"5869\",21,\"increase\" + 0.044*\"4457\",17,\"was\" + 0.044*\"914\",4,\"for\" + 0.044*\"4198\",16,\"of\" + 0.044*\"3892\",16,\"state\"\nTopic: 5 word: 0.050*\"10520\",29,\"and\" + 0.050*\"3427\",15,\"cents\" + 0.050*\"8705\",26,\"in\" + 0.050*\"8462\",25,\"assist\" + 0.050*\"9228\",26,\"the\" + 0.050*\"373\",4,\"officials\" + 0.050*\"5508\",20,\"million\" + 0.050*\"10746\",29,\"moved\" + 0.050*\"628\",4,\"of\" + 0.050*\"9029\",26,\"teams\"\nTopic: 6 word: 0.050*\"3399\",15,\"tax\" + 0.050*\"894\",4,\"resources\" + 0.050*\"4542\",17,\"growth\" + 0.050*\"10972\",30,\"our\" + 0.050*\"10397\",28,\"driver\" + 0.050*\"10406\",28,\"this\" + 0.050*\"5810\",21,\"college\" + 0.050*\"3310\",14,\"in\" + 0.050*\"8524\",25,\"coverage\" + 0.050*\"6238\",21,\"of\"\nTopic: 7 word: 0.040*\"6662\",22,\"in\" + 0.040*\"1294\",7,\"of\" + 0.040*\"6183\",21,\"several\" + 0.040*\"66\",2,\"jay\" + 0.040*\"6768\",22,\"appropriated\" + 0.040*\"623\",4,\"county\" + 0.040*\"5289\",19,\"durham\" + 0.040*\"3649\",15,\"challenge\" + 0.040*\"5391\",20,\"budget\" + 0.040*\"1252\",7,\"is\"\nTopic: 8 word: 0.045*\"577\",4,\"was\" + 0.045*\"5980\",21,\"with\" + 0.045*\"4456\",17,\"fy\" + 0.045*\"8487\",25,\"safety\" + 0.045*\"5401\",20,\"by\" + 0.045*\"2875\",13,\"our\" + 0.045*\"4318\",16,\"a\" + 0.045*\"4896\",18,\"cent\" + 0.045*\"7262\",23,\"childhood\" + 0.045*\"2769\",12,\"contacts\"\nTopic: 9 word: 0.044*\"4952\",19,\"fy\" + 0.044*\"4445\",17,\"earlier\" + 0.044*\"607\",4,\"high\" + 0.044*\"10066\",28,\"government\" + 0.044*\"1867\",8,\"the\" + 0.044*\"7693\",24,\"durham\" + 0.044*\"8666\",26,\"this\" + 0.044*\"3533\",15,\"new\" + 0.044*\"2705\",11,\"fund\" + 0.044*\"10032\",28,\"for\"\n"
],
[
"import pyLDAvis.gensim\nlda_display = pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary, sort_topics=True)\npyLDAvis.display(lda_display)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c8fe644ac76cf65d9b287f176e3de700420c13 | 198,865 | ipynb | Jupyter Notebook | Examples/Polarizations/Polarizations.ipynb | Zahner-elektrik/Zahner-Remote-Python | a25752321c4b4dc565235194a9e289322f50b413 | [
"MIT"
] | null | null | null | Examples/Polarizations/Polarizations.ipynb | Zahner-elektrik/Zahner-Remote-Python | a25752321c4b4dc565235194a9e289322f50b413 | [
"MIT"
] | null | null | null | Examples/Polarizations/Polarizations.ipynb | Zahner-elektrik/Zahner-Remote-Python | a25752321c4b4dc565235194a9e289322f50b413 | [
"MIT"
] | null | null | null | 173.378378 | 49,268 | 0.898544 | [
[
[
"# Polarization\n\n**Prerequisite: Basic Introduction**\n\nIn this notebook parameterization of the polarization primitives and few methods derived from the primitives are presented. In particular, setting up of\n\n* General parameters\n* Current/voltage range\n* Current/voltage limits\n* Online display\n* Tolerance limits\n\nis explained in detail.\n\n**Test object:** Polarization measurements are carried out for a 200 F, 2.7 V supercapacitor. ",
"_____no_output_____"
]
],
[
[
"from zahner_potentiostat.scpi_control.searcher import SCPIDeviceSearcher\nfrom zahner_potentiostat.scpi_control.serial_interface import SerialCommandInterface, SerialDataInterface\nfrom zahner_potentiostat.scpi_control.control import *\nfrom zahner_potentiostat.scpi_control.datahandler import DataManager\nfrom zahner_potentiostat.scpi_control.datareceiver import TrackTypes\nfrom zahner_potentiostat.display.onlinedisplay import OnlineDisplay\n\nfrom jupyter_utils import executionInNotebook, notebookCodeToPython\nif __name__ == '__main__':\n deviceSearcher = SCPIDeviceSearcher()\n deviceSearcher.searchZahnerDevices()\n commandSerial, dataSerial = deviceSearcher.selectDevice()",
"_____no_output_____"
],
[
" ZahnerPP2x2 = SCPIDevice(SerialCommandInterface(commandSerial), SerialDataInterface(dataSerial))",
"_____no_output_____"
]
],
[
[
"# Setting up general parameters\n\nAt first, general parameters are set which will be used in all primitives to be executed.\n\n[setRaiseonErrorEnabled(True)](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html?highlight=setraiseonerrorenabled#zahner_potentiostat.scpi_control.control.SCPIDevice.setRaiseOnErrorEnabled) enables that every error that comes back from the device triggers an exception. By default, it is turned off and errors are only printed on the console.\n\nThe next command sets the sampling frequency to 50 Hz. A maximum sampling frequency of 100 Hz is possible.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setRaiseOnErrorEnabled(True)\n ZahnerPP2x2.setSamplingFrequency(50)",
"_____no_output_____"
]
],
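[
[
"*(Added illustration, not part of the original tutorial.)* The sampling frequency directly determines how many samples a primitive of a given duration produces. A quick sanity check with assumed numbers:",
"_____no_output_____"
]
],
[
[
"    # added sanity check: expected number of samples (values below are assumptions for illustration)\n    sampling_frequency = 50  # Hz, as configured above\n    duration = 60            # s, a hypothetical primitive length\n    print(\"expected number of samples: \" + str(sampling_frequency * duration))",
"_____no_output_____"
]
],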
[
[
"The **mains power-line frequency** of the device is pre-set to the customer's mains frequency before delivery.\n\nTo ensure the correct frequency value, the user must also provide the **mains power-line frequency** with which the device will be operated. \n\n**The mains frequency is stored in the device's internal memory and remains stored even after a software update or a reboot hence providing mains frequency with every script execution is not necessay**.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setLineFrequency(50)",
"_____no_output_____"
]
],
[
[
"Each potentiostat is factory-calibrated before delivery. The calibration is carried out after potentiostat's warm-up time of 30 minutes.\n\nWith the following primitive, the users may calibrate the potentiostat again. However it is strongly recommended to start calibration after a warm up time of 30 minutes. The calibration only takes a few seconds.\n\n<div class=\"alert alert-block alert-warning\">\n <b>Warning:</b> The offsets must be calibrated manually by calling this method after the instrument has been warmed up for at least 30 minutes. \n If a cold instrument is calibrated, the offsets will be worse when the instrument will be warm during operation.\n</div>\n\nIf the device repeatedly displays an error code after calibration, there may be a defect in the device. In this case, please contact your Zahner support representive or Zahner distributor.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.calibrateOffsets()",
"_____no_output_____"
]
],
[
[
"## Set current ranging parameters\n\nIn the following code, at first, autoranging of the current range is carried out.\n\nIf possible, **autoranging should be avoided**, as autoranges provide noisy measurement results, since the shunt change causes disturbances. It also takes time for the measuring device to find the correct current range, during which time-sensitive measurement data may be lost.\n\nIn order to see less disturbances during autoranging, the interpolation is switched on [setInterpolationEnabled(True)](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html?highlight=interpolation#zahner_potentiostat.scpi_control.control.SCPIDevice.setInterpolationEnabled). With this, the measurement data is linearly interpolated for the duration of the shunt change and voltage disturbances are reduced. \n\nDepending on the measurement object, this may works for better or for worse.\n\nZahner's potentiostats (PP2X2) have four shunts (0, 1, 2, and 3). In this section, shunt 1 is selected because with supercapacitor, a voltage jump is initially measured which leads to a substantial current flow in the supercapacitor. Hence Shunt 1 is used as it covers a big current range. To get further information about the suitable shunts for different current ranges, please check the respective [manual of the potentiostat](http://zahner.de/files/power_potentiostats.pdf). Shunt 0 is only be used when PP2X2 potentiostats are used as EPC devices with the Zennium series potentiostats.\n\nAlternatively, the [setCurrentRange()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setCurrentRange) method can also be used to select an appropriate current range to match the expected currents.\n\nFinally, the shunts limits in which autoranging is possible are set.\n\nIn stand-alone mode, only DC measurements are possible. ",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setAutorangingEnabled(True)\n ZahnerPP2x2.setInterpolationEnabled(True)\n \n ZahnerPP2x2.setShuntIndex(1)\n #or\n ZahnerPP2x2.setCurrentRange(20)\n \n ZahnerPP2x2.setMinimumShuntIndex(1)\n ZahnerPP2x2.setMaximumShuntIndex(3)",
"_____no_output_____"
]
],
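[
[
"*(Added sketch, not part of the original tutorial.)* One way to think about choosing a value for setCurrentRange() is to pick the smallest range that still covers the expected peak current. The range values below are made up for illustration only; the real ranges of the instrument are listed in the potentiostat manual.",
"_____no_output_____"
]
],
[
[
"    # added sketch with made-up range values; consult the manual for the real current ranges\n    available_ranges = [0.1, 1, 10, 40]  # A, hypothetical values\n    expected_peak_current = 15           # A, assumption for this sketch\n    suitable = min(r for r in available_ranges if r >= expected_peak_current)\n    print(\"a current range of \" + str(suitable) + \" A would cover the expected current\")",
"_____no_output_____"
]
],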
[
[
"## Set voltage range\n\nThe voltage range index must be selected manually. This is not switched automatically. The range can be set like the current by shunt index or by desired maximum working range. \n\nThe maximum voltages for each range can be found in the manual of the potentiostat. ",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setVoltageRangeIndex(0)\n #or\n ZahnerPP2x2.setVoltageRange(2.5)",
"_____no_output_____"
]
],
[
[
"## Set current and voltage limits\n\n**The voltage limits are always absolute values and not related to the OCV.** If the limits are exceeded, the potentiostat switches off and the device assumes an error state.\n\n<div class=\"alert alert-block alert-danger\">\n <b>Danger:</b> Limits are monitored only in primitives. If only the potentiostat is switched on, neither measurement nor limits are monitored.\n</div>\n\n\nWith ZahnerPP2x2.[clearState()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.clearState), the error state could be cleared so that primitives can be executed.\n\nIf an attempt is made to execute primitives in the error state, an error message is displayed.\n\nIn the following code, current range of $\\pm$ 30 A and voltage are $\\pm$ 5 V are set and enabled.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMinimumCurrentGlobal(-30)\n ZahnerPP2x2.setMaximumCurrentGlobal(30)\n ZahnerPP2x2.setGlobalCurrentCheckEnabled(True)\n \n ZahnerPP2x2.setMinimumVoltageGlobal(0)\n ZahnerPP2x2.setMaximumVoltageGlobal(2.5)\n ZahnerPP2x2.setGlobalVoltageCheckEnabled(True)",
"_____no_output_____"
]
],
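[
[
"*(Added sketch, not part of the original tutorial.)* Because setRaiseOnErrorEnabled(True) was called earlier, a violated global limit during a primitive surfaces as a Python exception. The pattern below shows one possible way to react to it with clearState(); the exact exception class is not assumed here, so a broad except is used, and the demo is guarded by a flag so it is not executed as part of this tutorial.",
"_____no_output_____"
]
],
[
[
"    # added illustration only: one way to react to a violated global limit during a primitive\n    run_limit_demo = False  # set True to actually try it on a connected device\n    if run_limit_demo:\n        try:\n            ZahnerPP2x2.measurePolarization()\n        except Exception as error:\n            print(\"primitive aborted: \" + str(error))\n            ZahnerPP2x2.clearState()  # clear the error state so further primitives can run",
"_____no_output_____"
]
],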
[
[
"# Starting the live data display\n\nWith the following command, a plotting window can be opened, in which the measured voltage and current values from the measuring device are displayed live.\n\nThe function executionInNotebook() is used to check if the execution is taking place in Jupyter notebook or not. As the Jupyter cannot display the live measured data so if the execution take place in Jupyter notebook then the online display will not be executed.",
"_____no_output_____"
]
],
[
[
" onlineDisplay = None\n if executionInNotebook() == False:\n onlineDisplay = OnlineDisplay(ZahnerPP2x2.getDataReceiver())",
"_____no_output_____"
]
],
[
[
"# Polarisation at OCV/OCP with change tolerance abort\n\nAs a first example, a potential jump is output at the open circuit voltage/potential of a supercap, then polarization is performed until the current change is below a defined value. \nIn the following, OCV is always used, which is the same as OCP.\n\nFor this measurement, it would be better to measure with the autoranging switched off, since the ranging is slower than the current change during the potential jump.\nThe largest current range is set because a large current will flow during the jump.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setAutorangingEnabled(False)\n ZahnerPP2x2.setShuntIndex(1)",
"_____no_output_____"
]
],
[
[
"## Setting the measurement on the OCV\n\nIn order to measure with a defined potential, potentiostatic mode is chosen.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCoupling(COUPLING.POTENTIOSTATIC)",
"_____no_output_____"
]
],
[
[
"The first command [setVoltageRelation()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageRelation) sets the voltage value, assigned to the [setVoltageValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageValue).\n\nThe [setVoltageValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageValue) or [setCurrentValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setCurrentValue) set the voltage or current values which are applied when the potentiostat is switched on, without starting a primitive. In this state, the voltage or current are not recorded. Only when a primitive is run, recording of voltage or current (as well as autoranging) is carried out.\n\nThe second command [setVoltageParameterRelation()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageParameterRelation) set the value defined in the [setVoltageParameter()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageParameter). This command tells the device the voltage or current parameter needed in potentiostatic or galvanostatic methods.\n\n[RELATION.ZERO](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.RELATION) defines that absolute voltages are concerned.\n\nWith the command [measureOCV()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureOCV), the open circuit voltage is defined, to which the voltage values are refered.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setVoltageRelation(RELATION.OCV)\n ZahnerPP2x2.setVoltageParameterRelation(\"OCV\")",
"_____no_output_____"
]
],
[
[
"## Setting up tolerance limits\n\nIn the subsection, the tolerance limits are defined. The tolerance limit refers to the complementary quantity (current in the potentiostatic and the voltage in the galvanostatic mode). In the OCV scan, this value also refers to the voltage.\n\nThe absolute tolerance is defined as a change in amperes or volts per second. To proceed to the next primitive, the value of current or voltage change per second should fall below the defined limit. The relative tolerance is related to the current or voltage value at the start of the primitive. Absolute and relative tolerances are defined as following\n\n$Absolute Tolerance = \\frac{X_{n}-X_{n-1}}{t_{n}-t_{n-1}}$\n\n$Relative Tolerance = \\frac{Absolute Tolerance}{X_{0}}$\n\nIn the following example, the absolute tolerance is set to 1 $\\frac{mA}{s}$. Here as the relative tolerance is not needed so it is set to 0. \n\nThe tolerance check must be activated so that the primitive can be aborted when the tolerance limits are met.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setAbsoluteTolerance(0.001)\n ZahnerPP2x2.setRelativeTolerance(0.000)\n ZahnerPP2x2.setToleranceBreakEnabled(True)",
"_____no_output_____"
]
],
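[
[
"*(Added numerical illustration, not part of the original tutorial.)* To make the tolerance definitions above concrete, the sketch below computes the absolute and relative tolerance for two consecutive current samples. All numbers are invented for illustration.",
"_____no_output_____"
]
],
[
[
"    # added illustration of the tolerance definitions (all values are made up)\n    i_previous, i_current = 1.250, 1.248   # A, two consecutive samples\n    t_previous, t_current = 10.00, 10.02   # s, their time stamps\n    i_start = 2.0                          # A, value at the start of the primitive\n    absolute_tolerance = (i_current - i_previous) / (t_current - t_previous)\n    relative_tolerance = absolute_tolerance / i_start\n    print(\"absolute: \" + str(absolute_tolerance) + \" A/s, relative: \" + str(relative_tolerance) + \" 1/s\")",
"_____no_output_____"
]
],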
[
[
"A minumum and maximum time can also be defined in regards to the tolerance limits. The minimum time provides the times for which the test object should be polarized. If the voltage or current tolerance is met before the minimum defined time is passed then the polarization is carried out till the minimum time is passed.\n\nIn the following example, 10 seconds are selected as the minimum time.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMinimumTimeParameter(10)",
"_____no_output_____"
]
],
[
[
"If the tolerance is not reached, the polarization should be terminated after the maximum time at the latest.\n\nHere the second input possibility of the time is selected. Either the time is entered in seconds as a floating point number or as a string. \n\nAs string, the user has the possibility to enter a time unit (s, min, m or h) as parameter into the method [setMaximumTimeParameter()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setMaximumTimeParameter).",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMaximumTimeParameter(\"1 m\")",
"_____no_output_____"
]
],
[
[
"## Execute the primitives\n\nNext, the voltage is set, which is outputted when the pot is switched on.\n\nThe potentiostat is turned on before the primitives, then it stays on after the primitives as long as no global limits are exceeded in the primitive.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setVoltageValue(0)",
"_____no_output_____"
]
],
[
[
"Now the open circuit voltage is measured, which is used as a reference for the commands related to the",
"_____no_output_____"
]
],
[
[
" print(\"open circuit reference voltage: \" + str(ZahnerPP2x2.measureOCV()) + \" V\")",
"open circuit reference voltage: 0.6147955226842653 V\n"
],
[
" ZahnerPP2x2.setPotentiostatEnabled(True)\n \n ZahnerPP2x2.setVoltageParameter(0.1) #OCV + 0.1\n ZahnerPP2x2.measurePolarization()\n \n ZahnerPP2x2.setVoltageParameter(0) #OCV\n ZahnerPP2x2.measurePolarization()\n \n ZahnerPP2x2.setPotentiostatEnabled(False)",
"_____no_output_____"
]
],
[
[
"## Plot the data",
"_____no_output_____"
]
],
[
[
" dataReceiver = ZahnerPP2x2.getDataReceiver()\n dataManager = DataManager(dataReceiver)\n dataManager.plotTIUData()",
"_____no_output_____"
]
],
[
[
"## Reset configurations\n\nNow the specific example configurations are reset to the default configuration, which are not needed for the next example.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setToleranceBreakEnabled(False)\n ZahnerPP2x2.setVoltageRelation(RELATION.ZERO)\n ZahnerPP2x2.setVoltageParameterRelation(RELATION.ZERO)\n dataReceiver.deletePoints()\n ZahnerPP2x2.setAutorangingEnabled(True)",
"_____no_output_____"
]
],
[
[
"# Polarization - aborted with charge limit\n\nThe following example shows how polarizing primitives can be aborted on reaching a chargelimit.\n\nThe settings of the global limits of current and voltage remain to protect the supercapacitor.",
"_____no_output_____"
],
[
"## Setting up the measurement\n\nHere galvanostatic measurement is performed, which makes it easier to read the charge from the diagram. However, potentiostatic charge verification would also be possible.\n\nGalvanostatic mode automatically selects the correct current range. The previously allowed current ranges are kept here as well.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCoupling(COUPLING.GALVANOSTATIC)",
"_____no_output_____"
]
],
[
[
"## Setting up the charge conditions\n\nThe supercapacitor is charged by 100 As and then discharged by 50 As.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMaximumCharge(100)\n ZahnerPP2x2.setMinimumCharge(-50)\n ZahnerPP2x2.setChargeBreakEnabled(True)",
"_____no_output_____"
]
],
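[
[
"*(Added illustration, not part of the original tutorial.)* The charge limits above are given in ampere-seconds (As). The sketch below recalls that this charge is the integral of the current over time, using made-up arrays instead of data from the device.",
"_____no_output_____"
]
],
[
[
"    # added illustration: charge in As as the integral of current over time (made-up data)\n    import numpy as np\n    t = np.linspace(0, 50, 501)   # s\n    i = np.full_like(t, 2.0)      # A, constant 2 A as in the charging example below\n    print(\"accumulated charge: \" + str(np.trapz(i, t)) + \" As\")",
"_____no_output_____"
]
],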
[
[
"## Execute the primitives\n\nThe maximum time is set to 2 minutes.\n\nCharging to 100 As with 2 A charging current should take 50 seconds. Similarly discharging to -50 As with -2 A discharging currents should take 25 seconds.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMaximumTimeParameter(\"2 m\")",
"_____no_output_____"
]
],
[
[
"Set the current to 2 A and measure the polarization to charge the supercapacitor.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCurrentParameter(2)\n ZahnerPP2x2.measurePolarization() ",
"_____no_output_____"
]
],
[
[
"Set a discharge current of -2 A. The charging and discharging parameters can be customized individually.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCurrentParameter(-2)\n ZahnerPP2x2.measurePolarization()",
"_____no_output_____"
]
],
[
[
"## Plot the data\n\nThe dataManager measured data can be easily plotted.",
"_____no_output_____"
]
],
[
[
" dataManager.plotTIUData()",
"_____no_output_____"
]
],
[
[
"The voltage jump at 50 s (observed at the current polarity change) is due to the internal resistance of the supercapacitor, which is about 4 mΩ.",
"_____no_output_____"
],
[
"## Reset configurations\n\nAgain the configurations are reset to the default configuration, which are not needed for the next example.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setChargeBreakEnabled(False)\n dataReceiver.deletePoints()",
"_____no_output_____"
]
],
[
[
"# Charging and discharging routines\n\nThe following example shows how to charge and discharge a supercapacitor with the polarization primitive.\n\nThe settings of the global limits of current and voltage remain to protect the supercapacitor.",
"_____no_output_____"
],
[
"## Setting the measurement\n\nThe supercapacitor is galvanostatically charged.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCoupling(COUPLING.GALVANOSTATIC)",
"_____no_output_____"
]
],
[
[
"## Setting the reversal potentials\n\nThe supercapactor is cycled between 1 V and 2 V. \nTo realize this, the functional voltage drop is set to 1 V and 2 V for the polarization primitive.\n\nHere, at the beginning of the primitive the minimum and maximum values must also be observed. \n\nDepending on test object, the minimum voltage may have to be changed before the first polarization. In the case of supercapacitor, it is assumed that the supercapacitor is completely uncharged therefore a minumum voltage of 0 V is set.\n\nFunctional current aborts for potentiostatic primitives also exist.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMinimumVoltageParameter(1)\n ZahnerPP2x2.setMaximumVoltageParameter(2)\n ZahnerPP2x2.setMinMaxVoltageParameterCheckEnabled(True)",
"_____no_output_____"
]
],
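[
[
"*(Added estimate, not part of the original tutorial.)* For an ideal capacitor the time to move between the two reversal voltages follows from $\\Delta t = C \\cdot \\Delta U / I$. With the 200 F test object and the 10 A current used below, this gives roughly 20 s, which motivates the 30 s maximum time chosen next.",
"_____no_output_____"
]
],
[
[
"    # added estimate of the charging time between the reversal voltages (ideal capacitor)\n    capacitance = 200.0    # F, the test object stated at the top of the notebook\n    delta_voltage = 1.0    # V, from 1 V to 2 V\n    current = 10.0         # A, used in the cycling below\n    print(\"estimated time: \" + str(capacitance * delta_voltage / current) + \" s\")",
"_____no_output_____"
]
],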
[
[
"Charging the super capacitor with 10 A current will charge the capacitor to 2 V in 20 seconds hence the maximum charging time is set to 30 seconds.\n\nFor safety, a maximum time must always be specified with this primitive can be switched off.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMaximumTimeParameter(\"30 s\")",
"_____no_output_____"
]
],
[
[
"## Execute the primitives",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCurrentParameter(10)\n ZahnerPP2x2.measurePolarization()",
"_____no_output_____"
]
],
[
[
"After charging the possibly empty electrolytic capacitor, now the lower voltage limit can be adjusted.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMinimumVoltageParameter(1)",
"_____no_output_____"
]
],
[
[
"Afterwards, two cycles of charging nad discharging are carried out.",
"_____no_output_____"
]
],
[
[
" cycles = 2\n for i in range(cycles):\n ZahnerPP2x2.setCurrentParameter(-10)\n ZahnerPP2x2.measurePolarization()\n ZahnerPP2x2.setCurrentParameter(10)\n ZahnerPP2x2.measurePolarization()",
"_____no_output_____"
]
],
[
[
"At the end, the supercapcitor will have a voltage of 1 V.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setCurrentParameter(-10)\n ZahnerPP2x2.measurePolarization()",
"_____no_output_____"
]
],
[
[
"Instead of manually composing the charge or discharge with primitives, user may also use the charge [measureCharge()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureCharge) and discharge [measureDischarge()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureDischarge) methods, which have been programmed as an example of how primitives can be composed into more complex methods.",
"_____no_output_____"
],
[
"## Plot the data",
"_____no_output_____"
]
],
[
[
" dataManager.plotTIUData()",
"_____no_output_____"
]
],
[
[
"## Reset configurations\n\nNow the specific example configurations are reset to the default configuration, which are not needed for the next example.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMinMaxVoltageParameterCheckEnabled(False)\n dataReceiver.deletePoints()",
"_____no_output_____"
]
],
[
[
"# Open circuit voltage scan\n\nThe following example shows how to record the open circuit voltage scan.\n\nIn principle, the open circuit voltage scan [measureOCVScan()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureOCVScan) is same as a galvanostatic polarization with the potentiostat turned off. \nThe user may set a minimum and maximum voltage at which the primitive can be stopped. \nAlso a voltage change tolerance as shown in a previous example is possible. For example, to measure OCV until the voltage change has diminshed and the OCV is stable.\n\nThe settings of the global limits of current and voltage remain to protect the supercapacitor.",
"_____no_output_____"
],
[
"## Setting the measurement\n\nOnly the maximum time is configured. A change in tolerance or a range limit has already been shown in the previous examples.\n\n5 minutes measurement time is set.\n\nAnd the sampling rate is reduced to 1 Hz because there is less dynamic change in the measurement.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.setMaximumTimeParameter(\"2 min\")\n ZahnerPP2x2.setSamplingFrequency(1)",
"_____no_output_____"
]
],
[
[
"## Execute the primitive\n\nThe primitve can simply be started, nothing else needs to be configured.",
"_____no_output_____"
]
],
[
[
" ZahnerPP2x2.measureOCVScan()",
"_____no_output_____"
]
],
[
[
"## Plot the data",
"_____no_output_____"
]
],
[
[
" dataManager.plotTIUData()",
"_____no_output_____"
]
],
[
[
"# Close the connection\n\nClosing the online display when it has been opened and close the connection to the device.",
"_____no_output_____"
]
],
[
[
" if onlineDisplay != None:\n onlineDisplay.close()\n \n ZahnerPP2x2.close()\n print(\"finish\")",
"finish\n"
]
],
[
[
"# Deployment of the source code\n\n**The following instruction is not needed by the user.**\n\nIt automatically extracts the pure python code from the jupyter notebook to provide it for the user. \nThus the user does not need jupyter itself and does not have to copy the code manually.\n\nThe code is stored in a notebook-like file with the extension .py.",
"_____no_output_____"
]
],
[
[
" if executionInNotebook() == True:\n notebookCodeToPython(\"Polarizations.ipynb\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c9027d6f1174ae25a13526219e6145b0070792 | 12,152 | ipynb | Jupyter Notebook | ipynb/03/Arrays.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 1 | 2019-09-30T13:31:41.000Z | 2019-09-30T13:31:41.000Z | ipynb/03/Arrays.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 1 | 2020-08-14T11:16:11.000Z | 2020-08-14T11:16:11.000Z | ipynb/03/Arrays.ipynb | matthew-brett/cfd-uob | cc9233a26457f5e688ed6297ebbf410786cfd806 | [
"CC-BY-4.0"
] | 5 | 2019-12-03T00:54:39.000Z | 2020-09-21T14:30:43.000Z | 32.148148 | 173 | 0.526004 | [
[
[
"# Arrays\n\nThere are several kinds of sequences in Python. A [list](lists) is one. However, the sequence type that we will use most in the class, is the array.\n\nThe `numpy` package, abbreviated `np` in programs, provides Python programmers\nwith convenient and powerful functions for creating and manipulating arrays.",
"_____no_output_____"
]
],
[
[
"# Load the numpy package, and call it \"np\".\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Creating arrays",
"_____no_output_____"
],
[
"The `array` function from the Numpy package creates an array from single values, or sequences of values.\n\nFor example, remember `my_list`?",
"_____no_output_____"
]
],
[
[
"my_list = [1, 2, 3]",
"_____no_output_____"
]
],
[
[
"This is a `list`:",
"_____no_output_____"
]
],
[
[
"type(my_list)",
"_____no_output_____"
]
],
[
[
"The `array` function from Numpy can make an array from this list:",
"_____no_output_____"
]
],
[
[
"my_array = np.array(my_list)\nmy_array",
"_____no_output_____"
]
],
[
[
"As you can see from the display above, this is an array. We confirm it with `type`:",
"_____no_output_____"
]
],
[
[
"type(my_array)",
"_____no_output_____"
]
],
[
[
"We can also create the list and then the array in one call, like this:",
"_____no_output_____"
]
],
[
[
"my_array = np.array([1, 2, 3])\nmy_array",
"_____no_output_____"
]
],
[
[
"Here `[1, 2, 3]` is an *expression* that returns a list. `np.array` then operates on the returned list, to create an array.\n\nArrays often contain numbers, but, like lists, they can also contain strings\nor other types of values. However, a single array can only contain a single\nkind of data. (It usually doesn't make sense to group together unlike data\nanyway.)\n\nFor example,",
"_____no_output_____"
]
],
[
[
"english_parts_of_speech = np.array([\"noun\", \"pronoun\", \"verb\", \"adverb\", \"adjective\", \"conjunction\", \"preposition\", \"interjection\"])\nenglish_parts_of_speech",
"_____no_output_____"
]
],
[
[
"We have not seen this yet, but Python allows us to spread expressions between\nround and square brackets across many lines. It knows that the expression has\nnot finished yet because it is waiting for the closing bracket. For example, this cell works in the exactly the same way as the cell above, and may be easier to read:",
"_____no_output_____"
]
],
[
[
"# An expression between brackets spread across many lines.\nenglish_parts_of_speech = np.array(\n [\"noun\",\n \"pronoun\",\n \"verb\",\n \"adverb\",\n \"adjective\",\n \"conjunction\",\n \"preposition\",\n \"interjection\"]\n )\nenglish_parts_of_speech",
"_____no_output_____"
]
],
[
[
"Below, we collect four different temperatures into a list called `temps`.\nThese are the [estimated average daily high\ntemperatures](http://berkeleyearth.lbl.gov/regions/global-land) over all land\non Earth (in degrees Celsius) for the decades surrounding 1850, 1900, 1950,\nand 2000, respectively, expressed as deviations from the average absolute high\ntemperature between 1951 and 1980, which was 14.48 degrees.\n\nIf you are interested, you can get more data from [this file of daily high\ntemperatures](http://berkeleyearth.lbl.gov/auto/Regional/TMAX/Text/global-land-TMAX-Trend.txt).",
"_____no_output_____"
]
],
[
[
"baseline_high = 14.48\nhighs = np.array([baseline_high - 0.880,\n baseline_high - 0.093,\n baseline_high + 0.105,\n baseline_high + 0.684])\nhighs",
"_____no_output_____"
]
],
[
[
"## Calculations with arrays",
"_____no_output_____"
],
[
"Arrays can be used in arithmetic expressions to compute over their contents.\nWhen an array is combined with a single number, that number is combined with\neach element of the array. Therefore, we can convert all of these temperatures\nto Fahrenheit by writing the familiar conversion formula.",
"_____no_output_____"
]
],
[
[
"(9/5) * highs + 32",
"_____no_output_____"
]
],
[
[
"<img src=\"https://matthew-brett.github.io/cfd2019/images/array_arithmetic.png\">",
"_____no_output_____"
],
[
"As we saw for strings, arrays have *methods*, which are functions that\noperate on the array values. The `mean` of a collection of numbers is its\naverage value: the sum divided by the length. Each pair of parentheses in the\nexamples below is part of a call expression; it's calling a function with no\narguments to perform a computation on the array called `highs`.",
"_____no_output_____"
]
],
[
[
"# The number of elements in the array\nhighs.size",
"_____no_output_____"
],
[
"highs.sum()",
"_____no_output_____"
],
[
"highs.mean()",
"_____no_output_____"
]
],
[
[
"## Functions on Arrays",
"_____no_output_____"
],
[
"Numpy provides various useful functions for operating on arrays.\n\nFor example, the `diff` function computes the difference between each adjacent\npair of elements in an array. The first element of the `diff` is the second\nelement minus the first.",
"_____no_output_____"
]
],
[
[
"np.diff(highs)",
"_____no_output_____"
]
],
[
[
"The [full Numpy reference](http://docs.scipy.org/doc/numpy/reference/) lists\nthese functions exhaustively, but only a small subset are used commonly for\ndata processing applications. These are grouped into different packages within\n`np`. Learning this vocabulary is an important part of learning the Python\nlanguage, so refer back to this list often as you work through examples and\nproblems.\n\nHowever, you **don't need to memorize these**. Use this as a reference.\n\nEach of these functions takes an array as an argument and returns a single\nvalue.\n\n| **Function** | Description |\n|--------------------|----------------------------------------------------------------------|\n| `np.prod` | Multiply all elements together |\n| `np.sum` | Add all elements together |\n| `np.all` | Test whether all elements are true values (non-zero numbers are true)|\n| `np.any` | Test whether any elements are true values (non-zero numbers are true)|\n| `np.count_nonzero` | Count the number of non-zero elements |\n\nEach of these functions takes an array as an argument and returns an array of values.\n\n| **Function** | Description |\n|--------------------|----------------------------------------------------------------------|\n| `np.diff` | Difference between adjacent elements |\n| `np.round` | Round each number to the nearest integer (whole number) |\n| `np.cumprod` | A cumulative product: for each element, multiply all elements so far |\n| `np.cumsum` | A cumulative sum: for each element, add all elements so far |\n| `np.exp` | Exponentiate each element |\n| `np.log` | Take the natural logarithm of each element |\n| `np.sqrt` | Take the square root of each element |\n| `np.sort` | Sort the elements |\n\nEach of these functions takes an array of strings and returns an array.\n\n| **Function** | **Description** |\n|---------------------|--------------------------------------------------------------|\n| `np.char.lower` | Lowercase each element |\n| `np.char.upper` | Uppercase each element |\n| `np.char.strip` | Remove spaces at the beginning or end of each element |\n| `np.char.isalpha` | Whether each element is only letters (no numbers or symbols) |\n| `np.char.isnumeric` | Whether each element is only numeric (no letters) \n\nEach of these functions takes both an array of strings and a *search string*; each returns an array.\n\n| **Function** | **Description** |\n|----------------------|----------------------------------------------------------------------------------|\n| `np.char.count` | Count the number of times a search string appears among the elements of an array |\n| `np.char.find` | The position within each element that a search string is found first |\n| `np.char.rfind` | The position within each element that a search string is found last |\n| `np.char.startswith` | Whether each element starts with the search string \n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c90b435204c69a9a703eb78052311e8a212fc1 | 2,159 | ipynb | Jupyter Notebook | notebooks/image.ipynb | priyanshu-bisht/FaceMaskDetection | 3237ca1b4545019e0fcd5cd33fb9d848515e24a4 | [
"MIT"
] | 2 | 2020-09-01T13:25:21.000Z | 2020-09-01T15:20:12.000Z | notebooks/image.ipynb | priyanshu-bisht/FaceMaskDetection | 3237ca1b4545019e0fcd5cd33fb9d848515e24a4 | [
"MIT"
] | null | null | null | notebooks/image.ipynb | priyanshu-bisht/FaceMaskDetection | 3237ca1b4545019e0fcd5cd33fb9d848515e24a4 | [
"MIT"
] | null | null | null | 25.702381 | 99 | 0.528949 | [
[
[
"import numpy as np\nimport cv2 as cv\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nmodel = tf.keras.models.load_model(\"mask01.h5\")\ncas = cv.CascadeClassifier('haarcascade_frontalface_default.xml')",
"_____no_output_____"
],
[
"def detectV2(image):\n faces = cas.detectMultiScale(image, 1.1,4)\n font = cv.FONT_HERSHEY_PLAIN \n color = (255, 255, 255)\n classes = ['WithMask','WithoutMask']\n for (x,y,w,h) in faces:\n new_img = cv.resize(image[y:y+h, x:x+w], (50,50))\n pred = model.predict(np.expand_dims(new_img/255,0))[0][0]\n ans = int(np.round(pred))\n cv.rectangle(image, (x, y+h),(x+w,y+h+30) , (255*ans,255*(1-ans),0), thickness=-1)\n cv.rectangle(image, (x,y),(x+w,y+h), (255*ans,255*(1-ans),0),2)\n cv.putText(image, classes[ans] , (x+5,y+h+25), font, 2, color,2)\n return image",
"_____no_output_____"
],
[
"img = cv.imread('someimage.jpg')\nimg = cv.cvtColor(img, cv.COLOR_BGR2RGB)\nimg = detectV2(img)\nplt.figure(figsize=(10,10))\nplt.imshow(img)",
"_____no_output_____"
],
[
"img = cv.cvtColor(img, cv.COLOR_BGR2RGB)\ncv.imwrite('img.jpg',img)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0c9222ecac883d22000d49cba1774a2d4d97f5f | 35,331 | ipynb | Jupyter Notebook | allstate/10-21-2016 Tate .ipynb | nkwan12/kaggle-competitions | 7174a69fe558c15210143ad52716d4d63bb42017 | [
"MIT"
] | null | null | null | allstate/10-21-2016 Tate .ipynb | nkwan12/kaggle-competitions | 7174a69fe558c15210143ad52716d4d63bb42017 | [
"MIT"
] | null | null | null | allstate/10-21-2016 Tate .ipynb | nkwan12/kaggle-competitions | 7174a69fe558c15210143ad52716d4d63bb42017 | [
"MIT"
] | null | null | null | 30.068936 | 386 | 0.402593 | [
[
[
"import pandas as pd, numpy as np\nimport matplotlib.pyplot as plt \n%matplotlib inline \nfrom sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.grid_search import GridSearchCV\nimport pandas as pd, numpy as np\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.linear_model import LogisticRegression, LinearRegression\nfrom sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC, SVR\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom sklearn.cross_validation import train_test_split\nfrom gensim.models import word2vec\nimport nltk\nfrom scipy import stats\nfrom itertools import combinations\nimport pickle \nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"/Users/tate/anaconda/lib/python3.5/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n/Users/tate/anaconda/lib/python3.5/site-packages/sklearn/grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.\n DeprecationWarning)\n"
],
[
"train = pd.read_csv('data_files/train.csv')",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"train.shape",
"_____no_output_____"
],
[
"train.dtypes",
"_____no_output_____"
]
],
[
[
"### cat1 - cat116 are categorical",
"_____no_output_____"
]
],
[
[
"categorical_vars = ['cat{}'.format(i+1) for i in range(116)]",
"_____no_output_____"
],
[
"for var in categorical_vars:\n train = pd.get_dummies(train, columns=[var])",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"def multi_model_prediction(test_df, models):\n preds = list()\n for model in models:\n preds.append(model.predict(test_df))\n return [np.mean(p) for p in np.array(preds).T]",
"_____no_output_____"
],
[
"# rf = RandomForestRegressor(n_estimators=30, max_depth=10, max_features='sqrt')\n# lr = LinearRegression()\n# X, y = train_test_split(train)",
"_____no_output_____"
],
[
"# rf.fit(X.drop(['loss'], axis=1), X.loss)\n# lr.fit(X.drop(['loss'], axis=1), X.loss)",
"_____no_output_____"
],
[
"#preds = multi_model_prediction(y.drop(['loss'], axis=1), [rf, lr])",
"_____no_output_____"
],
[
"#np.mean([abs(prediction - loss) for prediction, loss in zip(preds, y.loss)])",
"_____no_output_____"
],
[
"# n_sample = 1000\n# errors = list()\n# for _ in range(3):\n# sample_data = train.sample(n_sample)\n# X, y = train_test_split(sample_data)\n# rf = RandomForestRegressor(n_estimators=50, max_depth=10, max_features='sqrt')\n# rf.fit(X.drop(['loss'], axis=1), X.loss)\n# lr = LinearRegression()\n# lr.fit(X.drop(['loss'], axis=1), X.loss)\n# gbt = GradientBoostingRegressor(n_estimators=50, max_depth=10, max_features='sqrt')\n# gbt.fit(X.drop(['loss'], axis=1), X.loss)\n# knn = KNeighborsRegressor(n_neighbors=7)\n# knn.fit(X.drop(['loss'], axis=1), X.loss)\n# svr = SVR(kernel='poly', degree=4)\n# svr.fit(X.drop(['loss'], axis=1), X.loss)\n# model_list = [rf, lr, gbt, knn, svr]\n# preds = multi_model_prediction(y.drop(['loss'], axis=1), model_list)\n# errors.append(np.mean([abs(p - loss) for p, loss in zip(preds, y.loss)]))\n# np.mean(errors)",
"_____no_output_____"
],
[
"test = pd.read_csv('data_files/test.csv')",
"_____no_output_____"
],
[
"for var in categorical_vars:\n test = pd.get_dummies(test, columns=[var])",
"_____no_output_____"
],
[
"test.head()",
"_____no_output_____"
],
[
"rf = RandomForestRegressor(n_estimators=10, max_depth=10, max_features='sqrt')\nrf.fit(train.drop(['loss'], axis=1), train.loss)\nlr = LinearRegression()\nlr.fit(train.drop(['loss'], axis=1), train.loss)\ngbt = GradientBoostingRegressor(n_estimators=10, max_depth=10, max_features='sqrt')\ngbt.fit(train.drop(['loss'], axis=1), train.loss)\nknn = KNeighborsRegressor(n_neighbors=7)\nknn.fit(train.drop(['loss'], axis=1), train.loss)\nsvr = SVR(kernel='poly', degree=4)\nsvr.fit(train.drop(['loss'], axis=1), train.loss)\nmodel_list = [rf, lr, gbt, knn, svr]\ntest['loss'] = multi_model_prediction(test, model_list)",
"_____no_output_____"
],
[
"test[['id', 'loss']].head()",
"_____no_output_____"
],
[
"import csv \nwith open('tate_submission1.csv', 'a') as file:\n writer = csv.writer(file)\n writer.writerow(['id', 'loss'])\n writer.writerows(test[['id', 'loss']].values.tolist())",
"_____no_output_____"
],
[
"predictions = rf.predict(y.drop(['loss'], axis=1))",
"_____no_output_____"
],
[
"np.mean([abs(prediction - loss) for prediction, loss in zip(predictions, y.loss)])",
"_____no_output_____"
],
[
"# def mae(estimator, X, y):\n# return np.mean([abs(prediction - value) \n# for prediction, value in zip(estimator.predict(X), y)])",
"_____no_output_____"
],
[
"# param_grid = {'n_estimators': np.arange(50, 251, 50), \n# 'max_depth': np.arange(5, 21, 5),\n# 'max_features': ['auto', 'sqrt']}\n# random_forest = RandomForestRegressor()\n# cv = GridSearchCV(random_forest, param_grid, scoring=mae)",
"_____no_output_____"
],
[
"#cv.fit(train.drop(['loss'], axis=1), train.loss)",
"_____no_output_____"
],
[
"#cv",
"_____no_output_____"
],
[
"predictions = cv.predict(y.drop(['loss'], axis=1))",
"_____no_output_____"
],
[
"np.mean([abs(prediction - loss) for prediction, loss in zip(predictions, y.loss)])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c923251a4e7ed2aa72aa7228086d72fdc1110a | 2,567 | ipynb | Jupyter Notebook | Stack.ipynb | chapman-cs510-2017f/cw-12-evan_logan_tim | 882d5d812be55166202cfd397d9f30c3c4cebeb1 | [
"MIT"
] | null | null | null | Stack.ipynb | chapman-cs510-2017f/cw-12-evan_logan_tim | 882d5d812be55166202cfd397d9f30c3c4cebeb1 | [
"MIT"
] | null | null | null | Stack.ipynb | chapman-cs510-2017f/cw-12-evan_logan_tim | 882d5d812be55166202cfd397d9f30c3c4cebeb1 | [
"MIT"
] | null | null | null | 17.228188 | 305 | 0.507596 | [
[
[
"## Stack Analysis ##",
"_____no_output_____"
]
],
[
[
"# The Implementation #",
"_____no_output_____"
],
[
"# Additional Questions to Address #",
"_____no_output_____"
],
[
"**Difference Between Class and Struct:**",
"_____no_output_____"
],
[
"**Public and Private:**",
"_____no_output_____"
],
[
"**size_t:**",
"_____no_output_____"
],
[
"**Where'd all the Pointers go?:**",
"_____no_output_____"
]
],
[
[
"**new and delete:** new is a keyword for creating a new instance of an object, delete is the keyword to free up the data allocated for a given instance of an object. The equivalent in C is free(obj) instead of delete and calloc() in C would be a similar memory allocation method for the keyword new.",
"_____no_output_____"
]
],
[
[
"**Memory Leaks:**",
"_____no_output_____"
],
[
"**unique_ptr:** This mimics the action of a \"garbage collector\", ie it will delete any unused memory blocks.",
"_____no_output_____"
],
[
"**List Initializer:**",
"_____no_output_____"
],
[
"**Rule of Zero:**",
"_____no_output_____"
]
]
] | [
"markdown",
"raw",
"markdown",
"raw"
] | [
[
"markdown"
],
[
"raw",
"raw",
"raw",
"raw",
"raw",
"raw"
],
[
"markdown"
],
[
"raw",
"raw",
"raw",
"raw"
]
] |
d0c92659ae0f2e89c1a3ef0fb7684a70d2681b7d | 175,825 | ipynb | Jupyter Notebook | c2/pde_solver.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
] | null | null | null | c2/pde_solver.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
] | null | null | null | c2/pde_solver.ipynb | c-abbott/num-rep | fb548007b84f96d46527b8ea3ba0461b32a34452 | [
"MIT"
] | null | null | null | 178.321501 | 145,636 | 0.894076 | [
[
[
"# Checkpoint 2",
"_____no_output_____"
]
],
[
[
"# imports.\nfrom datetime import datetime\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom scipy import integrate\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nimport random",
"_____no_output_____"
],
[
"plt.rcParams['figure.figsize'] = (10, 6)\nplt.rcParams['font.size'] = 16",
"_____no_output_____"
],
[
"# Constants\nG = 6.67408e-11 # m^3 s^-1 kg^-2\nAU = 149.597e9 # m\nMearth = 5.9721986e24 # kg\nMmars = 6.41693e23 # kg\nMsun = 1.988435e30 # kg\nday2sec = 3600 * 24 # seconds in one day",
"_____no_output_____"
]
],
[
[
"## Initial Conditions\nBelow are the initial positions and velocities for Earth and Mars.",
"_____no_output_____"
]
],
[
[
"# positions and velocities at t=0 (2019/6/2)\nrs = [[-4.8957151e10, -1.4359284e11, 501896.65], # Earth\n [-1.1742901e11, 2.1375285e11, 7.3558899e9]] # Mars (units of m)\nvs = [[27712., -9730., -0.64148], # Earth\n [-20333., -9601., 300.34]] # Mars (units of m/s)",
"_____no_output_____"
]
],
[
[
"## Historical Positions\nBelow are historical positions for Earth and Mars at t=-1000 days prior to 2019/6/2. These will be used in tasks 5 and 6.",
"_____no_output_____"
]
],
[
[
"# positions of the planets at (2019/6/2)-1000 days\nrspast = [[1.44109e11, -4.45267e10, -509142.], # Earth\n [1.11393e11, -1.77611e11, -6.45385e9]] # Mars",
"_____no_output_____"
]
],
[
[
"## Earth/Mars functions\nBelow are functions for the equations of motion (the vector of 1st derivtives) for Earth and Mars and for calculating the angle between Earth and Mars.",
"_____no_output_____"
]
],
[
[
"def earth_mars_motion(t, y):\n \"\"\"\n # order of variables\n # 0,1,2 rx,ry,rz for Earth\n # 3,4,5 rx,ry,rz for Mars\n # 6,7,8 vx,vy,vz for Earth\n # 9,10,11 vx,vy,vz for Mars\n\n # order of derivatives:\n # 0,1,2 Drx,Dry,Drz for Earth\n # 3,4,5 Drx,Dry,Drz for Mars\n # 6,7,8 Dvx,Dvy,Dvz for Earth\n # 9,10,11 Dvx,Dvy,Dvy for Mars\n \"\"\"\n\n rx1,ry1,rz1, rx2,ry2,rz2, vx1,vy1,vz1, vx2,vy2,vz2 = y\n drx1 = vx1\n dry1 = vy1\n drz1 = vz1\n drx2 = vx2\n dry2 = vy2\n drz2 = vz2\n \n GMmars = G*Mmars\n GMearth = G*Mearth\n GMsun = G*Msun\n \n rx12 = rx1 - rx2\n ry12 = ry1 - ry2\n rz12 = rz1 - rz2\n xy12 = np.power(np.power(rx12,2) + 2*np.power(ry12,2),1.5)\n xyz1 = np.power(np.power(rx1,2) + np.power(ry1,2) + np.power(rz1,2),1.5)\n xyz2 = np.power(np.power(rx2,2) + np.power(ry2,2) + np.power(rz2,2),1.5)\n\n dvx1 = GMmars * rx12 / xy12 - GMsun * rx1 / xyz1\n dvy1 = GMmars * ry12 / xy12 - GMsun * ry1 / xyz1\n dvz1 = GMmars * rz12 / xy12 - GMsun * rz1 / xyz1\n dvx2 = -GMearth * rx12 / xy12 - GMsun * rx2 / xyz2\n dvy2 = -GMearth * ry12 / xy12 - GMsun * ry2 / xyz2\n dvz2 = -GMearth * rz12 / xy12 - GMsun * rz2 / xyz2\n \n return np.array([drx1,dry1,drz1, drx2,dry2,drz2,\n dvx1,dvy1,dvz1, dvx2,dvy2,dvz2])\n\ndef angle_between_planets(y):\n \"\"\"\n Input should be same form as the y variable in the earth_mars_motion function.\n \"\"\"\n r1 = y[0:3]\n r2 = y[3:6]\n return np.arccos((r1*r2).sum(axis=0) /\n np.sqrt((r1*r1).sum(axis=0) * (r2*r2).sum(axis=0)))",
"_____no_output_____"
]
],
[
[
"## Task 1\nWrite a code that solves the equations and plots trajectories of Mars and Earth up to some $t_{max}$. The 3D plot should include at least one full orbit for each body.",
"_____no_output_____"
]
],
[
[
"# setting time domain.\ntmax = 8000*day2sec # 8000 days.\ndt = 3600 # 1 hour.\nts = np.arange(0, tmax, dt)\ntrange = (ts[0], ts[-1])\n \ndef get_traj(initial_rs, initial_vs):\n ini = np.append(initial_rs, initial_vs) # initial coordinates of Earth and Mars\n sol = integrate.solve_ivp(earth_mars_motion, trange, ini, method = 'RK45', t_eval=ts, max_step = 1e6)\n \n rx1 = sol.y[0] # x pos of Earth\n ry1 = sol.y[1] # y pos of Earth\n rz1 = sol.y[2] # z pos of Earth\n \n rx2 = sol.y[3] # x pos of Mars\n ry2 = sol.y[4] # y pos of Mars\n rz2 = sol.y[5] # z pos of Mars\n \n vx1 = sol.y[6] # x velocity of Earth\n vy1 = sol.y[7] # y velocity of Earth\n vz1 = sol.y[8] # z velocity of Earth\n \n vx2 = sol.y[9] # x velocity of Mars\n vy2 = sol.y[10] # y velocity of Mars\n vz2 = sol.y[11] # z velocity of Mars\n \n y = sol.y\n t = sol.t\n \n return y, t\n \n\ndef plot_traj(y):\n \n # creating 3D figure object\n fig = plt.figure(figsize = (15,10))\n ax = fig.gca(projection='3d')\n \n rx1 = y[0] # x pos of Earth\n ry1 = y[1] # y pos of Earth\n rz1 = y[2] # z pos of Earth\n \n rx2 = y[3] # x pos of Mars\n ry2 = y[4] # y pos of Mars\n rz2 = y[5] # z pos of Mars\n \n # Plotting Earth and Mars trajectories\n earth, = ax.plot(rx1, ry1, rz1, c = 'b')\n earth.set_label('Earth')\n mars, = ax.plot(rx2, ry2, rz2, c = 'r')\n mars.set_label('Mars')\n plt.legend()\n \n # Labelling axes\n ax.xaxis.set_label_text('x position from Sun (m)', fontsize = 12)\n ax.xaxis.labelpad = 15\n ax.yaxis.set_label_text('y position from Sun (m)', fontsize = 12)\n ax.yaxis.labelpad = 15\n ax.zaxis.set_label_text('z position from Sun (m)', fontsize = 12)\n ax.zaxis.labelpad = 15\n plt.title('Trajectory of the Earth and Mars relative to the Sun', fontsize = 22)\n plt.show()\n \ny, t = get_traj(rs, vs)\nplot_traj(y)",
"/Users/callum/.conda/envs/coding/lib/python3.7/site-packages/IPython/core/pylabtools.py:128: UserWarning: Creating legend with loc=\"best\" can be slow with large amounts of data.\n fig.canvas.print_figure(bytes_io, **kw)\n"
]
],
[
[
"## Task 2\nFind the time of the next opposition to $\\pm10$ days. Return the time in days from $t_0$ = 2 June 2019.",
"_____no_output_____"
]
],
[
[
"def get_opp_times(solutions, times):\n thetas = angle_between_planets(solutions)\n # finding relationship between neighbouring points.\n thetas_diff = np.diff(thetas)\n # determining locations of minima.\n indices = np.where(np.sign(thetas_diff[:-1]) < np.sign(thetas_diff[1:]))[0] + 1\n # recording times these minima occur.\n opp_times = times[indices]\n \n return (opp_times[:10]) / 86400\n\ndef time_to_next_opposition():\n # get trajectories.\n solutions, times = get_traj(rs, vs)\n # get opposition times.\n opp_times = get_opp_times(solutions, times)\n \n return opp_times",
"_____no_output_____"
],
[
"t_opp = time_to_next_opposition()\nprint (f\"Next opposition in {t_opp} days.\")",
"Next opposition in [ 500.25 1285.75 2056. 2820.875 3585.875\n 4356.29166667 5142.08333333 5952.45833333 6748.08333333 7522.58333333] days.\n"
]
],
[
[
"## Task 3\nFind the times for 10 oppositions in days since 2 June 2019. The results must be accurate to 1 day. Convert this to dates (year/month/day) and print out on the screen. Do not worry if the dates come out different than the actual dates you can find online, it’s supposed to be like that.\n\nThe `calculate_oppositions` function should return a list of the ten next opposition times after 2 June, 2019. The times should be returned in units of days. You may create additional functions outside this cell that are called by `calculate_oppositions`.",
"_____no_output_____"
]
],
[
[
"def get_opp_times(solutions, times):\n thetas = angle_between_planets(solutions)\n # finding relationship between neighbouring points\n thetas_diff = np.diff(thetas)\n # determining locations of minima\n indices = np.where(np.sign(thetas_diff[:-1]) < np.sign(thetas_diff[1:]))[0] + 1\n # recording times these minima occur\n opp_times = times[indices]\n \n return (opp_times[:10]) / 86400\n\ndef calculate_oppositions():\n # get trajectories\n solutions, times = get_traj(rs, vs)\n # get opposition times\n opp_times = get_opp_times(solutions, times)\n \n return opp_times",
"_____no_output_____"
],
[
"opp_times = calculate_oppositions()\nopp_times *= day2sec\ndate0 = datetime.fromisoformat('2019-06-02')\ntimestamp0 = datetime.timestamp(date0)\nfor t in opp_times:\n print(f\"t = {t/day2sec:.2f} day: {datetime.fromtimestamp(t+timestamp0)}\")",
"t = 500.25 day: 2020-10-14 06:00:00\nt = 1285.75 day: 2022-12-08 17:00:00\nt = 2056.00 day: 2025-01-16 23:00:00\nt = 2820.88 day: 2027-02-20 20:00:00\nt = 3585.88 day: 2029-03-26 21:00:00\nt = 4356.29 day: 2031-05-06 07:00:00\nt = 5142.08 day: 2033-06-30 02:00:00\nt = 5952.46 day: 2035-09-18 11:00:00\nt = 6748.08 day: 2037-11-22 01:00:00\nt = 7522.58 day: 2040-01-05 13:00:00\n"
]
],
[
[
"## Task 4\nEstimate standard errors of these times assuming that all initial positions and velocities (12 numbers) are normally distributed random numbers with means as specified in the list of parameters, and coefficients of variation (standard deviation divided by the mean) equal to 3x10$^{-5}$.\n\nThe `estimate_errors` function should return two lists:\n1. a list (or array) of the mean opposition times for 10 oppositions\n2. a list (or array) of the standard deviation for each time\n\n\nUnits should be in days.",
"_____no_output_____"
],
[
"RUN TIME FOR N = 50 ~ 1min WHEN RAN LOCALLY",
"_____no_output_____"
]
],
[
[
"# gaussian sampling function.\ndef get_sample(arr):\n arr = np.array(arr)\n sample = np.random.normal(loc = arr, scale = abs(3e-5*arr))\n return sample",
"_____no_output_____"
],
[
"def estimate_errors():\n # number of Monte Carlo simulations.\n N = 50\n sample_space = np.arange(0, tmax, tmax / N)\n # initialise array to store the opposition times from each simulation.\n opp_times_arr = np.zeros((N, 10))\n # finding opposition times.\n for i in range(sample_space.size): \n # varying initial conditions through random sampling of normal dist.\n ini_r = get_sample(rs)\n ini_v = get_sample(vs)\n trajs, times = get_traj(ini_r, ini_v)\n opp_times_arr[i] = get_opp_times(trajs, times)\n \n # analysis.\n mean_opp_times = np.mean(opp_times_arr, axis = 0)\n error = np.std(opp_times_arr, axis = 0)\n return mean_opp_times, error",
"_____no_output_____"
],
[
"tmean, tstd = estimate_errors()\nfor i in range(10):\n print(f\"{i}: {tmean[i]:.2f} +- {tstd[i]:.2f} days.\")",
"0: 500.28 +- 0.15 days.\n1: 1285.82 +- 0.31 days.\n2: 2056.07 +- 0.44 days.\n3: 2820.99 +- 0.58 days.\n4: 3586.00 +- 0.76 days.\n5: 4356.49 +- 1.04 days.\n6: 5142.35 +- 1.60 days.\n7: 5952.83 +- 2.08 days.\n8: 6748.41 +- 1.76 days.\n9: 7522.88 +- 1.65 days.\n"
]
],
[
[
"## Task 5\nUse historical positions of Earth and Mars (boundary value problem) to improve the accuracy of your prediction. What are the standard errors now?\n\nThe `estimate_errors_improved` function should return two lists:\n1. a list (or array) of the mean opposition times for 10 oppositions\n2. a list (or array) of the standard deviation for each time\n\nUnits should be in days.",
"_____no_output_____"
],
[
"PLEASE READ BELOW PARAGRAPH.",
"_____no_output_____"
],
[
"In order to solve task 5, I wanted to solve the boundary value problem from t = -1000 days to t = 0 days. I did this by propagating my solution to task 1 backwards in time to t = -1000 days and then using this solution as my initial guess for the boundary value problem. Once I had the solution from BVP, I was able to obtain the velocities from BVP at t = 0 days. I used these velocities to run my Monte Carlo simulation which consisted of \nsample from gaussian --> solve ivp (past) --> solve bvp --> solveivp (future) --> get opp_times --> repeat\n\nRUNTIME FOR N = 50 ~ 2mins when ran locally",
"_____no_output_____"
]
],
[
[
"def bc(ya, yb):\n \"\"\"\n :param ya: array of positions and velocities of Earth and Mars at t = -1000 days\n :param yb: array of positions and velocities of Earth and Mars at t = 0 days\n \"\"\"\n # converting lists to arrays to have correct dimensions for solve_bvp.\n ra = np.array(rspast)\n ra = ra.reshape(6)\n rb = np.array(rs)\n rb = rb.reshape(6)\n \n bc_a = ya[:6] - ra\n bc_b = yb[:6] - rb\n \n return np.append(bc_a, bc_b)",
"_____no_output_____"
],
[
"def reverse_ivp(initial_rs, initial_vs):\n # function used to the initial value problem in reverse e.g. propagate backwards in time.\n ini = np.append(initial_rs, initial_vs) # initial coordinates of Earth and Mars.\n sol = integrate.solve_ivp(earth_mars_motion, trange_r, ini, method = 'RK45', t_eval=ts_r, max_step = 1e6)\n \n rx1 = sol.y[0] # x pos of Earth.\n ry1 = sol.y[1] # y pos of Earth.\n rz1 = sol.y[2] # z pos of Earth.\n \n rx2 = sol.y[3] # x pos of Mars.\n ry2 = sol.y[4] # y pos of Mars.\n rz2 = sol.y[5] # z pos of Mars.\n \n vx1 = sol.y[6] # x velocity of Earth.\n vy1 = sol.y[7] # y velocity of Earth.\n vz1 = sol.y[8] # z velocity of Earth.\n \n vx2 = sol.y[9] # x velocity of Mars.\n vy2 = sol.y[10] # y velocity of Mars.\n vz2 = sol.y[11] # z velocity of Mars.\n \n y = sol.y\n t = sol.t\n \n return y, t ",
"_____no_output_____"
],
[
"# bvp solver.\ndef get_traj_bvp(t, y, bc):\n sol = integrate.solve_bvp(earth_mars_motion, bc, t, y, max_nodes = 1e5)\n y_sol = sol.sol(t)\n return y_sol",
"_____no_output_____"
],
[
"def estimate_errors_improved():\n ts_r = np.linspace(0, 1000*day2sec, 5000) # reverse time domain.\n trange_r = (ts_r[0], ts_r[-1])\n t_bvp = np.linspace(-1000*day2sec, 0, 5000) # bvp time domain.\n \n # reverse velocities to propagate backwards.\n vs_arr = np.array(vs)\n y_reverse, t_reverse = reverse_ivp(rs, -1 * vs_arr)\n y_reverse = np.flip(y_reverse, axis = 1) # flipping along the columns.\n y = y_reverse # setting initial guess for BVP to solution from reverse IVP.\n \n # using velocities obtained from BVP to run simulations again.\n y_bvp = get_traj_bvp(t_bvp, y, bc)\n new_vs = y_bvp[6:,-1]\n \n # number of Monte Carlo simulations.\n N = 50\n sample_space = np.arange(0, tmax, tmax / N)\n # initialise array to store the opposition times from each simulation.\n opp_times_arr = np.zeros((N, 10))\n for i in range(sample_space.size):\n # varying initial conditions.\n new_rs = get_sample(rs)\n new_vs = get_sample(new_vs)\n y_reverse, t_reverse = reverse_ivp(new_rs, -1 * new_vs)\n y_reverse = np.flip(y_reverse, axis = 1) # flipping along the columns.\n y = y_reverse\n y_bvp = get_traj_bvp(t_bvp, y, bc)\n # updating velocity for propagation into the future.\n new_vs = y_bvp[6:,-1]\n new_trajs, new_times = get_traj(new_rs, new_vs)\n opp_times_arr[i] = get_opp_times(new_trajs, new_times)\n \n # analysis.\n mean_opp_times = np.mean(opp_times_arr, axis = 0)\n error = np.std(opp_times_arr, axis = 0)\n return mean_opp_times, error\n ",
"_____no_output_____"
],
[
"tmean, tstd = estimate_errors_improved()\n\nfor i in range(10):\n print(f\"{i}: {tmean[i]:.2f} +- {tstd[i]:.2f} days.\")",
"0: 500.14 +- 0.09 days.\n1: 1285.20 +- 0.18 days.\n2: 2055.03 +- 0.25 days.\n3: 2819.57 +- 0.33 days.\n4: 3584.19 +- 0.44 days.\n5: 4354.27 +- 0.60 days.\n6: 5139.64 +- 0.91 days.\n7: 5949.73 +- 1.20 days.\n8: 6745.08 +- 1.00 days.\n9: 7519.23 +- 0.94 days.\n"
]
],
[
[
"## Task 6\nUsing the methods from Task 5, is there a better time point in the last 1000 days to get historical data for increasing the accuracy? Find such time t in the past 1000 days (-1000<$t$<0 days, where $t$=0 corresponds to 2 June 2019) which would yield a maximum error (std. deviation) of less than 0.2 days for each of the 10 oppositions.\n\n$t$ should be a negative number, accurate to +/- 50 days.\n\nThe code for task 6 can take any form you like.",
"_____no_output_____"
]
],
[
[
"# Remove the line that says \"raise NotImplementedError\"\n# YOUR CODE HERE\nraise NotImplementedError()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c928870cb4503d71c530db5bedf8ee37db6b68 | 565,503 | ipynb | Jupyter Notebook | week05/.ipynb_checkpoints/prep_notes_part1_week05_s2020_normalZscores-checkpoint.ipynb | jnaiman/is507_spring2021 | d58cd945552b3cdcfc7eb4501d253dd67c03e935 | [
"BSD-3-Clause"
] | null | null | null | week05/.ipynb_checkpoints/prep_notes_part1_week05_s2020_normalZscores-checkpoint.ipynb | jnaiman/is507_spring2021 | d58cd945552b3cdcfc7eb4501d253dd67c03e935 | [
"BSD-3-Clause"
] | 1 | 2020-09-22T18:54:10.000Z | 2020-09-22T18:54:10.000Z | week05/.ipynb_checkpoints/prep_notes_part1_week05_s2020_normalZscores-checkpoint.ipynb | jnaiman/is507_fall2020 | ca316f77bb06051302883b7a6de3a16c9d20f2c4 | [
"BSD-3-Clause"
] | 1 | 2021-02-05T16:56:57.000Z | 2021-02-05T16:56:57.000Z | 918.024351 | 79,736 | 0.948423 | [
[
[
"# Week 05, Part 1\n\n### Topic\n 1. For reference: about plotting polygons in R (we won't go over this, but it serves as reference)\n 1. Plotting normal distributions\n 1. Example: Manufacturing rulers\n 1. BACK TO SLIDES FOR PERCENTILES\n",
"_____no_output_____"
]
],
[
[
"# resize\nrequire(repr)\noptions(repr.plot.width=8, repr.plot.height=8)",
"_____no_output_____"
]
],
[
[
"## 1. Intro to polygons in R",
"_____no_output_____"
],
[
"Now we'll go over some useful functions associated with drawing normal distributions. First, a little intro of sequences and polygons:",
"_____no_output_____"
]
],
[
[
"x = seq(-3,3, length=10)\nprint(x)",
" [1] -3.0000000 -2.3333333 -1.6666667 -1.0000000 -0.3333333 0.3333333\n [7] 1.0000000 1.6666667 2.3333333 3.0000000\n"
]
],
[
[
"Now, lets try to understand a `polygon` function. We'll use this to help us draw areas using the `plot_polygons.R` script, but let's look at a few `polygon` examples.\n\nLet's make a triangle -- say the triangle goes from -3 to +3 in x & 0-1 in y:",
"_____no_output_____"
]
],
[
[
"plot(NULL,xlim=c(-3,3),ylim=c(0,1)) # sets up axes\nxvertices = c(-3, 0, 3)\nyvertices = c(0, 1, 0)\npolygon(xvertices, yvertices,col=\"red\") # plots on top of previous plot",
"_____no_output_____"
]
],
[
[
"Let's try overplotting a little rectangle at x = (-1,1), y = (0.4,0.6):",
"_____no_output_____"
]
],
[
[
"# set up empty axis\nplot(NULL,xlim=c(-3,3),ylim=c(0,1)) # sets up axes\n\n# red triangle\nxvertices = c(-3, 0, 3)\nyvertices = c(0, 1, 0)\npolygon(xvertices, yvertices,col=\"red\") # plots on top of previous plot\n\n# blue rectangle\nxvertices = c(-1, -1, 1, 1)\nyvertices = c(0.4, 0.6, 0.6, 0.4)\npolygon(xvertices, yvertices, col=\"blue\")",
"_____no_output_____"
]
],
[
[
"Essentially, polygon just fills in between a list of verticies we give it. We can use this to plot underneath our normal distributions. This will help us get a \"feel\" for how much of the graph is related to our measurement of interest.",
"_____no_output_____"
],
[
"## 2. Plotting normal distributions",
"_____no_output_____"
],
[
"Now, let's build some tool's we will need to examine normal distributions.\n\n(1) Let's plot them using \"dnorm\" moving onto normal distributions. First, let's start by plotting a normal distribution:",
"_____no_output_____"
]
],
[
[
"help(dnorm)",
"_____no_output_____"
],
[
"x=seq(-3,3,length=200)\ny=dnorm(x, mean=0, sd=1)\nplot(x,y)",
"_____no_output_____"
]
],
[
[
"Let's make a little fancier of a plot:",
"_____no_output_____"
]
],
[
[
"x = seq(-3,3,length=200) # plotting normal dist. -3,3 SD\ny1 = dnorm(x, mean=0, sd=1)\nplot(x,y1, type='l', ylim=c(0,2), ylab='Normal Distributions')",
"_____no_output_____"
]
],
[
[
"Overplot a few other normal distributions:",
"_____no_output_____"
]
],
[
[
"# orig plot\nx = seq(-3,3,length=200) # plotting normal dist. -3,3 SD\ny1 = dnorm(x, mean=0, sd=1)\nplot(x,y1, type='l', ylim=c(0,2), ylab='Normal Distributions')\n\n# other distribution\ny2 = dnorm(x, mean=0, sd=0.5)\npar(new=TRUE) # for overplotting\nplot(x, y2, type='l', col='red', ylim=c(0,2), ylab=\"\")",
"_____no_output_____"
]
],
[
[
"Let's add to this by visualizing a Z-score and actually calculating it as well. We'll go back to just one normal distribution.\n\nZ-scores: remember this is a measure of how \"far off\" a score is from the mean.\n\nSo first, as is always a good example, let's plot!",
"_____no_output_____"
]
],
[
[
"x = seq(-6,6,length=200)\nmean_dist = 1.0\nsd_dist = 0.5",
"_____no_output_____"
]
],
[
[
"Note here: I'm calling the dnorm function directly in the \"y\" data position of this function this is instead of doing \"y = dnorm...\"\n\nIts just us being fancy :)",
"_____no_output_____"
]
],
[
[
"plot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l')",
"_____no_output_____"
]
],
[
[
"Let's say I want the Zscore for x=2.5 - i.e. given this normal distribution, if I measure pick out an observation that is at the value of 2.5, how off from the mean is it? First of course, lets plot!",
"_____no_output_____"
]
],
[
[
"plot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l')\nabline(v=2.5,col=\"red\")",
"_____no_output_____"
]
],
[
[
"We can see already that its pretty far off from the mean here $\\rightarrow$ if we by eye try to compare the area to the right of this line (the little tail) it is very small compared to the area to the left - so we expect our Z-score to be pretty big!\n\nNow let's actually calculate. Recall: $Z_{score} = \\frac{observation - mean}{SD}$",
"_____no_output_____"
]
],
[
[
"Zscore = (2.5 - mean_dist)/sd_dist\nprint(Zscore)",
"[1] 3\n"
]
],
[
[
"This is saying our measurement of 2.5 is 3 times bigger than the standard deviation of our normal distribution. So pretty gosh-darn big!",
"_____no_output_____"
],
[
"Now, let's say I've got a 2nd distribution with mean = 0.5 and sd=2, is the $Z_{score}$ at x=2.5 higher or lower than the first one?\n\nAs always, let's start by plotting:",
"_____no_output_____"
]
],
[
[
"# old plot\nplot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l')\nabline(v=2.5,col=\"red\")\n\n# 2nd distribution\nmean_dist2 = 0.5\nsd_dist2 = 2.0\npar(new=TRUE) # overplot on our original axis\nplot(x,dnorm(x,mean=mean_dist2,sd=sd_dist2),col=\"blue\",ylim=c(0,1.0), type='l')",
"_____no_output_____"
]
],
[
[
"By eye we can see that the red line falls at a higher y-value on the blue, 2nd distribution this tells us at x=2.5, we are closer to the mean on the 2nd distribution so we expect a lower $Z_{score}$, but let's find out!",
"_____no_output_____"
]
],
[
[
"Zscore2 = (2.5-mean_dist2)/sd_dist2\nprint(Zscore2)",
"[1] 1\n"
]
],
[
[
"Indeed 1 < 3 - in our 2nd distribution, an observation of x=2.5 is only 1 SD from the mean.",
"_____no_output_____"
],
[
"$Z_{scores}$ allow us to in a sense \"normalize\" each normal distribution to allow for comparisions between normal distributiosn with different means & SDs. For example, if these distributions were measuring a test then a student that scored a 2.5 on both would have done better than the overall class distribution on the first test.",
"_____no_output_____"
],
[
"### 3.A Example: Manufacturing rulers",
"_____no_output_____"
],
[
"I am the manufacturer of rulers. My rulers should be 10cm long, but I am having issues:\n 1. On Run #1 I get rulers with a mean of 11cm and an SD of 2.0cm.\n 1. On Run #2 I get rulers with a mean of 10cm and an SD of 4.0cm.\n\nQ1: Which is the better run of my manufacturing equiptment *Note: could be differing answers!* Think on this for a bit!\n\nQ2: in each run, pull out a ruler to see how off it is. In both runs, I pull out a 9cm ruler - how unusual is it for me to pull out a ruler of this size?\n\n 1. Make a plot showing this & guess using the plot,\n 1. Then, calculate with a Zscore and say for sure.\n",
"_____no_output_____"
],
[
"#### ANS 1:\n",
"_____no_output_____"
]
],
[
[
"options(repr.plot.width=8, repr.plot.height=5) # nicer plotting window\n\n#Plot: Run # 1: \"mean of 11cm and an SD of 2.0cm\"\nx = seq(5,15,length=200)\nplot(x,dnorm(x,mean=11,sd=2), type='l', ylim=c(0,0.2)) # further out for run #1\n\npar(new=TRUE) # to overplot\n#Plot: Run # 2: \"mean of 10 cm and an SD of 4.0 cm\"\nplot(x,dnorm(x,mean=10,sd=4),col=\"blue\", type='l', ylim=c(0,0.2))\n\n# Our observation, a 9cm ruler:\nabline(v=9.0,col=\"red\")\n\n# To remind us what is what:\nlegend(\"topright\", c(\"Run 1\", \"Run 2\"), col=c(\"black\",\"blue\"), lw=1)",
"_____no_output_____"
]
],
[
[
"By eye, it looks like in run 1 (black) we are further from the mean (11cm), than for run 2 (blue). So this means that it is more unusual to get this 9cm ruler in run 1 than run2\n\nBut let's do the calculation to be sure:",
"_____no_output_____"
]
],
[
[
"Z1 = (9.0-11)/2.0 # -1.0\nZ2 = (9.0-10)/4.0 # -0.25\nprint(c(\"Run 1\", \"Run 2\"))\nprint(c(Z1,Z2))",
"[1] \"Run 1\" \"Run 2\"\n[1] -1.00 -0.25\n"
]
],
[
[
"Here -1.0 < -0.25 so run 1 is MORE SDs from the mean even though its negative!",
"_____no_output_____"
],
[
"## BACK TO SLIDES FOR PERCENTILES ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c9288e404e71c0335cbef6becf9d9c5f15cc84 | 42,004 | ipynb | Jupyter Notebook | jupyter/notebooks/Utilities/Setup_Geoserver_Services.ipynb | pauldzy/NHDPlusInABox | 47cc4628ff86de5ee49f0a40d40a41b995f3b09d | [
"CC0-1.0"
] | 4 | 2019-01-23T00:34:04.000Z | 2020-07-26T00:09:15.000Z | jupyter/notebooks/Utilities/Setup_Geoserver_Services.ipynb | pauldzy/NHDPlusInABox | 47cc4628ff86de5ee49f0a40d40a41b995f3b09d | [
"CC0-1.0"
] | null | null | null | jupyter/notebooks/Utilities/Setup_Geoserver_Services.ipynb | pauldzy/NHDPlusInABox | 47cc4628ff86de5ee49f0a40d40a41b995f3b09d | [
"CC0-1.0"
] | null | null | null | 40.041945 | 106 | 0.46148 | [
[
[
"import os,requests,json;\nfrom ipywidgets import IntProgress,HTML,VBox;\nfrom IPython.display import display;\n\nr = requests.get(\n 'http://dz_gs:8080/geoserver/rest/settings/contact'\n ,auth=('admin','nhdplus')\n);\n\nif r.status_code != 200 or r.json()[\"contact\"][\"contactOrganization\"] != 'NHDPlusInABox':\n raise Exception('geoserver does not appear ready for configuration');\n\nr = requests.get(\n 'http://dz_gs:8080/geoserver/rest/workspaces'\n ,auth=('admin','nhdplus')\n);\n\nboo_check = False;\nfor item in r.json()[\"workspaces\"][\"workspace\"]:\n if item[\"name\"] == \"nhdplus\":\n boo_check = True;\nif not boo_check:\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces'\n ,headers={'Content-Type':'application/json'}\n ,params={'default':True}\n ,data=json.dumps({'workspace':{'name':'nhdplus'}})\n ,auth=('admin','nhdplus')\n );\n \nr = requests.get(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores'\n ,auth=('admin','nhdplus')\n);\nif r.status_code != 200:\n raise Exception('datastores get failed');\n\nboo_check = False;\nif r.json()[\"dataStores\"] != \"\":\n for item in r.json()[\"dataStores\"][\"dataStore\"]:\n if item[\"name\"] == \"dzpg_nhdplus\":\n boo_check = True;\nif not boo_check:\n payload = {\n \"dataStore\": {\n \"name\": \"dzpg_nhdplus\"\n ,\"connectionParameters\": {\n \"entry\": [\n {\"@key\":\"host\" ,\"$\":\"dz_pg\"}\n ,{\"@key\":\"port\" ,\"$\":\"5432\"}\n ,{\"@key\":\"database\",\"$\":\"nhdplus\"}\n ,{\"@key\":\"user\" ,\"$\":\"nhdplus\"}\n ,{\"@key\":\"passwd\" ,\"$\":os.environ['POSTGRES_PASSWORD']}\n ,{\"@key\":\"dbtype\" ,\"$\":\"postgis\"}\n ,{\"@key\":\"schema\" ,\"$\":\"nhdplus\"}\n ]\n }\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n\nr = requests.get(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,auth=('admin','nhdplus')\n); \nif r.status_code != 200:\n raise Exception('styles get failed');\n \nsty = [];\nif r.json()[\"styles\"] != \"\":\n for item in r.json()[\"styles\"][\"style\"]:\n sty.append(item[\"name\"]);\n \nif 'catchment_polygon' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>catchment_polygon</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <PolygonSymbolizer>\n <Fill>\n <CssParameter name=\"fill\">#AAAAAA</CssParameter>\n <CssParameter name=\"fill-opacity\">0</CssParameter>\n </Fill>\n <Stroke>\n <CssParameter name=\"stroke\">#E67000</CssParameter>\n <CssParameter name=\"stroke-width\">1.5</CssParameter>\n </Stroke>\n </PolygonSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'catchment_polygon'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'wbd_polygon' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n 
xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>wbd_polygon</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <PolygonSymbolizer>\n <Fill>\n <CssParameter name=\"fill\">#AAAAAA</CssParameter>\n <CssParameter name=\"fill-opacity\">0</CssParameter>\n </Fill>\n <Stroke>\n <CssParameter name=\"stroke\">#C500FF</CssParameter>\n <CssParameter name=\"stroke-width\">1.5</CssParameter>\n </Stroke>\n </PolygonSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'wbd_polygon'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'wbd2_polygon' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>wbd2_polygon</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <PolygonSymbolizer>\n <Fill>\n <CssParameter name=\"fill\">#AAAAAA</CssParameter>\n <CssParameter name=\"fill-opacity\">0</CssParameter>\n </Fill>\n <Stroke>\n <CssParameter name=\"stroke\">#C500FF</CssParameter>\n <CssParameter name=\"stroke-width\">1.5</CssParameter>\n </Stroke>\n </PolygonSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'wbd2_polygon'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'nhdwaterbody_polygon' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>nhdwaterbody_polygon</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <PolygonSymbolizer>\n <Fill>\n <CssParameter name=\"fill\">#97DBF2</CssParameter>\n <CssParameter name=\"fill-opacity\">1</CssParameter>\n </Fill>\n <Stroke>\n <CssParameter name=\"stroke\">#AAAAAA</CssParameter>\n <CssParameter name=\"stroke-width\">0</CssParameter>\n </Stroke>\n </PolygonSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'nhdwaterbody_polygon'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n\nif 'nhdarea_polygon' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor 
version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>nhdarea_polygon</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <PolygonSymbolizer>\n <Fill>\n <CssParameter name=\"fill\">#70D0F8</CssParameter>\n <CssParameter name=\"fill-opacity\">1</CssParameter>\n </Fill>\n <Stroke>\n <CssParameter name=\"stroke\">#AAAAAA</CssParameter>\n <CssParameter name=\"stroke-width\">0</CssParameter>\n </Stroke>\n </PolygonSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'nhdarea_polygon'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'nhdflowline_line' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>nhdflowline_line</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <LineSymbolizer>\n <Stroke>\n <CssParameter name=\"stroke\">#0000FF</CssParameter>\n <CssParameter name=\"stroke-width\">1</CssParameter>\n </Stroke>\n </LineSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'nhdflowline_line'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'nhdline_line' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>nhdline_line</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <LineSymbolizer>\n <Stroke>\n <CssParameter name=\"stroke\">#0000FF</CssParameter>\n <CssParameter name=\"stroke-width\">1</CssParameter>\n </Stroke>\n </LineSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'nhdline_line'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nif 'nhdpoint_point' not in sty:\n payload = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<StyledLayerDescriptor version=\"1.0.0\" \n xsi:schemaLocation=\"http://www.opengis.net/sld StyledLayerDescriptor.xsd\" \n xmlns=\"http://www.opengis.net/sld\" \n xmlns:ogc=\"http://www.opengis.net/ogc\" \n 
xmlns:xlink=\"http://www.w3.org/1999/xlink\" \n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n <NamedLayer>\n <Name>nhdpoint_point</Name>\n <UserStyle>\n <FeatureTypeStyle>\n <Rule>\n <Name>Viewable</Name>\n <MaxScaleDenominator>288896</MaxScaleDenominator>\n <PointSymbolizer>\n <Graphic>\n <Mark>\n <WellKnownName>square</WellKnownName>\n <Fill>\n <CssParameter name=\"fill\">#0000FF</CssParameter>\n </Fill>\n </Mark>\n <Size>6</Size>\n </Graphic>\n </PointSymbolizer>\n </Rule>\n </FeatureTypeStyle>\n </UserStyle>\n </NamedLayer>\n</StyledLayerDescriptor>\"\"\";\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles'\n ,headers={'Content-Type':'application/vnd.ogc.sld+xml'}\n ,params={'name':'nhdpoint_point'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n \nr = requests.get(\n 'http://dz_gs:8080/geoserver/rest/layers'\n ,auth=('admin','nhdplus')\n);\nif r.status_code != 200:\n raise Exception('layers get failed');\n \nlyr = [];\nif r.json()[\"layers\"] != \"\":\n for item in r.json()[\"layers\"][\"layer\"]:\n lyr.append(item[\"name\"]);\n\nif 'nhdplus:catchment_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"catchment_np21\"\n ,\"nativeName\":\"catchment_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"catchment_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>catchment_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/catchment_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:catchmentsp_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"catchmentsp_np21\"\n ,\"nativeName\":\"catchmentsp_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"catchmentsp_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>catchment_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/catchmentsp_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + 
r.status_code + '>');\n \nif 'nhdplus:wbd_hu2_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu2_np21\"\n ,\"nativeName\":\"wbd_hu2_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu2_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd2_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu2_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:wbd_hu4_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu4_np21\"\n ,\"nativeName\":\"wbd_hu4_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu4_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu4_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n\nif 'nhdplus:wbd_hu6_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu6_np21\"\n ,\"nativeName\":\"wbd_hu6_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu6_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 
'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu6_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:wbd_hu8_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu8_np21\"\n ,\"nativeName\":\"wbd_hu8_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu8_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu8_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:wbd_hu10_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu10_np21\"\n ,\"nativeName\":\"wbd_hu10_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu10_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu10_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:wbd_hu12_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"wbd_hu12_np21\"\n ,\"nativeName\":\"wbd_hu12_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"wbd_hu12_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n 
,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>wbd_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu12_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:nhdwaterbody_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"nhdwaterbody_np21\"\n ,\"nativeName\":\"nhdwaterbody_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"nhdwaterbody_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>nhdwaterbody_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdwaterbody_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:nhdarea_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"nhdarea_np21\"\n ,\"nativeName\":\"nhdarea_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"nhdarea_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>nhdarea_polygon</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdarea_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:nhdflowline_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"nhdflowline_np21\"\n ,\"nativeName\":\"nhdflowline_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"nhdflowline_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n 
,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>nhdflowline_line</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdflowline_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:nhdline_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"nhdline_np21\"\n ,\"nativeName\":\"nhdline_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"nhdline_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>nhdline_line</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdline_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n \nif 'nhdplus:nhdpoint_np21' not in lyr:\n payload = {\n \"featureType\":{\n \"name\":\"nhdpoint_np21\"\n ,\"nativeName\":\"nhdpoint_np21\"\n ,\"namespace\":{\n \"name\":\"nhdplus\"\n }\n ,\"title\":\"nhdpoint_np21\"\n ,\"nativeCRS\":\"EPSG:4269\"\n ,\"srs\":\"EPSG:4269\"\n ,\"projectionPolicy\":\"FORCE_DECLARED\"\n ,\"enabled\": True\n ,\"store\":{\n \"@class\":\"dataStore\"\n ,\"name\":\"nhdplus:dzpg_nhdplus\"\n }\n ,\"maxFeatures\":0\n ,\"numDecimals\":0\n ,\"overridingServiceSRS\": False\n ,\"skipNumberMatched\": False\n ,\"circularArcPresent\": False\n }\n }\n r = requests.post(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes'\n ,headers={'Content-Type':'application/json'}\n ,data=json.dumps(payload)\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 201:\n raise Exception('layer creation failed');\n \n payload = \"\"\"\n<layer>\n <defaultStyle>\n <name>nhdpoint_point</name>\n </defaultStyle>\n</layer>\"\"\"\n r = requests.put(\n 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdpoint_np21'\n ,headers={'Content-Type':'text/xml'}\n ,data=payload\n ,auth=('admin','nhdplus')\n );\n if r.status_code != 200:\n raise Exception('layer alteration failed <' + r.status_code + '>');\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c9313c623e68ad8e3bc471431480aa6097de8a | 660,712 | ipynb | Jupyter Notebook | ML Micro Projects/Machine Learning with Linear Regression.ipynb | anaxsouza/Data_Science_and_ML_Portfolio | 18c1529413889d74aa17cc185e84dfd26ffd457a | [
"MIT"
] | null | null | null | ML Micro Projects/Machine Learning with Linear Regression.ipynb | anaxsouza/Data_Science_and_ML_Portfolio | 18c1529413889d74aa17cc185e84dfd26ffd457a | [
"MIT"
] | null | null | null | ML Micro Projects/Machine Learning with Linear Regression.ipynb | anaxsouza/Data_Science_and_ML_Portfolio | 18c1529413889d74aa17cc185e84dfd26ffd457a | [
"MIT"
] | null | null | null | 790.325359 | 452,106 | 0.949862 | [
[
[
"# Machine Learning with Linear Regression",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Data\n",
"_____no_output_____"
]
],
[
[
"customers = pd.read_csv('data/Ecommerce Customers')",
"_____no_output_____"
],
[
"customers.head()",
"_____no_output_____"
],
[
"customers.describe()",
"_____no_output_____"
],
[
"customers.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 500 entries, 0 to 499\nData columns (total 8 columns):\nEmail 500 non-null object\nAddress 500 non-null object\nAvatar 500 non-null object\nAvg. Session Length 500 non-null float64\nTime on App 500 non-null float64\nTime on Website 500 non-null float64\nLength of Membership 500 non-null float64\nYearly Amount Spent 500 non-null float64\ndtypes: float64(5), object(3)\nmemory usage: 31.3+ KB\n"
]
],
[
[
"## Exploratory Analysis",
"_____no_output_____"
]
],
[
[
"sns.jointplot(customers['Time on Website'],customers['Yearly Amount Spent'])",
"_____no_output_____"
],
[
"sns.jointplot(customers['Time on App'],customers['Yearly Amount Spent'])",
"_____no_output_____"
],
[
"sns.pairplot(customers)",
"_____no_output_____"
],
[
"customers.corr()",
"_____no_output_____"
],
[
"sns.lmplot('Length of Membership','Yearly Amount Spent',data=customers)",
"_____no_output_____"
]
],
[
[
"## Splitting the Data",
"_____no_output_____"
]
],
[
[
"customers.columns",
"_____no_output_____"
],
[
"#Selecting only the numerical features for training the model.\nX = customers[['Avg. Session Length', 'Time on App','Time on Website', 'Length of Membership']]\ny = customers['Yearly Amount Spent']",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"X_test,X_train,y_test,y_train = train_test_split(X,y,test_size=0.3,random_state=101)",
"_____no_output_____"
]
],
[
[
"## Training the Model",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression",
"_____no_output_____"
],
[
"lm = LinearRegression()",
"_____no_output_____"
],
[
"lm.fit(X_train,y_train)",
"_____no_output_____"
]
],
[
[
"## Making Predictions\n",
"_____no_output_____"
]
],
[
[
"predictions = lm.predict(X_test) ",
"_____no_output_____"
],
[
"plt.scatter(y_test,predictions)",
"_____no_output_____"
]
],
[
[
"## Evaluation and Understanding Results",
"_____no_output_____"
]
],
[
[
"from sklearn import metrics",
"_____no_output_____"
],
[
"print('MAE:',metrics.mean_absolute_error(y_test,predictions))\nprint('MSE:',metrics.mean_squared_error(y_test,predictions))\nprint('RMSE:',np.sqrt(metrics.mean_squared_error(y_test,predictions)))",
"MAE: 8.27722410559\nMSE: 109.363379298\nRMSE: 10.4576947411\n"
],
[
"cust_coeff = pd.DataFrame(lm.coef_,X.columns)\ncust_coeff.columns = ['Coefficient']\ncust_coeff",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c9368294602c05c7254a59380bfa95fcfd6f25 | 80,228 | ipynb | Jupyter Notebook | GNSS_Data/Python/Introduction_to_GNSS_data_using_FITS_in_Python.ipynb | megankortink/data-tutorials | 19372969b27fe43c9ad73fb4acd8d624145cb97c | [
"Apache-2.0"
] | 11 | 2019-02-15T00:54:24.000Z | 2022-02-10T05:46:09.000Z | GNSS_Data/Python/Introduction_to_GNSS_data_using_FITS_in_Python.ipynb | megankortink/data-tutorials | 19372969b27fe43c9ad73fb4acd8d624145cb97c | [
"Apache-2.0"
] | 9 | 2019-02-10T23:05:09.000Z | 2022-03-22T20:19:18.000Z | GNSS_Data/Python/Introduction_to_GNSS_data_using_FITS_in_Python.ipynb | megankortink/data-tutorials | 19372969b27fe43c9ad73fb4acd8d624145cb97c | [
"Apache-2.0"
] | 16 | 2019-02-05T01:51:30.000Z | 2022-01-12T10:58:12.000Z | 82.285128 | 18,808 | 0.80872 | [
[
[
"# <center>Introduction on Using Python to access GeoNet's GNSS data",
"_____no_output_____"
],
[
"In this notebook we will learn how to get data from one GNSS(Global Navigation Satellite System) station. By the end of this tutorial you will have make a graph like the one below. <img src=\"plot.png\">",
"_____no_output_____"
],
[
"## Table of contents\n### 1. Introduction\n### 2. Building the base FITS query\n### 3. Get GNSS data\n### 4. Plot data \n### 5. Save data",
"_____no_output_____"
],
[
"## 1. Introduction",
"_____no_output_____"
],
[
"In this tutorial we will be learning how to use Python to access GNSS (commonly referred to at GPS) data from the continuous GNSS sites in the GeoNet and PositioNZ networks.\nGeoNet has a API (Application Programming Interface) to access its GNSS data. You do not need to know anything about APIs to use this tutorial. If you would like more info see https://fits.geonet.org.nz/api-docs/. ",
"_____no_output_____"
],
[
"To use this tutorial you will need to install the package pandas (https://pandas.pydata.org/).",
"_____no_output_____"
],
[
"This tutorial assumes that you have a basic knowledge of Python.",
"_____no_output_____"
],
[
"###### About GeoNet GNSS data",
"_____no_output_____"
],
[
"GeoNet uses GNSS technology to work out the precise positions of over 190 stations in and around NZ everyday. These positions are used to generate a displacement timeseries for each station, so we can observe how much and how quickly each station moves. <br>\nThis data comes split into 3 components:\n<ul>\n <li> The displacement in the east-west direction where east is positive displacement. This data has a typeID of \"e\"\n <li> The displacement in the north-south direction where north is a positive displacement. This data has a typeID of \"n\"\n <li> The displacement in the up-down direction where up is a positive displacement. This data has a typeID of \"u\"</ul>\nFor more on data types go to http://fits.geonet.org.nz/type (for best formatting use firefox) ",
"_____no_output_____"
],
[
"## 2. Building the base FITS query",
"_____no_output_____"
],
[
"###### Import packages",
"_____no_output_____"
]
],
[
[
"import requests\nimport pandas as pd\nimport datetime\nimport matplotlib.pyplot as plt\npd.plotting.register_matplotlib_converters()",
"_____no_output_____"
]
],
[
[
"###### Set URL and endpoint",
"_____no_output_____"
]
],
[
[
"base_url = \"http://fits.geonet.org.nz/\"\nendpoint = \"observation\"",
"_____no_output_____"
]
],
[
[
"The base URL should be set as above to access the FITS database webservice containing the GeoNet GNSS data. The endpoint is set to observation to get the data itself in csv format. There are other endpoints which will return different information such as plot and site. To learn more go to https://fits.geonet.org.nz/api-docs/",
"_____no_output_____"
],
[
"###### Combine URL and endpoint",
"_____no_output_____"
]
],
[
[
"url = base_url + endpoint",
"_____no_output_____"
]
],
[
[
"Combine the base URL and the endpoint to give the information to request the data.",
"_____no_output_____"
],
[
"## 3. Get GNSS data",
"_____no_output_____"
],
[
"In this section we will learn how to get all the GNSS observation data from a site and put it into a pandas dataframe, so we can plot and save the data",
"_____no_output_____"
],
[
"###### Set query parameters",
"_____no_output_____"
]
],
[
[
"parameters ={\"typeID\": \"e\", \"siteID\": \"HANM\"}",
"_____no_output_____"
]
],
[
[
"Set the parameters to get the east component(`'typeID':'e'`) of the GNSS station in the Hanmer Basin (`'siteID': 'HANM'`). To find the 4 letter site ID of a station you can use https://www.geonet.org.nz/data/network/sensor/search to find stations in an area of interest",
"_____no_output_____"
],
[
"##### Get GNSS data",
"_____no_output_____"
]
],
[
[
"response_e = requests.get(url, params=parameters)",
"_____no_output_____"
]
],
[
[
"We use `requests.get` to get the data using the URL we made earlier and the parameters we set in the last stage",
"_____no_output_____"
]
],
[
[
"parameters[\"typeID\"] = \"n\"\nresponse_n = requests.get(url, params=parameters)\nparameters[\"typeID\"] = \"u\"\nresponse_u = requests.get(url, params=parameters)",
"_____no_output_____"
]
],
[
[
"Here we've changed the typeID in the parameters dictionary to get the other components for the GNSS station",
"_____no_output_____"
],
[
"###### Check that your requests worked ",
"_____no_output_____"
]
],
[
[
"print (\"The Response status code of the east channel is\", response_e.status_code)\nprint (\"The Response status code of the north channel is\",response_n.status_code)\nprint (\"The Response status code of the up channel is\",response_u.status_code)",
"The Response status code of the east channel is 200\nThe Response status code of the north channel is 200\nThe Response status code of the up channel is 200\n"
]
],
[
[
"The response status code says whether we were successful in getting the data requested and why not if we were unsuccessful:\n<ul>\n<li>200 -- everything went okay, and the result has been returned (if any)\n<li>301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed.\n<li>400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things.\n<li>404 -- the resource you tried to access wasn't found on the server.\n</ul>",
"_____no_output_____"
],
[
"Now that we know our request for data was successful we want to transform it into a format that we can deal with in Python. Right now the data is one long string",
"_____no_output_____"
],
[
"###### Split the string of data",
"_____no_output_____"
]
],
[
[
"data_e = response_e.content.decode(\"utf-8\").split(\"\\n\")",
"_____no_output_____"
]
],
[
[
"The above code decodes the response and then splits the east displacement data on the new line symbol as each line is one point of data. If you are using Python2 remove the code `.decode(\"utf-8\")`",
"_____no_output_____"
],
[
"###### Split the points of data",
"_____no_output_____"
]
],
[
[
"for i in range(0, len(data_e)):\n data_e[i]= data_e[i].split(\",\")",
"_____no_output_____"
]
],
[
[
"The above code uses a for loop to split each point of data on the \",\" symbol as each value is separated by a \",\", producing a list of lists",
"_____no_output_____"
],
[
"###### Reformat data values",
"_____no_output_____"
]
],
[
[
"for i in range(1, (len(data_e)-1)):\n data_e[i][0] = datetime.datetime.strptime(data_e[i][0], '%Y-%m-%dT%H:%M:%S.%fZ') #make 1st value into a datetime object\n data_e[i][1] = float(data_e[i][1]) #makes 2nd value into a decimal number\n data_e[i][2] = float(data_e[i][2]) #makes 3rd value into a decimal number",
"_____no_output_____"
]
],
[
[
"The above code uses a `for` loop to go over each point of data and reformat it, so that the first value in each point is seen as a time, and the second and third values are seen as numbers.<br>\nNote that we choose to miss the first and last data points in our loop as the first data point has the names of the data values and the last point is empty due to how we split the data. ",
"_____no_output_____"
],
[
"###### Convert nested list into dataframe object",
"_____no_output_____"
]
],
[
[
"df_e = pd.DataFrame(data_e[1:-1],index = range(1, (len(data_e)-1)), columns=data_e[0])",
"_____no_output_____"
]
],
[
[
"`data_e[1:-1]` makes the list of data be the data in the data frame, `index = range(1, (len(data_e)-1))` makes rows named 1, 2, ... n where n is the number of data points, and `columns=data_e[0]` gives the columns the names that where in the first line of the response string",
"_____no_output_____"
],
[
"###### Print the first few lines of the data frame",
"_____no_output_____"
]
],
[
[
"df_e.head()",
"_____no_output_____"
]
],
[
[
"Here we can see on the 4th of June 2014 how much the site HANM had moved east (with formal error) in mm from its reference position, this being the midpoint of the position timeseries.",
"_____no_output_____"
],
[
"###### Make everything we have just done into a function ",
"_____no_output_____"
]
],
[
[
"def GNSS_dataframe(data):\n \"\"\"\n This function turns the string of GNSS data received by requests.get\n into a data frame with GNSS data correctly formatted.\n \"\"\"\n data = data.split(\"\\n\") # splits data on the new line symbol\n for i in range(0, len(data)):\n data[i]= data[i].split(\",\")# splits data ponits on the , symbol\n for i in range(1, (len(data)-1)):\n data[i][0] = datetime.datetime.strptime(data[i][0], '%Y-%m-%dT%H:%M:%S.%fZ') #make 1st value into a datetime object\n data[i][1] = float(data[i][1]) #makes 2nd value into a decimal number\n data[i][2] = float(data[i][2]) #makes 3rd value into a decimal number\n df = pd.DataFrame(data[1:-1],index = range(1, (len(data)-1)), columns=data[0]) #make the list into a data frame\n return df \ndf_e.head()",
"_____no_output_____"
]
],
[
[
"This makes code cells 8 to 11 into a function to be called later in the notebook.",
"_____no_output_____"
],
[
"###### Run the above function on the North and Up data ",
"_____no_output_____"
]
],
[
[
"df_n = GNSS_dataframe(response_n.content.decode(\"utf-8\"))\ndf_u = GNSS_dataframe(response_u.content.decode(\"utf-8\"))",
"_____no_output_____"
]
],
[
[
"Make sure to run this function on the content string of the requested data. If in Python2 use remove the code `.decode(\"utf-8\")`",
"_____no_output_____"
],
[
"##### Why make the data into a data frame?",
"_____no_output_____"
],
[
"A data frame is a way of formatting data into a table with column and row name much like a csv file and makes long list of data a lot easier to use. \nData frame data can be called by column or row name making it easy to get the point(s) of data you want. \nData, much like in a table, can be “linked” so that you can do something like plot a data point on a 2D plot.\nSadly, data frames are not a built-in data format in Python, so we must use the pandas (https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) package to be able to make a data frame. ",
"_____no_output_____"
],
[
"## 4. Plot data",
"_____no_output_____"
],
[
"###### Plot the east data",
"_____no_output_____"
]
],
[
[
"e_plot = df_e.plot(x='date-time', y= ' e (mm)', marker='o', title = 'Relative east displacement for HANM')\n#plt.savefig(\"e_plot\") ",
"_____no_output_____"
]
],
[
[
"The above code plots time on the x axis and the displacement in millimetres on the y axis. `marker = ‘o’` makes each point of data a small circle. If you want to save the plot as a png file in the folder you are running this code from you can uncomment ` plt.savefig(\"e_plot\")`",
"_____no_output_____"
],
[
"###### Plot the north data",
"_____no_output_____"
]
],
[
[
"n_plot = df_n.plot(x='date-time', y= ' n (mm)', marker='o', title = 'Relative north displacement for HANM')\n#plt.savefig(\"n_plot\") ",
"_____no_output_____"
]
],
[
[
"###### Plot the up data",
"_____no_output_____"
]
],
[
[
"u_plot = df_u.plot(x='date-time', y= ' u (mm)', marker='o', title='Relative up displacement for HANM')\n#plt.savefig(\"u_plot\") ",
"_____no_output_____"
]
],
[
[
"## 5. Save data",
"_____no_output_____"
],
[
"##### Make a copy of the east data frame",
"_____no_output_____"
]
],
[
[
"df = df_e",
"_____no_output_____"
]
],
[
[
"This makes what is call a deep copy of the data frame with the east displacement data in it. This means that if `df` is edited `df_e` is not effected.",
"_____no_output_____"
],
[
"###### Remove the error column from this copy of the data",
"_____no_output_____"
]
],
[
[
"df = df.drop(\" error (mm)\",axis=1)",
"_____no_output_____"
]
],
[
[
"The above code removes the column called error (mm) and all its data from `df`. ` axis=1` says that we are looking for a column. If we put ` axis=0` we would be looking for a row. ",
"_____no_output_____"
],
[
"###### Add the up and north data to this data frame (but not the respective errors)",
"_____no_output_____"
]
],
[
[
"df[\"u (mm)\"] = df_u[' u (mm)']\ndf[\"n (mm)\"] = df_n[' n (mm)']",
"_____no_output_____"
]
],
[
[
"###### Print the first few lines of the data frame",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"Here we can see the layout of the data frame with the columns date, east displacement, up displacement and north displacement ",
"_____no_output_____"
],
[
"###### Save as CSV file",
"_____no_output_____"
]
],
[
[
"df.to_csv(\"HANM.csv\")",
"_____no_output_____"
]
],
[
[
"This saves the data frame csv file with the same formatting as the data frame. It will have saved in the same place as this notebook is run from and be named HANM",
"_____no_output_____"
],
[
"## Useful links",
"_____no_output_____"
],
[
"<ul>\n <li>This notebook uses Python https://www.python.org/\n <li>This notebook also uses pandas https://pandas.pydata.org/\n <li>There is a notebook on this data set in R at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R/Introduction_to_GNSS_data_using_FITS_in_R.ipynb \n <li>More tutorials on GNSS data can be found at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R \n <li>To learn more about station codes go to https://www.geonet.org.nz/data/supplementary/channels\n <li>For more on data types in FITS go to http://fits.geonet.org.nz/type (for best formatting use firefox)\n <li>For more on FITS go to https://fits.geonet.org.nz/api-docs/ \n</ul>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0c941018e008e032dbafdbf9863530882e91900 | 23,726 | ipynb | Jupyter Notebook | ethnicolr/models/ethnicolr_keras_lstm_fl_voter_name.ipynb | djakaitis/ethnicolr | 9ccb482cce3b2b436be87d553e23f3764100cac4 | [
"MIT"
] | 8 | 2017-05-28T12:31:13.000Z | 2017-09-08T21:15:28.000Z | ethnicolr/models/ethnicolr_keras_lstm_fl_voter_name.ipynb | djakaitis/ethnicolr | 9ccb482cce3b2b436be87d553e23f3764100cac4 | [
"MIT"
] | 3 | 2017-09-05T22:47:54.000Z | 2017-09-14T20:54:07.000Z | ethnicolr/models/ethnicolr_keras_lstm_fl_voter_name.ipynb | djakaitis/ethnicolr | 9ccb482cce3b2b436be87d553e23f3764100cac4 | [
"MIT"
] | 2 | 2017-05-29T03:16:15.000Z | 2021-12-07T13:43:59.000Z | 33.044568 | 260 | 0.482593 | [
[
[
"import keras\nimport tensorflow as tf\nprint(keras.__version__)\nprint(tf.__version__)",
"2021-12-21 15:33:07.636536: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n2021-12-21 15:33:07.636592: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n"
],
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report,confusion_matrix\n\nNGRAMS = 2\nSAMPLE = 1000000\nEPOCHS = 15\n\n# Florida voter\ndf = pd.read_csv('/opt/data/fl_voterreg/fl_reg_name_race.csv.gz')\ndf.dropna(subset=['name_first', 'name_last'], inplace=True)\nsdf = df[df.race.isin(['multi_racial', 'native_indian', 'other', 'unknown']) == False].sample(SAMPLE, random_state=21)\ndel df\n\n# Additional features\nsdf['name_first'] = sdf.name_first.str.title()\nsdf['name_last'] = sdf.name_last.str.title()\n\nsdf",
"_____no_output_____"
],
[
"rdf = sdf.groupby('race').agg({'name_first': 'count'})\nrdf.to_csv('./fl_voter_reg/lstm/fl_name_race.csv', columns=[])\nrdf",
"_____no_output_____"
],
[
"sdf.groupby('race').agg({'name_last': 'nunique'})",
"_____no_output_____"
]
],
[
[
"## Preprocessing the input data",
"_____no_output_____"
]
],
[
[
"# concat last name and first name\nsdf['name_last_name_first'] = sdf['name_last'] + ' ' + sdf['name_first']\n\n# build n-gram list\nvect = CountVectorizer(analyzer='char', max_df=0.3, min_df=3, ngram_range=(NGRAMS, NGRAMS), lowercase=False) \na = vect.fit_transform(sdf.name_last_name_first)\nvocab = vect.vocabulary_\n\n# sort n-gram by freq (highest -> lowest)\nwords = []\nfor b in vocab:\n c = vocab[b]\n #print(b, c, a[:, c].sum())\n words.append((a[:, c].sum(), b))\n #break\nwords = sorted(words, reverse=True)\nwords_list = ['UNK']\nwords_list.extend([w[1] for w in words])\nnum_words = len(words_list)\nprint(\"num_words = %d\" % num_words)\n\n\ndef find_ngrams(text, n):\n a = zip(*[text[i:] for i in range(n)])\n wi = []\n for i in a:\n w = ''.join(i)\n try:\n idx = words_list.index(w)\n except:\n idx = 0\n wi.append(idx)\n return wi\n\n# build X from index of n-gram sequence\nX = np.array(sdf.name_last_name_first.apply(lambda c: find_ngrams(c, NGRAMS)))\n\n# check max/avg feature\nX_len = []\nfor x in X:\n X_len.append(len(x))\n\nmax_feature_len = max(X_len)\navg_feature_len = int(np.mean(X_len))\n\nprint(\"Max feature len = %d, Avg. feature len = %d\" % (max_feature_len, avg_feature_len))\ny = np.array(sdf.race.astype('category').cat.codes)\n\n# Split train and test dataset\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y)",
"num_words = 1260\nMax feature len = 41, Avg. feature len = 12\n"
]
],
[
[
"## Train a LSTM model\n\nref: http://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/",
"_____no_output_____"
]
],
[
[
"'''The dataset is actually too small for LSTM to be of any advantage\ncompared to simpler, much faster methods such as TF-IDF + LogReg.\nNotes:\n\n- RNNs are tricky. Choice of batch size is important,\nchoice of loss and optimizer is critical, etc.\nSome configurations won't converge.\n\n- LSTM loss decrease patterns during training can be quite different\nfrom what you see with CNNs/MLPs/etc.\n'''\nfrom keras.preprocessing import sequence\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Embedding, Dropout, Activation\nfrom keras.layers import LSTM\nfrom keras.layers.convolutional import Conv1D\nfrom keras.layers.convolutional import MaxPooling1D\nfrom keras.models import load_model\n\nmax_features = num_words # 20000\nfeature_len = 25 # avg_feature_len # cut texts after this number of words (among top max_features most common words)\nbatch_size = 32\n\nprint(len(X_train), 'train sequences')\nprint(len(X_test), 'test sequences')\n\nprint('Pad sequences (samples x time)')\nX_train = sequence.pad_sequences(X_train, maxlen=feature_len)\nX_test = sequence.pad_sequences(X_test, maxlen=feature_len)\nprint('X_train shape:', X_train.shape)\nprint('X_test shape:', X_test.shape)\n\nnum_classes = np.max(y_train) + 1\nprint(num_classes, 'classes')\n\nprint('Convert class vector to binary class matrix '\n '(for use with categorical_crossentropy)')\ny_train = tf.keras.utils.to_categorical(y_train, num_classes)\ny_test = tf.keras.utils.to_categorical(y_test, num_classes)\nprint('y_train shape:', y_train.shape)\nprint('y_test shape:', y_test.shape)",
"800000 train sequences\n200000 test sequences\nPad sequences (samples x time)\nX_train shape: (800000, 25)\nX_test shape: (200000, 25)\n4 classes\nConvert class vector to binary class matrix (for use with categorical_crossentropy)\ny_train shape: (800000, 4)\ny_test shape: (200000, 4)\n"
],
[
"print('Build model...')\n\nmodel = Sequential()\nmodel.add(Embedding(num_words, 32, input_length=feature_len))\nmodel.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\n\n# try using different optimizers and different optimizer configs\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\nprint(model.summary())",
"Build model...\nModel: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, 25, 32) 40320 \n_________________________________________________________________\nlstm (LSTM) (None, 128) 82432 \n_________________________________________________________________\ndense (Dense) (None, 4) 516 \n=================================================================\nTotal params: 123,268\nTrainable params: 123,268\nNon-trainable params: 0\n_________________________________________________________________\nNone\n"
],
[
"print('Train...')\nmodel.fit(X_train, y_train, batch_size=batch_size, epochs=EPOCHS,\n validation_split=0.1, verbose=1)\nscore, acc = model.evaluate(X_test, y_test,\n batch_size=batch_size, verbose=1)\nprint('Test score:', score)\nprint('Test accuracy:', acc)",
"Train...\n"
],
[
"print('Test score:', score)\nprint('Test accuracy:', acc)",
"Test score: 0.45394647121429443\nTest accuracy: 0.8408349752426147\n"
]
],
[
[
"## Confusion Matrix",
"_____no_output_____"
]
],
[
[
"p = model.predict(X_test, verbose=2) # to predict probability\ny_pred = np.argmax(p, axis=-1)\ntarget_names = list(sdf.race.astype('category').cat.categories)\nprint(classification_report(np.argmax(y_test, axis=1), y_pred, target_names=target_names))\nprint(confusion_matrix(np.argmax(y_test, axis=1), y_pred))",
"6250/6250 - 32s\n precision recall f1-score support\n\n asian 0.81 0.42 0.55 3876\n hispanic 0.82 0.86 0.84 33455\n nh_black 0.76 0.43 0.55 28290\n nh_white 0.86 0.93 0.89 134379\n\n accuracy 0.84 200000\n macro avg 0.81 0.66 0.71 200000\nweighted avg 0.83 0.84 0.83 200000\n\n[[ 1612 478 236 1550]\n [ 66 28643 457 4289]\n [ 76 666 12290 15258]\n [ 231 5291 3235 125622]]\n"
]
],
[
[
"## Save model",
"_____no_output_____"
]
],
[
[
"model.save('./fl_voter_reg/lstm/fl_all_name_lstm.h5')",
"_____no_output_____"
],
[
"words_df = pd.DataFrame(words_list, columns=['vocab'])\nwords_df.to_csv('./fl_voter_reg/lstm/fl_all_name_vocab.csv', index=False, encoding='utf-8')\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c942bdebfe99713ca4542b59fa30f1daf34c16 | 1,109 | ipynb | Jupyter Notebook | week-6-project.ipynb | Zogren01/Week6-lab | 74e610acfc4603dc16ef16c3ba9cbaf38232252f | [
"AFL-1.1"
] | null | null | null | week-6-project.ipynb | Zogren01/Week6-lab | 74e610acfc4603dc16ef16c3ba9cbaf38232252f | [
"AFL-1.1"
] | null | null | null | week-6-project.ipynb | Zogren01/Week6-lab | 74e610acfc4603dc16ef16c3ba9cbaf38232252f | [
"AFL-1.1"
] | null | null | null | 18.180328 | 53 | 0.475203 | [
[
[
"#code for week 6 lab\nname = input(\"Please enter your name.\")\nnum = int(input(\"Now, enter an integer.\"))\nfor a in range(num):\n print(name)",
"Please enter your name. Zach\nNow, enter an integer. 7\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c95b4edf01eba67f5f221356bb288501504b63 | 9,955 | ipynb | Jupyter Notebook | cuml/pca_demo.ipynb | divyegala/notebooks | e4b2893a033823199634e5d39bc7cbe08f5298ac | [
"Apache-2.0"
] | null | null | null | cuml/pca_demo.ipynb | divyegala/notebooks | e4b2893a033823199634e5d39bc7cbe08f5298ac | [
"Apache-2.0"
] | null | null | null | cuml/pca_demo.ipynb | divyegala/notebooks | e4b2893a033823199634e5d39bc7cbe08f5298ac | [
"Apache-2.0"
] | null | null | null | 31.109375 | 356 | 0.57559 | [
[
[
"\n# Principal Componenet Analysis (PCA)\nThe PCA algorithm is a dimensionality reduction algorithm which works really well for datasets which have correlated columns. It combines the features of X in linear combination such that the new components capture the most information of the data. \nThe PCA model is implemented in the cuML library and can accept the following parameters: \n1. svd_solver: selects the type of algorithm used: Jacobi or full (default = full)\n2. n_components: the number of top K vectors to be present in the output (default = 1)\n3. random_state: select a random state if the results should be reproducible across multiple runs (default = None)\n4. copy: if 'True' then it copies the data and removes mean from it else the data will be overwritten with its mean centered version (default = True)\n5. whiten: if True, de-correlates the components (default = False)\n6. tol: if the svd_solver = 'Jacobi' then this variable is used to set the tolerance (default = 1e-7)\n7. iterated_power: if the svd_solver = 'Jacobi' then this variable decides the number of iterations (default = 15)\n\nThe cuml implementation of the PCA model has the following functions that one can run:\n1. Fit: it fits the model with the dataset\n2. Fit_transform: fits the PCA model with the dataset and performs dimensionality reduction on it\n3. Inverse_transform: returns the original dataset when the transformed dataset is passed as the input\n4. Transform: performs dimensionality reduction on the dataset\n5. Get_params: returns the value of the parameters of the PCA model\n6. Set_params: allows the user to set the value of the parameters of the PCA model\n\nThe model accepts only numpy arrays or cudf dataframes as the input. In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the PCA model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.html",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.decomposition import PCA as skPCA\nfrom cuml import PCA as cumlPCA\nimport cudf\nimport os",
"_____no_output_____"
]
],
[
[
"# Helper Functions",
"_____no_output_____"
]
],
[
[
"# calculate the time required by a cell to run\nfrom timeit import default_timer\n\nclass Timer(object):\n def __init__(self):\n self._timer = default_timer\n \n def __enter__(self):\n self.start()\n return self\n\n def __exit__(self, *args):\n self.stop()\n\n def start(self):\n \"\"\"Start the timer.\"\"\"\n self.start = self._timer()\n\n def stop(self):\n \"\"\"Stop the timer. Calculate the interval in seconds.\"\"\"\n self.end = self._timer()\n self.interval = self.end - self.start",
"_____no_output_____"
],
[
"# check if the mortgage dataset is present and then extract the data from it, else do not run \nimport gzip\ndef load_data(nrows, ncols, cached = 'data/mortgage.npy.gz'):\n if os.path.exists(cached):\n print('use mortgage data')\n with gzip.open(cached) as f:\n X = np.load(f)\n X = X[np.random.randint(0,X.shape[0]-1,nrows),:ncols]\n else:\n # throws FileNotFoundError error if mortgage dataset is not present\n raise FileNotFoundError('Please download the required dataset or check the path')\n df = pd.DataFrame({'fea%d'%i:X[:,i] for i in range(X.shape[1])})\n return df",
"_____no_output_____"
],
[
"# this function checks if the results obtained from two different methods (sklearn and cuml) are the equal\nfrom sklearn.metrics import mean_squared_error\ndef array_equal(a,b,threshold=2e-3,with_sign=True):\n a = to_nparray(a)\n b = to_nparray(b)\n if with_sign == False:\n a,b = np.abs(a),np.abs(b)\n error = mean_squared_error(a,b)\n res = error<threshold\n return res\n\n# the function converts a variable from ndarray or dataframe format to numpy array\ndef to_nparray(x):\n if isinstance(x,np.ndarray) or isinstance(x,pd.DataFrame):\n return np.array(x)\n elif isinstance(x,np.float64):\n return np.array([x])\n elif isinstance(x,cudf.DataFrame) or isinstance(x,cudf.Series):\n return x.to_pandas().values\n return x ",
"_____no_output_____"
]
],
[
[
"# Run tests",
"_____no_output_____"
]
],
[
[
"%%time\n# nrows = number of samples\n# ncols = number of features of each sample\n\nnrows = 2**15\nnrows = int(nrows * 1.5)\nncols = 400\n\nX = load_data(nrows,ncols)\nprint('data',X.shape)",
"use mortgage data\ndata (49152, 400)\nCPU times: user 3.82 s, sys: 622 ms, total: 4.44 s\nWall time: 4.44 s\n"
],
[
"# set parameters for the PCA model\nn_components = 10\nwhiten = False\nrandom_state = 42\nsvd_solver=\"full\"",
"_____no_output_____"
],
[
"%%time\n# use the sklearn PCA on the dataset\npca_sk = skPCA(n_components=n_components,svd_solver=svd_solver, \n whiten=whiten, random_state=random_state)\n# creates an embedding\nresult_sk = pca_sk.fit_transform(X)",
"CPU times: user 3.49 s, sys: 413 ms, total: 3.9 s\nWall time: 906 ms\n"
],
[
"%%time\n# convert the pandas dataframe to cudf dataframe\nX = cudf.DataFrame.from_pandas(X)",
"CPU times: user 965 ms, sys: 31.6 ms, total: 996 ms\nWall time: 994 ms\n"
],
[
"%%time\n# use the cuml PCA model on the dataset\npca_cuml = cumlPCA(n_components=n_components,svd_solver=svd_solver, \n whiten=whiten, random_state=random_state)\n# obtain the embedding of the model\nresult_cuml = pca_cuml.fit_transform(X)",
"CPU times: user 688 ms, sys: 160 ms, total: 848 ms\nWall time: 848 ms\n"
],
[
"# calculate the attributes of the two models and compare them\nfor attr in ['singular_values_','components_','explained_variance_',\n 'explained_variance_ratio_']:\n passed = array_equal(getattr(pca_sk,attr),getattr(pca_cuml,attr))\n message = 'compare pca: cuml vs sklearn {:>25} {}'.format(attr,'equal' if passed else 'NOT equal')\n print(message)",
"compare pca: cuml vs sklearn singular_values_ equal\ncompare pca: cuml vs sklearn components_ equal\ncompare pca: cuml vs sklearn explained_variance_ equal\ncompare pca: cuml vs sklearn explained_variance_ratio_ equal\n"
],
[
"# compare the results of the two models\npassed = array_equal(result_sk,result_cuml)\nmessage = 'compare pca: cuml vs sklearn transformed results %s'%('equal'if passed else 'NOT equal')\nprint(message)",
"compare pca: cuml vs sklearn transformed results equal\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c9649724476728ea58dcbaef7ab61487ec29ad | 15,683 | ipynb | Jupyter Notebook | using_open_source_model_packages/tensorflow_od_model/using_object_detection_model.ipynb | Leggerla/aws-marketplace-machine-learning | 4cb36593b0bbc57be1b138f33124c1e795d45f1f | [
"MIT-0"
] | 5 | 2021-01-25T18:37:59.000Z | 2021-12-08T02:40:05.000Z | using_open_source_model_packages/tensorflow_od_model/using_object_detection_model.ipynb | Leggerla/aws-marketplace-machine-learning | 4cb36593b0bbc57be1b138f33124c1e795d45f1f | [
"MIT-0"
] | 1 | 2021-08-25T12:46:35.000Z | 2021-08-25T12:46:35.000Z | using_open_source_model_packages/tensorflow_od_model/using_object_detection_model.ipynb | durgasury/aws-marketplace-machine-learning | 5c6ca6e441111b526ff2fe359183360e71e37124 | [
"MIT-0"
] | 8 | 2020-11-11T15:44:24.000Z | 2021-12-08T00:32:16.000Z | 36.901176 | 1,678 | 0.599375 | [
[
[
"# Deploy and perform inference on Model Package from AWS Marketplace \n\nThis notebook provides you instructions on how to deploy and perform inference on model packages from AWS Marketplace object detection model.\n\nThis notebook is compatible only with those object detection model packages which this notebook is linked to.\n\n#### Pre-requisites:\n1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n1. Ensure that IAM role used has **AmazonSageMakerFullAccess**\n1. To deploy this ML model successfully, ensure that:\n 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: \n 1. **aws-marketplace:ViewSubscriptions**\n 1. **aws-marketplace:Unsubscribe**\n 1. **aws-marketplace:Subscribe** \n 2. or your AWS account has a subscription to this object detection model. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n\n#### Contents:\n1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package)\n2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference)\n 1. [Create an endpoint](#A.-Create-an-endpoint)\n 2. [Create input payload](#B.-Create-input-payload)\n 3. [Perform real-time inference](#C.-Perform-real-time-inference)\n 4. [Visualize output](#D.-Visualize-output)\n 5. [Delete the endpoint](#E.-Delete-the-endpoint)\n3. [Perform batch inference](#3.-Perform-batch-inference) \n4. [Clean-up](#4.-Clean-up)\n 1. [Delete the model](#A.-Delete-the-model)\n 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional))\n \n\n#### Usage instructions\nYou can run this notebook one cell at a time (By using Shift+Enter for running a cell).\n\n**Note** - This notebook requires you to follow instructions and specify values for parameters, as instructed.",
"_____no_output_____"
],
[
"### 1. Subscribe to the model package",
"_____no_output_____"
],
[
"To subscribe to the model package:\n1. Open the model package listing page you opened this notebook for.\n1. On the AWS Marketplace listing, click on the **Continue to subscribe** button.\n1. On the **Subscribe to this software** page, review and click on **\"Accept Offer\"** if you and your organization agrees with EULA, pricing, and support terms. \n1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell.",
"_____no_output_____"
]
],
[
[
"model_package_arn='<Customer to specify Model package ARN corresponding to their AWS region>' ",
"_____no_output_____"
],
[
"import json \nfrom sagemaker import ModelPackage\nimport sagemaker as sage\nfrom sagemaker import get_execution_role\nimport matplotlib.patches as patches\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom PIL import Image\nfrom PIL import ImageColor",
"_____no_output_____"
],
[
"role = get_execution_role()\nsagemaker_session = sage.Session()\nboto3 = sagemaker_session.boto_session\nbucket = sagemaker_session.default_bucket()\nregion = sagemaker_session.boto_region_name\n\ns3 = boto3.client(\"s3\")\nruntime= boto3.client('runtime.sagemaker')",
"_____no_output_____"
]
],
[
[
"In next step, you would be deploying the model for real-time inference. For information on how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).",
"_____no_output_____"
],
[
"### 2. Create an endpoint and perform real-time inference",
"_____no_output_____"
]
],
[
[
"model_name='object-detection-model'",
"_____no_output_____"
],
[
"#The object detection model packages this notebook notebook is compatible with, support application/x-image as the \n#content-type.\ncontent_type='application/x-image'",
"_____no_output_____"
]
],
[
[
"Review and update the compatible instance type for the model package in the following cell.",
"_____no_output_____"
]
],
[
[
"real_time_inference_instance_type='ml.g4dn.xlarge'\nbatch_transform_inference_instance_type='ml.p2.xlarge'",
"_____no_output_____"
]
],
[
[
"#### A. Create an endpoint",
"_____no_output_____"
]
],
[
[
"#create a deployable model from the model package.\nmodel = ModelPackage(role=role,\n model_package_arn=model_package_arn,\n sagemaker_session=sagemaker_session)\n\n#Deploy the model\npredictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name)",
"_____no_output_____"
]
],
[
[
"Once endpoint has been created, you would be able to perform real-time inference.",
"_____no_output_____"
],
[
"#### B. Prepare input file for performing real-time inference\nIn this step, we will download class_id_to_label_mapping from S3 bucket. The mapping files has been downloaded from [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).",
"_____no_output_____"
]
],
[
[
"s3_bucket = f\"jumpstart-cache-prod-{region}\"\nkey_prefix = \"inference-notebook-assets\"\n\ndef download_from_s3(key_filenames):\n for key_filename in key_filenames:\n s3.download_file(s3_bucket, f\"{key_prefix}/{key_filename}\", key_filename)\n\nimg_jpg = \"Naxos_Taverna.jpg\"\n\n#Download image\ndownload_from_s3(key_filenames=[img_jpg])",
"_____no_output_____"
],
[
"#Mapping from model predictions to class labels \nclass_id_to_label = {\"1\": \"person\", \"2\": \"bicycle\", \"3\": \"car\", \"4\": \"motorcycle\", \"5\": \"airplane\", \"6\": \"bus\", \"7\": \"train\", \"8\": \"truck\", \"9\": \"boat\", \"10\": \"traffic light\", \"11\": \"fire hydrant\", \"13\": \"stop sign\", \"14\": \"parking meter\", \"15\": \"bench\", \"16\": \"bird\", \"17\": \"cat\", \"18\": \"dog\", \"19\": \"horse\", \"20\": \"sheep\", \"21\": \"cow\", \"22\": \"elephant\", \"23\": \"bear\", \"24\": \"zebra\", \"25\": \"giraffe\", \"27\": \"backpack\", \"28\": \"umbrella\", \"31\": \"handbag\", \"32\": \"tie\", \"33\": \"suitcase\", \"34\": \"frisbee\", \"35\": \"skis\", \"36\": \"snowboard\", \"37\": \"sports ball\", \"38\": \"kite\", \"39\": \"baseball bat\", \"40\": \"baseball glove\", \"41\": \"skateboard\", \"42\": \"surfboard\", \"43\": \"tennis racket\", \"44\": \"bottle\", \"46\": \"wine glass\", \"47\": \"cup\", \"48\": \"fork\", \"49\": \"knife\", \"50\": \"spoon\", \"51\": \"bowl\", \"52\": \"banana\", \"53\": \"apple\", \"54\": \"sandwich\", \"55\": \"orange\", \"56\": \"broccoli\", \"57\": \"carrot\", \"58\": \"hot dog\", \"59\": \"pizza\", \"60\": \"donut\", \"61\": \"cake\", \"62\": \"chair\", \"63\": \"couch\", \"64\": \"potted plant\", \"65\": \"bed\", \"67\": \"dining table\", \"70\": \"toilet\", \"72\": \"tv\", \"73\": \"laptop\", \"74\": \"mouse\", \"75\": \"remote\", \"76\": \"keyboard\", \"77\": \"cell phone\", \"78\": \"microwave\", \"79\": \"oven\", \"80\": \"toaster\", \"81\": \"sink\", \"82\": \"refrigerator\", \"84\": \"book\", \"85\": \"clock\", \"86\": \"vase\", \"87\": \"scissors\", \"88\": \"teddy bear\", \"89\": \"hair drier\", \"90\": \"toothbrush\"}",
"_____no_output_____"
]
],
[
[
"#### C. Query endpoint that you have created with the opened images",
"_____no_output_____"
]
],
[
[
"#perform_inference method performs inference on the endpoint and prints predictions.\ndef perform_inference(filename):\n response = runtime.invoke_endpoint(EndpointName='test-tensorflow-test', ContentType=content_type, Body=input_img)\n model_predictions = json.loads(response['Body'].read())\n return model_predictions",
"_____no_output_____"
],
[
"with open(img_jpg, 'rb') as file: input_img = file.read()\nmodel_predictions = perform_inference(input_img)\nresult = {key: np.array(value)[np.newaxis, ...] if isinstance(value, list) else np.array([value]) for key, value in model_predictions['predictions'][0].items()}",
"_____no_output_____"
]
],
[
[
"#### D. Display model predictions as bounding boxes on the input image ",
"_____no_output_____"
]
],
[
[
"colors = list(ImageColor.colormap.values())\n\nimage_pil = Image.open(img_jpg)\nimage_np = np.array(image_pil)\n\nplt.figure(figsize=(20,20))\nax = plt.axes()\nax.imshow(image_np)\nclasses = [class_id_to_label[str(int(index))] for index in result[\"detection_classes\"][0]]\nbboxes, confidences = result[\"detection_boxes\"][0], result[\"detection_scores\"][0]\nfor idx in range(20):\n if confidences[idx] < 0.3:\n break\n ymin, xmin, ymax, xmax = bboxes[idx]\n im_width, im_height = image_pil.size\n left, right, top, bottom = xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height\n x, y = left, bottom\n color = colors[hash(classes[idx]) % len(colors)]\n rect = patches.Rectangle((left, bottom), right-left, top-bottom, linewidth=3, edgecolor=color, facecolor='none')\n ax.add_patch(rect)\n ax.text(left, top, \"{} {:.0f}%\".format(classes[idx], confidences[idx]*100), bbox=dict(facecolor='white', alpha=0.5))",
"_____no_output_____"
]
],
[
[
"#### D. Delete the endpoint",
"_____no_output_____"
],
[
"Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged.",
"_____no_output_____"
]
],
[
[
"model.sagemaker_session.delete_endpoint(model_name)\nmodel.sagemaker_session.delete_endpoint_config(model_name)",
"_____no_output_____"
]
],
[
[
"### 3. Perform batch inference",
"_____no_output_____"
],
[
"In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html)",
"_____no_output_____"
]
],
[
[
"#upload the batch-transform job input files to S3\ntransform_input_key_prefix = 'object-detection-model-transform-input'\ntransform_input = sagemaker_session.upload_data(img_jpg, key_prefix=transform_input_key_prefix) \nprint(\"Transform input uploaded to \" + transform_input)",
"_____no_output_____"
],
[
"#Run the batch-transform job\ntransformer = model.transformer(1, batch_transform_inference_instance_type)\ntransformer.transform(transform_input, content_type=content_type)\ntransformer.wait()",
"_____no_output_____"
],
[
"# output is available on following path\ntransformer.output_path",
"_____no_output_____"
]
],
[
[
"### 4. Clean-up",
"_____no_output_____"
],
[
"#### A. Delete the model",
"_____no_output_____"
]
],
[
[
"model.delete_model()",
"_____no_output_____"
]
],
[
[
"#### B. Unsubscribe to the listing (optional)",
"_____no_output_____"
],
[
"If you would like to unsubscribe to the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model. \n\n**Steps to unsubscribe to product from AWS Marketplace**:\n1. Navigate to __Machine Learning__ tab on [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=mlmp_gitdemo_indust)\n2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__ to cancel the subscription.\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |