hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d0bce99ec0451bdcd1988c9ef9cd672ead2089b7 | 4,454 | ipynb | Jupyter Notebook | notebook/introduction/05-spin_fluctuators.ipynb | highrizer/HOQSTTutorials.jl | fb3fb43b7edd7c24fe505e47fa9a3dfd4af1b5eb | [
"MIT"
] | 9 | 2020-11-11T09:02:23.000Z | 2021-11-10T20:31:00.000Z | notebook/introduction/05-spin_fluctuators.ipynb | highrizer/HOQSTTutorials.jl | fb3fb43b7edd7c24fe505e47fa9a3dfd4af1b5eb | [
"MIT"
] | 9 | 2020-11-05T22:37:40.000Z | 2021-11-10T20:55:39.000Z | notebook/introduction/05-spin_fluctuators.ipynb | highrizer/HOQSTTutorials.jl | fb3fb43b7edd7c24fe505e47fa9a3dfd4af1b5eb | [
"MIT"
] | 3 | 2020-12-05T20:20:09.000Z | 2022-02-07T06:37:54.000Z | 44.54 | 656 | 0.577234 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0bcf889e3bce34da84236f3a98aab9b85ffdad7 | 46,625 | ipynb | Jupyter Notebook | tests/test_bulk_data.ipynb | Carmelsalad/hackathon | 65472896381f5837d494534df63f1304b1a2b2fe | [
"MIT"
] | 231 | 2019-09-25T13:30:00.000Z | 2022-03-26T08:00:47.000Z | tests/test_bulk_data.ipynb | Hvass-Labs/simfin | 7ab728694c95386c5e54a8afbacd41668808927a | [
"MIT"
] | 11 | 2019-10-01T14:50:15.000Z | 2022-02-23T10:35:47.000Z | tests/test_bulk_data.ipynb | Hvass-Labs/simfin | 7ab728694c95386c5e54a8afbacd41668808927a | [
"MIT"
] | 36 | 2019-09-30T16:14:48.000Z | 2022-03-19T19:59:30.000Z | 31.208166 | 257 | 0.55037 | [
[
[
"# SimFin Test All Datasets\n\nThis Notebook performs automated testing of all the bulk datasets from SimFin. The datasets are first downloaded from the SimFin server and then various tests are performed on the data. An exception is raised if any problems are found.\n\nThis Notebook can be run as usual if you have `simfin` installed, by running the following command from the directory where this Notebook is located:\n\n jupyter notebook\n\nThis Notebook can also be run using `pytest` which makes automated testing easier. You need to have the Python packages `simfin` and `nbval` installed. Then execute the following command from the directory where this Notebook is located:\n\n pytest --nbval-lax -v test_bulk_data.ipynb\n \nThis runs the entire Notebook and outputs error messages for all the cells that raised an exception.",
"_____no_output_____"
],
[
"## IMPORTANT!\n\n- When you make changes to this Notebook, remember to clear all cells before pushing it back to github, because that makes it easier to see the difference from the previous version. Select menu-item \"Kernel / Restart & Clear Output\".\n\n- If you set `refresh_days=0` then it will force a new download of all the datasets.",
"_____no_output_____"
]
],
[
[
"# Set this to 0 to force a new download of all datasets.\nrefresh_days = 30",
"_____no_output_____"
]
],
[
[
"## Imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport warnings\nimport sys\nimport os\nfrom IPython.display import display",
"_____no_output_____"
],
[
"import simfin as sf\nfrom simfin.names import *\nfrom simfin.datasets import *",
"_____no_output_____"
]
],
[
[
"## Are We Running Pytest?",
"_____no_output_____"
]
],
[
[
"# Boolean whether this is being run under pytest.\n# This is useful when printing examples of errors\n# if they take a long time to compute, because it\n# is not necessary when running pytest.\nrunning_pytest = ('PYTEST_CURRENT_TEST' in os.environ)",
"_____no_output_____"
]
],
[
[
"## Configure SimFin",
"_____no_output_____"
]
],
[
[
"sf.set_data_dir('~/simfin_data/')",
"_____no_output_____"
],
[
"sf.load_api_key(path='~/simfin_api_key.txt', default_key='free')",
"_____no_output_____"
]
],
[
[
"## Load All Datasets",
"_____no_output_____"
]
],
[
[
"%%time\ndata = AllDatasets(refresh_days=refresh_days)",
"_____no_output_____"
],
[
"# Example for annual Income Statements.\ndata.get(dataset='income', variant='annual', market='us').head()",
"_____no_output_____"
]
],
[
[
"## Lists of Datasets\n\nThese are in addition to the lists of datasets from `datasets.py`.",
"_____no_output_____"
]
],
[
[
"# Datasets that have a column named TICKER.\n# Some tests are probably only necessary for 'companies'\n# but we might as well test all datasets that use tickers.\ndatasets_tickers = ['companies'] + datasets_fundamental() + datasets_shareprices()",
"_____no_output_____"
]
],
[
[
"## Function for Testing Datasets",
"_____no_output_____"
]
],
[
[
"def test_datasets(test_name, datasets=None, variants=None,\n markets=None,\n test_func=None,\n test_func_rows=None,\n test_func_groups=None,\n group_index=SIMFIN_ID,\n process_df_none=False, raise_exception=True):\n \"\"\"\n Helper-function for running tests on many Pandas DataFrames.\n \n :param test_name:\n String with the name of the test.\n \n :param datasets:\n By default (datasets=None) all possible datasets\n will be tested. Otherwise datasets is a list of\n strings with dataset names to be tested.\n \n :param variants:\n By default (variants=None) all possible variants\n for each dataset will be tested, as defined in\n simfin.datasets.valid_variants. Otherwise variants\n is a list of strings and only those variants\n will be tested.\n \n :param markets:\n By default (markets=None) all possible markets\n for each dataset will be tested, as defined in\n simfin.datasets.valid_markets. Otherwise markets\n is a list of strings and only those markets\n will be tested.\n \n :param test_func:\n Function to be called on the Pandas DataFrame for\n each dataset. If there are problems with the DataFrame\n then return True, otherwise return False.\n \n This is generally used for testing problems with the\n entire DataFrame. For example, if the dataset is empty:\n\n test_func = lambda df: len(df) == 0\n \n If this returns True then there is a problem with df.\n \n :param test_func_rows:\n Similar to test_func but for testing individual rows\n of a DataFrame. For example, test if SHARES_BASIC is\n None, zero or negative:\n \n test_func_rows = lambda df: (df[SHARES_BASIC] is None or\n df[SHARES_BASIC] <= 0)\n\n :param test_func_groups:\n Similar to test_func but for testing groups of rows\n in a DataFrame. For example, test on a per-stock basis\n whether SHARES_BASIC is greater than twice its mean:\n \n test_func_groups = lambda df: (df[SHARES_BASIC] >\n df[SHARES_BASIC].mean() * 2).any()\n\n :param group_index:\n String with the column-name used to create groups when\n using test_func_groups e.g. SIMFIN_ID for grouping by companies.\n\n :param process_df_none:\n Boolean whether to process (True) or skip (False)\n DataFrames that are None, because they could not be loaded.\n\n :param raise_exception:\n Boolean. 
If True then raise an exception if there were\n any problems, but wait until all datasets have been\n tested, so we can print the list of datasets with problems.\n If False then only show a warning if there were problems.\n \n :return:\n None\n \"\"\"\n\n # Convert to test_func.\n if test_func_rows is not None:\n # Convert test_func_rows to test_func.\n test_func = lambda df: test_func_rows(df).any()\n elif test_func_groups is not None:\n # Convert test_func_groups to test_func.\n # NOTE: We must use .any(axis=None) because if the DataFrame\n # is empty then the groupby returns an empty DataFrame, and\n # .any() then returns an empty Series, but we need a boolean.\n # By using .any(axis=None) it is reduced to a boolean value.\n test_func = lambda df: df.groupby(group_index, group_keys=False).apply(test_func_groups).any(axis=None)\n\n # Number of problems found.\n num_problems = 0\n\n # For all datasets, variants and markets.\n for dataset, variant, market, df in data.iter(datasets=datasets,\n variants=variants,\n markets=markets):\n # Also process DataFrames that are None,\n # because they could not be loaded?\n if df is not None or process_df_none:\n try:\n # Perform the user-supplied test.\n problem_found = test_func(df)\n except:\n # An exception occurred so we consider\n # that to be a problem.\n problem_found = True\n \n if problem_found:\n # Increase the number of problems found.\n num_problems += 1\n\n # Print the test's name. Only done once.\n if num_problems==1:\n print(test_name, file=sys.stderr)\n\n # Print the dataset details.\n msg = \"dataset='{}', variant='{}', market='{}'\"\n msg = msg.format(dataset, variant, market)\n print(msg, file=sys.stderr)\n \n # Raise exception or generate warning?\n if num_problems>0:\n if raise_exception:\n raise Exception(test_name)\n else:\n warnings.warn(test_name)",
"_____no_output_____"
]
],
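For orientation, here is a minimal usage sketch of `test_datasets` (not part of the original notebook): it follows the same row-level testing pattern used in the test cells further down, reusing only names already imported above (`REVENUE` from `simfin.names`, `datasets_fundamental` from `simfin.datasets`), and simply flags rows where Revenue is missing.

```python
# Hypothetical example: flag any fundamental-data row where Revenue is missing.
test_name = "REVENUE is missing"
test_func_rows = lambda df: df[REVENUE].isnull()
test_datasets(datasets=datasets_fundamental(),
              test_name=test_name, test_func_rows=test_func_rows)
```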
[
[
"## Function for Getting Rows with Problems\n\nWhen a test has found problems in a dataset, it does not show which specific rows have the problem. You can get all the problematic rows using this function:",
"_____no_output_____"
]
],
[
[
"def get_problem_rows(df, test_func_rows):\n \"\"\"\n Perform the given test on all rows of the given DataFrame\n and return a DataFrame with only the problematic rows.\n \n :param df:\n Pandas DataFrame.\n\n :param test_func_rows:\n Function used for testing each row. This takes\n a Pandas DataFrame as an argument and returns\n a Pandas Series of booleans whether each row\n in the original DataFrame has the error.\n \n For example:\n \n test_func_rows = lambda df: (df[SHARES_BASIC] is None or\n df[SHARES_BASIC] <= 0)\n\n :return:\n Pandas DataFrame with only the problematic rows.\n \"\"\"\n\n # Index of the rows with problems.\n idx = test_func_rows(df)\n \n # Extract the rows with problems.\n df2 = df[idx]\n \n return df2",
"_____no_output_____"
]
],
[
[
"## Function for Getting Rows with Missing Data",
"_____no_output_____"
]
],
[
[
"def get_missing_data_rows(df, column):\n \"\"\"\n Return the rows of `df` where the data for the given\n column is missing i.e. it is either NaN, None, or Null.\n \n :param df:\n Pandas DataFrame.\n \n :param column:\n Name of the column.\n\n :return:\n Pandas Series with the rows where the\n column-data is missing.\n \"\"\"\n\n # Index for the rows where column-data is missing.\n idx = df[column].isnull()\n\n # Get those rows from the DataFrame.\n df2 = df[idx]\n\n return df2",
"_____no_output_____"
]
],
[
[
"## Function for Getting Problematic Groups",
"_____no_output_____"
]
],
[
[
"def get_problem_groups(df, test_func_groups, group_index):\n \"\"\"\n Perform the given test on the given DataFrame grouped by\n the given index, and return a DataFrame with only the\n problematic groups.\n \n This is used to perform tests on a DataFrame on a per-group\n basis, e.g. per-stock or per-company, and return a new\n DataFrame with only the rows for the stocks that had problems.\n \n :param df:\n Pandas DataFrame.\n\n :param test_func_groups:\n Similar to test_func but for testing groups of rows\n in a DataFrame. For example, test on a per-stock basis\n whether SHARES_BASIC is greater than twice its mean:\n \n test_func_groups = lambda df: (df[SHARES_BASIC] >\n df[SHARES_BASIC].mean() * 2)\n\n :param group_index:\n String with the column-name used to create groups when\n using test_func_groups e.g. SIMFIN_ID for grouping by companies.\n\n :return:\n Pandas DataFrame with only the problematic groups.\n \"\"\"\n\n return df.groupby(group_index).filter(test_func_groups)",
"_____no_output_____"
]
],
[
[
"## Function for Testing Equality with Tolerance\n\nThis function is useful when comparing floating point numbers, or when comparing accounting numbers that are supposed to have a strict relationship (e.g. Assets = Liabilities + Equity) but we might tolerate a small degree of error in the data e.g. 1%.",
"_____no_output_____"
]
],
[
[
"def isclose(x, y, tolerance=0.01):\n \"\"\"\n Compare whether x and y are approximately equal within\n the given tolerance, which is a ratio so tolerance=0.01\n means that we tolerate max 1% difference between x and y.\n \n This is similar to numpy.isclose() but is a more efficient\n implementation for Pandas which apparently does not have\n this built-in already (v. 0.25.1)\n \n :param x:\n Pandas DataFrame or Series.\n\n :param y:\n Pandas DataFrame or Series.\n\n :param tolerance:\n Max allowed difference as a ratio e.g. 0.01 = 1%.\n\n :return:\n Pandas DataFrame or Series with booleans whether\n x and y are approx. equal.\n \"\"\"\n return (x-y).abs() <= tolerance * y.abs()",
"_____no_output_____"
]
],
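A quick numeric illustration of the tolerance semantics (made-up numbers, not from the datasets): with `tolerance=0.01` the check is `|x - y| <= 0.01 * |y|`, element-wise.

```python
# Made-up numbers to illustrate the 1% tolerance.
x = pd.Series([100.0, 200.0, 300.0])
y = pd.Series([100.5, 205.0, 300.0])
isclose(x, y, tolerance=0.01)
# -> [True, False, True]: 0.5 <= 1.005, but 5.0 > 2.05, and 0.0 <= 3.0
```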
[
[
"# Tests",
"_____no_output_____"
],
[
"## Dataset could not be loaded",
"_____no_output_____"
]
],
[
[
"test_name = \"Dataset could not be loaded\"\ntest_func = lambda df: df is None\ntest_datasets(datasets=datasets_all(),\n test_name=test_name, test_func=test_func,\n process_df_none=True)",
"_____no_output_____"
]
],
[
[
"## Dataset is empty",
"_____no_output_____"
]
],
[
[
"test_name = \"Dataset is empty\"\ntest_func = lambda df: len(df) == 0\n\n# Test for all markets. This only raises a warning,\n# because some markets do have some of their datasets empty.\ntest_datasets(datasets=datasets_all(),\n test_name=test_name, test_func=test_func,\n raise_exception=False)",
"_____no_output_____"
],
[
"# Test only for the 'us' market. This raises an exception.\n# It happened once that all the datasets were empty\n# because of some bug on the server or whatever, so it\n# is important to raise an exception in case this happens again.\ntest_datasets(datasets=datasets_all(), markets=['us'],\n test_name=test_name, test_func=test_func,\n raise_exception=True)",
"_____no_output_____"
],
[
"data.get(dataset='income-insurance', variant='quarterly', market='de')",
"_____no_output_____"
]
],
[
[
"## Shares Basic is None or <= 0",
"_____no_output_____"
]
],
[
[
"test_name = \"SHARES_BASIC is None or <= 0\"\ntest_func_rows = lambda df: (df[SHARES_BASIC] is None or\n df[SHARES_BASIC] <= 0)\ntest_datasets(datasets=datasets_fundamental(),\n test_name=test_name, test_func_rows=test_func_rows)",
"_____no_output_____"
],
[
"# Show the problematic rows for a dataset.\ndf = data.get(dataset='income', variant='annual', market='us')\nget_problem_rows(df=df, test_func_rows=test_func_rows)",
"_____no_output_____"
]
],
[
[
"## Shares Diluted is None or <= 0",
"_____no_output_____"
]
],
[
[
"test_name = \"SHARES_DILUTED is None or <= 0\"\ntest_func_rows = lambda df: (df[SHARES_DILUTED] is None or\n df[SHARES_DILUTED] <= 0)\ntest_datasets(datasets=datasets_fundamental(),\n test_name=test_name, test_func_rows=test_func_rows)",
"_____no_output_____"
],
[
"# Show the problematic rows for a dataset.\ndf = data.get(dataset='income', variant='annual', market='us')\nget_problem_rows(df=df, test_func_rows=test_func_rows)",
"_____no_output_____"
]
],
[
[
"## Shares Basic or Diluted looks strange",
"_____no_output_____"
]
],
[
[
"# List of SimFin-Id's to ignore in this test.\n# Use this list when a company's share-counts look strange,\n# but after manual inspection of the financial reports, the\n# share-counts are actually correct.\nignore_simfin_ids = \\\n [ 53151, 61372, 82753, 99062, 148380, 166965, 258731, 378110,\n 498391, 520475, 543421, 543877, 546550, 592461, 620342, 652016,\n 652547, 658464, 658467, 659836, 667668, 689587, 698616, 704562,\n 768206, 778777, 794492, 798464, 826389, 867483, 890308, 896087,\n 899362, 951586]",
"_____no_output_____"
],
[
"# Ensure they are all unique.\nignore_simfin_ids = np.unique(ignore_simfin_ids)\nignore_simfin_ids",
"_____no_output_____"
],
[
"def test_func_groups(df_grp):\n # Perform various tests on the share-counts.\n # Assume `df_grp` only contains data for a single company,\n # because this function should be called using:\n # df.groupby(SIMFIN_ID).apply(test_func_groups)\n \n # Ignore this company?\n if df_grp[SIMFIN_ID].iloc[0] in ignore_simfin_ids:\n return False\n \n # Helper-function for calculating absolute ratio between\n # a value and its average.\n abs_ratio = lambda df: (df / df.mean() - 1).abs()\n\n # Max absolute ratio allowed.\n max_abs_ratio = 2\n \n # Test whether Shares Basic is much different from its mean.\n test1 = (abs_ratio(df_grp[SHARES_BASIC]) > max_abs_ratio).any()\n\n # Test whether Shares Diluted is much different from its mean.\n test2 = (abs_ratio(df_grp[SHARES_DILUTED]) > max_abs_ratio).any()\n\n return (test1 | test2)",
"_____no_output_____"
],
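To make the threshold concrete with made-up numbers: since `max_abs_ratio = 2`, a company whose Shares Basic averages 10M across its filings is flagged as soon as any single filing deviates by more than a factor of 3 from that mean (e.g. a 35M entry gives `abs_ratio = |35/10 - 1| = 2.5 > 2`), while a move from 10M to 25M gives 1.5 and passes.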
[
"%%time\ntest_name = \"Shares Basic or Shares Diluted looks strange\"\ntest_datasets(datasets=datasets_fundamental(),\n test_name=test_name,\n test_func_groups=test_func_groups,\n group_index=SIMFIN_ID)",
"_____no_output_____"
],
[
"# Show the problematic groups for a dataset.\nif not running_pytest:\n # Get the dataset.\n df = data.get(dataset='income', variant='annual', market='us')\n\n # Get the problematic groups.\n df_problems = get_problem_groups(df=df,\n test_func_groups=test_func_groups,\n group_index=SIMFIN_ID)\n\n # Print the problematic groups.\n for _, df2 in df_problems.groupby(SIMFIN_ID):\n display(df2[[SIMFIN_ID, REPORT_DATE, SHARES_BASIC, SHARES_DILUTED]])",
"_____no_output_____"
]
],
[
[
"## Share-Prices are Zero or Negative",
"_____no_output_____"
]
],
[
[
"test_name = \"Share-prices are zero\"\ndef test_func_rows(df):\n return (df[OPEN] <= 0.0) & (df[LOW] <= 0.0) & \\\n (df[HIGH] <= 0.0) & (df[CLOSE] <= 0.0) & \\\n (df[VOLUME] <= 0.0)\n\ntest_datasets(datasets=['shareprices'],\n test_name=test_name, test_func_rows=test_func_rows)",
"_____no_output_____"
],
[
"# Show the problematic rows for a dataset.\ndf = data.get(dataset='shareprices', variant='daily', market='us')\nget_problem_rows(df=df, test_func_rows=test_func_rows)",
"_____no_output_____"
]
],
[
[
"## Revenue is negative",
"_____no_output_____"
]
],
[
[
"test_name = \"REVENUE < 0\"\ntest_func_rows = lambda df: (df[REVENUE] < 0)\n\n# It is possible that Revenue is negative for banks and\n# insurance companies, so we only test it for \"normal\" companies\n# in the 'income' dataset.\ntest_datasets(datasets=['income'],\n test_name=test_name, test_func_rows=test_func_rows)",
"_____no_output_____"
],
[
"# Show the problematic rows for a dataset.\ndf = data.get(dataset='income-insurance', variant='quarterly', market='us')\nget_problem_rows(df=df, test_func_rows=test_func_rows)",
"_____no_output_____"
]
],
[
[
"## Assets != Liabilities + Equity (Exact Comparison)\n\nThis only generates a warning, because sometimes there are tiny rounding errors.",
"_____no_output_____"
]
],
[
[
"test_name = \"Assets != Liabilities + Equity (Exact Comparison)\"\ntest_func_rows = lambda df: (df[TOTAL_ASSETS] != df[TOTAL_LIABILITIES] + df[TOTAL_EQUITY])\ntest_datasets(datasets=datasets_balance(),\n test_name=test_name, test_func_rows=test_func_rows,\n raise_exception=False)",
"_____no_output_____"
],
[
"# Get the problematic rows for a dataset.\ndf = data.get(dataset='balance', variant='quarterly', market='us')\ndf2 = get_problem_rows(df=df, test_func_rows=test_func_rows)\n\n# Only show the relevant columns.\ndf2[[TICKER, SIMFIN_ID, REPORT_DATE, TOTAL_ASSETS, TOTAL_LIABILITIES, TOTAL_EQUITY]]",
"_____no_output_____"
]
],
[
[
"## Assets != Liabilities + Equity (1% Tolerance)\n\nThe above test used exact comparison. We now allow for 1% error. This raises an exception.",
"_____no_output_____"
]
],
[
[
"def test_func_rows(df):\n x = df[TOTAL_ASSETS]\n y = df[TOTAL_LIABILITIES] + df[TOTAL_EQUITY]\n \n # Compare x and y within 1% tolerance. Note the resulting\n # boolean array is negated because we want to indicate\n # which rows are problematic so x and y are not close.\n return ~isclose(x=x, y=y, tolerance=0.01)",
"_____no_output_____"
],
[
"test_name = \"Assets != Liabilities + Equity (1% Tolerance)\"\ntest_datasets(datasets=datasets_balance(),\n test_name=test_name, test_func_rows=test_func_rows)",
"_____no_output_____"
],
[
"# Get the problematic rows for a dataset.\ndf = data.get(dataset='balance', variant='annual', market='us')\ndf2 = get_problem_rows(df=df, test_func_rows=test_func_rows)\n\n# Only show the relevant columns.\ndf2[[TICKER, SIMFIN_ID, REPORT_DATE, TOTAL_ASSETS, TOTAL_LIABILITIES, TOTAL_EQUITY]]",
"_____no_output_____"
]
],
[
[
"## Dates are invalid (Fundamentals)",
"_____no_output_____"
]
],
[
[
"# Lambda function for converting strings to dates. Format: YYYY-MM-DD\n# This will raise an exception if invalid dates are encountered.\ndate_parser = lambda column: pd.to_datetime(column, yearfirst=True, dayfirst=False)",
"_____no_output_____"
],
[
"# Test function for the entire DataFrame.\n# This cannot show which individual rows have problems.\ndef test_func(df):\n result1 = date_parser(df[REPORT_DATE])\n result2 = date_parser(df[PUBLISH_DATE])\n \n # We only get to this point if date_parser() does not\n # raise any exceptions, in which case we assume the\n # data did not have any problems.\n return False",
"_____no_output_____"
],
[
"test_name = \"REPORT_DATE or PUBLISH_DATE is invalid\"\ntest_datasets(datasets=datasets_fundamental(),\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
]
],
[
[
"## Dates are invalid (Share-Prices)",
"_____no_output_____"
]
],
[
[
"# Test function for the entire DataFrame.\n# This cannot show which individual rows have problems.\ndef test_func(df):\n result1 = date_parser(df[DATE])\n \n # We only get to this point if date_parser() does not\n # raise any exceptions, in which case we assume the\n # data did not have any problems.\n return False",
"_____no_output_____"
],
[
"test_name = \"DATE is invalid\"\ntest_datasets(datasets=datasets_shareprices(),\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
]
],
[
[
"## Duplicate Tickers",
"_____no_output_____"
]
],
[
[
"def get_duplicate_tickers(df):\n \"\"\"\n Return the rows of `df` where multiple SIMFIN_ID\n have the same TICKER.\n \n :param df: Pandas DataFrame with TICKER column.\n :return: Pandas DataFrame.\n \"\"\"\n\n # Remove duplicate rows of [TICKER, SIMFIN_ID] pairs.\n # For the 'companies' dataset this is not necessary,\n # but for e.g. the 'income' dataset we have many rows\n # for each [TICKER, SIMFIN_ID] pair because there are\n # many financial reports for each of these ID pairs.\n idx = df[[TICKER, SIMFIN_ID]].duplicated()\n df2 = df[~idx]\n\n # Now the DataFrame df2 only contains unique rows of\n # [TICKER, SIMFIN_ID] so we need to check if there are\n # any duplicate TICKER.\n\n # Index for rows where TICKER is a duplicate.\n idx1 = df2[TICKER].duplicated()\n\n # Index for rows where TICKER is not NaN.\n # These would otherwise show up as duplicates.\n idx2 = df2[TICKER].notna()\n\n # Index for rows where TICKER is a duplicate but not NaN.\n idx = idx1 & idx2\n\n # Get those rows from the DataFrame.\n df2 = df2[idx]\n\n return df2",
"_____no_output_____"
],
[
"# Test-function whether a DataFrame has duplicate tickers.\ntest_func = lambda df: (len(get_duplicate_tickers(df=df)) > 0)",
"_____no_output_____"
],
[
"test_name = \"Duplicate Tickers\"\ntest_datasets(datasets=datasets_tickers,\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Show duplicate tickers in the 'companies' dataset.\ndf = data.get(dataset='companies', market='us')\nget_duplicate_tickers(df=df)",
"_____no_output_____"
],
[
"# Show duplicate tickers in the 'income-annual' dataset.\ndf = data.get(dataset='income', variant='annual', market='us')\nget_duplicate_tickers(df=df)",
"_____no_output_____"
]
],
[
[
"## Missing Tickers",
"_____no_output_____"
]
],
[
[
"# Test-function whether a DataFrame has missing tickers.\ntest_func = lambda df: (len(get_missing_data_rows(df=df, column=TICKER)) > 0)",
"_____no_output_____"
],
[
"test_name = \"Missing Tickers\"\ntest_datasets(datasets=datasets_tickers,\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Show missing tickers in the 'companies' dataset.\ndf = data.get(dataset='companies', market='us')\nget_missing_data_rows(df=df, column=TICKER)",
"_____no_output_____"
],
[
"# Show missing tickers in the 'income-annual' dataset.\ndf = data.get(dataset='income', variant='annual', market='us')\nget_missing_data_rows(df=df, column=TICKER)",
"_____no_output_____"
],
[
"# Show missing tickers in the 'shareprices-daily' dataset.\ndf = data.get(dataset='shareprices', variant='daily', market='us')\nget_missing_data_rows(df=df, column=TICKER)",
"_____no_output_____"
]
],
[
[
"## Missing Company Names",
"_____no_output_____"
]
],
[
[
"# Test-function whether a DataFrame has missing company names.\ntest_func = lambda df: (len(get_missing_data_rows(df=df, column=COMPANY_NAME)) > 0)",
"_____no_output_____"
],
[
"test_name = \"Missing Company Name\"\ntest_datasets(datasets=['companies'],\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Show missing company names in the 'companies' dataset.\ndf = data.get(dataset='companies', market='us')\nget_missing_data_rows(df=df, column=COMPANY_NAME)",
"_____no_output_____"
]
],
[
[
"## Missing Annual Reports",
"_____no_output_____"
]
],
[
[
"def missing_annual_reports(df):\n \"\"\"\n Return a list of the SIMFIN_ID's from the given DataFrame\n that have missing annual reports.\n \n :param df:\n Pandas DataFrame with a dataset e.g. 'income-annual'.\n It must have columns SIMFIN_ID and FISCAL_YEAR.\n\n :return:\n List of integers with SIMFIN_ID's that have missing reports.\n \"\"\"\n \n # The idea is to test for each SIMFIN_ID individually,\n # whether the DataFrame has all the expected reports for\n # consecutive Fiscal Years between the min/max years.\n \n # Helper-function for processing a DataFrame for one SIMFIN_ID.\n def _missing(df):\n # Get the Fiscal Years from the DataFrame.\n fiscal_years = df[FISCAL_YEAR]\n\n # How many years between min and max fiscal years.\n num_years = fiscal_years.max() - fiscal_years.min() + 1\n\n # We expect the Series to have the same length, otherwise\n # some reports must be missing between min and max years.\n missing = (num_years != len(fiscal_years))\n\n return missing\n \n # Process all companies individually and get a Pandas\n # DataFrame with a boolean for each SIMFIN_ID whether\n # it has some missing Fiscal Years.\n idx = df.groupby(SIMFIN_ID).apply(_missing)\n\n # List of the SIMFIN_ID's that have missing reports.\n simfin_ids = list(idx[idx].index.values)\n\n return simfin_ids",
"_____no_output_____"
],
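For example (made-up years): if a company's earliest annual report is for fiscal year 2015 and its latest is for 2019, then `num_years = 2019 - 2015 + 1 = 5`; if only four rows exist for that SIMFIN_ID, one year in between must be missing and the company is returned.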
[
"test_name = \"Missing annual reports\"\ntest_func = lambda df: len(missing_annual_reports(df=df)) > 0\ntest_datasets(datasets=datasets_fundamental(),\n variants=['annual'],\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Get list of SIMFIN_ID's that have missing reports for a dataset.\nif not running_pytest:\n df = data.get(dataset='income', variant='annual', market='de')\n display(missing_annual_reports(df=df))",
"_____no_output_____"
],
[
"def sort_annual_reports(df, simfin_id):\n \"\"\"\n Get the data for a given SIMFIN_ID and set the index to be\n the sorted Fiscal Year so it is easier to see which are missing.\n \"\"\"\n return df.set_index([SIMFIN_ID, FISCAL_YEAR]).sort_index().loc[simfin_id]",
"_____no_output_____"
],
[
"# Show all the reports for a given SIMFIN_ID sorted by\n# Fiscal Year so it is easier to see which are missing.\nif not running_pytest:\n display(sort_annual_reports(df=df, simfin_id=936426))",
"_____no_output_____"
]
],
[
[
"## Missing Quarterly Reports",
"_____no_output_____"
]
],
[
[
"def missing_quarterly_reports(df):\n \"\"\"\n Return a list of the SIMFIN_ID's from the given DataFrame\n that have missing quarterly or ttm reports.\n \n :param df:\n Pandas DataFrame with a dataset e.g. 'income-annual'.\n It must have columns SIMFIN_ID, FISCAL_YEAR, FISCAL_PERIOD.\n\n :return:\n List of integers with SIMFIN_ID's that have missing reports.\n \"\"\"\n \n # The idea is to test for each SIMFIN_ID individually,\n # whether the DataFrame has all the expected reports for\n # consecutive Fiscal Years and Periods between the min/max.\n \n # Helper-function for processing a DataFrame for one SIMFIN_ID.\n def _missing(df):\n # Get the Fiscal Years and Periods from the DataFrame.\n fiscal_years_periods = df[[FISCAL_YEAR, FISCAL_PERIOD]]\n\n # The first Fiscal Year and Period.\n min_year = fiscal_years_periods[FISCAL_YEAR].min()\n min_idx = (fiscal_years_periods[FISCAL_YEAR] == min_year)\n min_period = fiscal_years_periods[min_idx][FISCAL_PERIOD].min()\n\n # The last Fiscal Year and Period.\n max_year = fiscal_years_periods[FISCAL_YEAR].max()\n max_idx = (fiscal_years_periods[FISCAL_YEAR] == max_year)\n max_period = fiscal_years_periods[max_idx][FISCAL_PERIOD].max()\n\n # How many years between min and max fiscal years.\n num_years = max_year - min_year + 1\n\n # Total number of Fiscal Periods between first and\n # last Fiscal Years - if all Fiscal Periods were included.\n num_periods = num_years * 4\n\n # Used to map from Fiscal Period strings to ints.\n # This is safer and easier to understand than\n # e.g. def map_period(x): int(x[1])\n map_period = \\\n {\n 'Q1': 1,\n 'Q2': 2,\n 'Q3': 3,\n 'Q4': 4\n }\n\n # Number of Fiscal Periods missing in the first year.\n adj_min_period = map_period[min_period] - 1\n\n # Number of Fiscal Periods missing in the last year.\n adj_max_period = 4 - map_period[max_period]\n\n # Adjust the number of Fiscal Periods between the min/max\n # Fiscal Years and Periods by subtracting those periods\n # missing in the first and last years.\n expected_periods = num_periods - adj_min_period - adj_max_period\n\n # If the expected number of Fiscal Periods between the\n # min and max dates, is different from the actual number\n # of Fiscal Periods in the DataFrame, then some are missing.\n missing = (expected_periods != len(fiscal_years_periods))\n\n return missing\n\n # Process all companies individually and get a Pandas\n # DataFrame with a boolean for each SIMFIN_ID whether\n # it has some missing Fiscal Years.\n idx = df.groupby(SIMFIN_ID).apply(_missing)\n\n # List of the SIMFIN_ID's that have missing reports.\n simfin_ids = list(idx[idx].index.values)\n\n return simfin_ids",
"_____no_output_____"
],
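For example (made-up periods): a company whose reports run from 2017-Q3 to 2019-Q2 spans `num_years = 3`, so `num_periods = 12`; the first year is missing Q1-Q2 (`adj_min_period = 2`) and the last year is missing Q3-Q4 (`adj_max_period = 2`), giving `expected_periods = 12 - 2 - 2 = 8`. If fewer than 8 rows exist for that SIMFIN_ID, at least one quarter in between is missing.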
[
"%%time\ntest_name = \"Missing quarterly reports\"\ntest_func = lambda df: len(missing_quarterly_reports(df=df)) > 0\ntest_datasets(datasets=datasets_fundamental(),\n variants=['quarterly'],\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Get list of SIMFIN_ID's that have missing reports for a dataset.\nif not running_pytest:\n df = data.get(dataset='income', variant='quarterly', market='us')\n display(missing_quarterly_reports(df=df))",
"_____no_output_____"
],
[
"def sort_quarterly_reports(df, simfin_id):\n \"\"\"\n Get the data for a given SIMFIN_ID and set the index to be\n the sorted Fiscal Year and Period so it is easier to see\n which ones are missing.\n \"\"\"\n return df.set_index([SIMFIN_ID, FISCAL_YEAR, FISCAL_PERIOD]).sort_index().loc[simfin_id]",
"_____no_output_____"
],
[
"# Show all the reports for a given SIMFIN_ID sorted by\n# Fiscal Year and Period so it is easier to see which are missing.\nif not running_pytest:\n display(sort_quarterly_reports(df=df, simfin_id=139560))",
"_____no_output_____"
]
],
[
[
"## Missing TTM Reports\n\nTrailing-Twelve-Months (TTM) data is also quarterly so we can use the same helper-functions from above.",
"_____no_output_____"
]
],
[
[
"test_name = \"Missing ttm reports\"\ntest_func = lambda df: len(missing_quarterly_reports(df=df)) > 0\ntest_datasets(datasets=datasets_fundamental(),\n variants=['ttm'],\n test_name=test_name, test_func=test_func)",
"_____no_output_____"
],
[
"# Get list of SIMFIN_ID's that have missing reports for a dataset.\nif not running_pytest:\n df = data.get(dataset='income', variant='ttm', market='us')\n display(missing_quarterly_reports(df=df))",
"_____no_output_____"
],
[
"# Show all the reports for a given SIMFIN_ID sorted by\n# Fiscal Year and Period so it is easier to see which are missing.\nif not running_pytest:\n display(sort_quarterly_reports(df=df, simfin_id=89750))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0bcfd25b723a95799a0c968079caf73b115da3c | 36,642 | ipynb | Jupyter Notebook | testsquaddata.ipynb | Xirider/tdnc | 4e2b18dd3dd9e160fef42e89506aaf3b6e15fe12 | [
"MIT"
] | null | null | null | testsquaddata.ipynb | Xirider/tdnc | 4e2b18dd3dd9e160fef42e89506aaf3b6e15fe12 | [
"MIT"
] | null | null | null | testsquaddata.ipynb | Xirider/tdnc | 4e2b18dd3dd9e160fef42e89506aaf3b6e15fe12 | [
"MIT"
] | null | null | null | 27.992361 | 1,108 | 0.508078 | [
[
[
"# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\n\n\nimport argparse\nimport logging\nimport os\nfrom pathlib import Path\nimport random\nfrom io import open\nimport pickle\nimport math\n\nimport numpy as np\nimport requests\n",
"_____no_output_____"
],
[
"logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',\n datefmt='%m/%d/%Y %H:%M:%S',\n level=logging.INFO)\nlogger = logging.getLogger(__name__)\n",
"_____no_output_____"
],
[
"_CURPATH = Path.cwd() \n_TMPDIR = _CURPATH / \"squad_data\"\n_TRAINDIR = _TMPDIR / \"squad_train\"\n_TESTFILE = \"dev-v2.0.json\"\n_DATADIR = _CURPATH / \"squad_data\"\n_TRAINFILE = \"train-v2.0.json\"\n_URL = \"https://rajpurkar.github.io/SQuAD-explorer/dataset/\" + _TRAINFILE\n_MODELS = _CURPATH / \"models\"",
"_____no_output_____"
],
[
"def maybe_download(directory, filename, uri):\n \n filepath = os.path.join(directory, filename)\n if not os.path.exists(directory):\n logger.info(f\"Creating new dir: {directory}\")\n os.makedirs(directory)\n if not os.path.exists(filepath):\n logger.info(\"Downloading und unpacking file, as file does not exist yet\")\n r = requests.get(uri, allow_redirects=True)\n open(filepath, \"wb\").write(r.content)\n\n return filepath",
"_____no_output_____"
],
[
"filename = maybe_download(_TMPDIR, _TRAINFILE, _URL)",
"_____no_output_____"
],
[
"import json\nfor files in os.listdir(_TMPDIR):\n \n with open(_TMPDIR/files, \"r\", encoding=\"utf-8\") as json_file:\n data_dict = json.load(json_file)\n data_dict = data_dict[\"data\"]\n number_articles = len(data_dict)\n total = 0\n hundreds = 0\n twohundredf = 0\n rest = 0\n cont100 = 0\n cont150 =0\n contover = 0\n \n for article in range(number_articles):\n cur_number_context = len(data_dict[article][\"paragraphs\"])\n print(f\"This is article number {article}\")\n print(cur_number_context)\n if cur_number_context < 70:\n cont100 += 1\n elif cur_number_context < 130:\n cont150 += 1\n else:\n contover += cur_number_context\n \n \n \n for context in range(cur_number_context -1, -1, -1 ):\n num = len(data_dict[article][\"paragraphs\"][context][\"context\"].split())\n #print(num)\n if num < 50:\n hundreds += 1\n elif num < 100:\n twohundredf += 1\n else:\n rest += 1\n print(f\"Hundreds is {hundreds}\")\n print(f\"twohundreds is {twohundredf}\")\n print(f\"Rest is {rest}\")\n print(f\"cont 100 {cont100}\")\n print(f\"cont 150 {cont150}\")\n print(f\"cont over {contover}\")\n\n# data contains all the data\n# title contains the title, paragrapsh contains qas, context (the paragraph)\n# qas contains a list of dicts with the questions and the answers\n\n\n# lets put all the stuff inside the get_item function, so that we get new data each epoch without rebuilding\n# rebuild should just contain saving the json file or we dont even need rebuild\n",
"This is article number 0\n66\nThis is article number 1\n82\nThis is article number 2\n72\nThis is article number 3\n60\nThis is article number 4\n32\nThis is article number 5\n43\nThis is article number 6\n77\nThis is article number 7\n148\nThis is article number 8\n62\nThis is article number 9\n52\nThis is article number 10\n79\nThis is article number 11\n149\nThis is article number 12\n127\nThis is article number 13\n75\nThis is article number 14\n74\nThis is article number 15\n25\nThis is article number 16\n25\nThis is article number 17\n39\nThis is article number 18\n36\nThis is article number 19\n77\nThis is article number 20\n26\nThis is article number 21\n21\nThis is article number 22\n23\nThis is article number 23\n46\nThis is article number 24\n45\nThis is article number 25\n21\nThis is article number 26\n57\nThis is article number 27\n29\nThis is article number 28\n38\nThis is article number 29\n38\nThis is article number 30\n25\nThis is article number 31\n31\nThis is article number 32\n86\nThis is article number 33\n81\nThis is article number 34\n29\nThis is article number 35\n23\nThis is article number 36\n33\nThis is article number 37\n28\nThis is article number 38\n35\nThis is article number 39\n82\nThis is article number 40\n34\nThis is article number 41\n22\nThis is article number 42\n34\nThis is article number 43\n74\nThis is article number 44\n90\nThis is article number 45\n21\nThis is article number 46\n37\nThis is article number 47\n82\nThis is article number 48\n34\nThis is article number 49\n60\nThis is article number 50\n70\nThis is article number 51\n24\nThis is article number 52\n34\nThis is article number 53\n22\nThis is article number 54\n25\nThis is article number 55\n80\nThis is article number 56\n79\nThis is article number 57\n42\nThis is article number 58\n33\nThis is article number 59\n32\nThis is article number 60\n53\nThis is article number 61\n63\nThis is article number 62\n27\nThis is article number 63\n28\nThis is article number 64\n29\nThis is article number 65\n36\nThis is article number 66\n66\nThis is article number 67\n55\nThis is article number 68\n44\nThis is article number 69\n52\nThis is article number 70\n46\nThis is article number 71\n57\nThis is article number 72\n68\nThis is article number 73\n49\nThis is article number 74\n95\nThis is article number 75\n57\nThis is article number 76\n67\nThis is article number 77\n51\nThis is article number 78\n63\nThis is article number 79\n23\nThis is article number 80\n87\nThis is article number 81\n56\nThis is article number 82\n50\nThis is article number 83\n50\nThis is article number 84\n64\nThis is article number 85\n59\nThis is article number 86\n38\nThis is article number 87\n25\nThis is article number 88\n46\nThis is article number 89\n70\nThis is article number 90\n71\nThis is article number 91\n93\nThis is article number 92\n23\nThis is article number 93\n80\nThis is article number 94\n61\nThis is article number 95\n53\nThis is article number 96\n52\nThis is article number 97\n56\nThis is article number 98\n47\nThis is article number 99\n18\nThis is article number 100\n62\nThis is article number 101\n44\nThis is article number 102\n12\nThis is article number 103\n66\nThis is article number 104\n14\nThis is article number 105\n10\nThis is article number 106\n12\nThis is article number 107\n48\nThis is article number 108\n31\nThis is article number 109\n21\nThis is article number 110\n23\nThis is article number 111\n12\nThis is article number 112\n10\nThis is article number 113\n25\nThis is 
article number 114\n25\nThis is article number 115\n17\nThis is article number 116\n34\nThis is article number 117\n12\nThis is article number 118\n32\nThis is article number 119\n25\nThis is article number 120\n21\nThis is article number 121\n44\nThis is article number 122\n24\nThis is article number 123\n23\nThis is article number 124\n34\nThis is article number 125\n12\nThis is article number 126\n20\nThis is article number 127\n16\nThis is article number 128\n21\nThis is article number 129\n23\nThis is article number 130\n26\nThis is article number 131\n42\nThis is article number 132\n25\nThis is article number 133\n13\nThis is article number 134\n20\nThis is article number 135\n32\nThis is article number 136\n26\nThis is article number 137\n18\nThis is article number 138\n23\nThis is article number 139\n50\nThis is article number 140\n45\nThis is article number 141\n41\nThis is article number 142\n64\nThis is article number 143\n76\nThis is article number 144\n49\nThis is article number 145\n70\nThis is article number 146\n22\nThis is article number 147\n51\nThis is article number 148\n16\nThis is article number 149\n34\nThis is article number 150\n58\nThis is article number 151\n78\nThis is article number 152\n44\nThis is article number 153\n40\nThis is article number 154\n41\nThis is article number 155\n13\nThis is article number 156\n22\nThis is article number 157\n45\nThis is article number 158\n53\nThis is article number 159\n45\nThis is article number 160\n47\nThis is article number 161\n83\nThis is article number 162\n44\nThis is article number 163\n36\nThis is article number 164\n52\nThis is article number 165\n32\nThis is article number 166\n44\nThis is article number 167\n50\nThis is article number 168\n44\nThis is article number 169\n35\nThis is article number 170\n36\nThis is article number 171\n42\nThis is article number 172\n37\nThis is article number 173\n99\nThis is article number 174\n94\nThis is article number 175\n21\nThis is article number 176\n70\nThis is article number 177\n31\nThis is article number 178\n51\nThis is article number 179\n28\nThis is article number 180\n42\nThis is article number 181\n34\nThis is article number 182\n21\nThis is article number 183\n63\nThis is article number 184\n75\nThis is article number 185\n39\nThis is article number 186\n95\nThis is article number 187\n25\nThis is article number 188\n24\nThis is article number 189\n31\nThis is article number 190\n21\nThis is article number 191\n61\nThis is article number 192\n26\nThis is article number 193\n87\nThis is article number 194\n89\nThis is article number 195\n32\nThis is article number 196\n26\nThis is article number 197\n37\nThis is article number 198\n40\nThis is article number 199\n21\nThis is article number 200\n35\nThis is article number 201\n59\nThis is article number 202\n38\nThis is article number 203\n29\nThis is article number 204\n26\nThis is article number 205\n25\nThis is article number 206\n60\nThis is article number 207\n51\nThis is article number 208\n76\nThis is article number 209\n27\nThis is article number 210\n73\nThis is article number 211\n36\nThis is article number 212\n30\nThis is article number 213\n26\nThis is article number 214\n49\nThis is article number 215\n26\nThis is article number 216\n37\nThis is article number 217\n46\nThis is article number 218\n40\nThis is article number 219\n35\nThis is article number 220\n40\nThis is article number 221\n39\nThis is article number 222\n21\nThis is article number 223\n22\nThis is article number 224\n30\nThis is 
article number 225\n40\nThis is article number 226\n25\nThis is article number 227\n55\nThis is article number 228\n42\nThis is article number 229\n77\nThis is article number 230\n22\nThis is article number 231\n34\nThis is article number 232\n56\nThis is article number 233\n62\nThis is article number 234\n24\nThis is article number 235\n23\nThis is article number 236\n35\nThis is article number 237\n56\nThis is article number 238\n47\nThis is article number 239\n55\nThis is article number 240\n27\nThis is article number 241\n44\nThis is article number 242\n37\nThis is article number 243\n67\nThis is article number 244\n21\nThis is article number 245\n23\nThis is article number 246\n27\nThis is article number 247\n31\nThis is article number 248\n48\nThis is article number 249\n32\nThis is article number 250\n66\nThis is article number 251\n27\nThis is article number 252\n52\nThis is article number 253\n23\nThis is article number 254\n61\nThis is article number 255\n61\nThis is article number 256\n24\nThis is article number 257\n59\nThis is article number 258\n67\nThis is article number 259\n32\nThis is article number 260\n23\nThis is article number 261\n32\nThis is article number 262\n85\nThis is article number 263\n23\nThis is article number 264\n25\nThis is article number 265\n27\nThis is article number 266\n23\nThis is article number 267\n44\nThis is article number 268\n43\nThis is article number 269\n30\nThis is article number 270\n50\nThis is article number 271\n66\nThis is article number 272\n26\nThis is article number 273\n33\nThis is article number 274\n79\nThis is article number 275\n77\nThis is article number 276\n24\nThis is article number 277\n25\nThis is article number 278\n24\nThis is article number 279\n39\nThis is article number 280\n21\nThis is article number 281\n44\nThis is article number 282\n34\nThis is article number 283\n55\nThis is article number 284\n78\nThis is article number 285\n95\nThis is article number 286\n25\nThis is article number 287\n29\nThis is article number 288\n28\nThis is article number 289\n31\nThis is article number 290\n63\nThis is article number 291\n44\nThis is article number 292\n38\nThis is article number 293\n30\nThis is article number 294\n23\nThis is article number 295\n25\nThis is article number 296\n50\nThis is article number 297\n23\nThis is article number 298\n38\nThis is article number 299\n35\nThis is article number 300\n24\nThis is article number 301\n31\nThis is article number 302\n41\nThis is article number 303\n72\nThis is article number 304\n48\nThis is article number 305\n43\nThis is article number 306\n43\nThis is article number 307\n60\nThis is article number 308\n65\nThis is article number 309\n94\nThis is article number 310\n52\nThis is article number 311\n26\nThis is article number 312\n26\nThis is article number 313\n50\nThis is article number 314\n29\nThis is article number 315\n24\nThis is article number 316\n26\nThis is article number 317\n24\nThis is article number 318\n40\nThis is article number 319\n24\nThis is article number 320\n27\nThis is article number 321\n38\nThis is article number 322\n22\nThis is article number 323\n22\nThis is article number 324\n25\nThis is article number 325\n35\nThis is article number 326\n26\nThis is article number 327\n33\nThis is article number 328\n27\nThis is article number 329\n69\nThis is article number 330\n24\nThis is article number 331\n23\nThis is article number 332\n50\nThis is article number 333\n21\nThis is article number 334\n35\nThis is article number 335\n45\nThis is 
article number 336\n59\nThis is article number 337\n52\nThis is article number 338\n24\nThis is article number 339\n36\nThis is article number 340\n25\nThis is article number 341\n29\nThis is article number 342\n28\nThis is article number 343\n35\nThis is article number 344\n50\nThis is article number 345\n22\nThis is article number 346\n21\nThis is article number 347\n61\nThis is article number 348\n32\nThis is article number 349\n76\nThis is article number 350\n59\nThis is article number 351\n49\nThis is article number 352\n52\nThis is article number 353\n36\nThis is article number 354\n34\nThis is article number 355\n23\nThis is article number 356\n48\nThis is article number 357\n92\nThis is article number 358\n31\nThis is article number 359\n31\nThis is article number 360\n65\nThis is article number 361\n43\nThis is article number 362\n21\nThis is article number 363\n25\nThis is article number 364\n23\nThis is article number 365\n88\nThis is article number 366\n32\nThis is article number 367\n74\nThis is article number 368\n24\nThis is article number 369\n53\nThis is article number 370\n22\nThis is article number 371\n35\nThis is article number 372\n16\nThis is article number 373\n33\nThis is article number 374\n32\nThis is article number 375\n52\nThis is article number 376\n57\nThis is article number 377\n51\nThis is article number 378\n35\nThis is article number 379\n44\nThis is article number 380\n40\nThis is article number 381\n82\nThis is article number 382\n71\nThis is article number 383\n49\nThis is article number 384\n27\nThis is article number 385\n60\nThis is article number 386\n34\nThis is article number 387\n28\nThis is article number 388\n21\nThis is article number 389\n23\nThis is article number 390\n47\nThis is article number 391\n26\nThis is article number 392\n48\nThis is article number 393\n27\nThis is article number 394\n29\nThis is article number 395\n25\nThis is article number 396\n39\nThis is article number 397\n24\nThis is article number 398\n32\nThis is article number 399\n61\nThis is article number 400\n22\nThis is article number 401\n38\nThis is article number 402\n31\nThis is article number 403\n57\nThis is article number 404\n43\nThis is article number 405\n21\nThis is article number 406\n62\nThis is article number 407\n81\nThis is article number 408\n56\nThis is article number 409\n48\nThis is article number 410\n64\nThis is article number 411\n64\nThis is article number 412\n27\nThis is article number 413\n81\nThis is article number 414\n34\nThis is article number 415\n41\nThis is article number 416\n75\nThis is article number 417\n23\nThis is article number 418\n29\nThis is article number 419\n31\nThis is article number 420\n25\nThis is article number 421\n46\nThis is article number 422\n90\nThis is article number 423\n57\nThis is article number 424\n79\nThis is article number 425\n98\nThis is article number 426\n30\nThis is article number 427\n26\nThis is article number 428\n36\nThis is article number 429\n26\nThis is article number 430\n24\nThis is article number 431\n34\nThis is article number 432\n55\nThis is article number 433\n45\nThis is article number 434\n44\nThis is article number 435\n26\nThis is article number 436\n61\nThis is article number 437\n30\nThis is article number 438\n36\nThis is article number 439\n58\n"
],
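Building on the structure noted in the comments above (data → paragraphs → context/qas), here is a small sketch, not part of the original notebook, of flattening the loaded file into (context, question, answer) triples; it assumes the standard SQuAD v2 JSON keys, where unanswerable questions carry `is_impossible`.

```python
# Sketch: collect (context, question, answer) triples from the last loaded file.
triples = []
for article in data_dict:
    for para in article["paragraphs"]:
        context = para["context"]
        for qa in para["qas"]:
            if qa.get("is_impossible"):
                continue  # skip unanswerable questions in SQuAD v2
            for ans in qa["answers"]:
                triples.append((context, qa["question"], ans["text"]))
print(len(triples))
```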
[
"a = data_dict[0]",
"_____no_output_____"
],
[
"len(a)",
"_____no_output_____"
],
[
"a[\"paragraphs\"][10]",
"_____no_output_____"
],
[
"a[0][\"paragraphs\"][0][\"context\"]",
"_____no_output_____"
],
[
"len(a[0][\"paragraphs\"])",
"_____no_output_____"
],
[
"for a in []:\n print(a)",
"_____no_output_____"
],
[
"a = [-1] * 5",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a.append([\"CLS\"])",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"[\"[MASK]\"]* 5",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bcff2e2b2ca64faa2fe522801c17f9fe37c59e | 88,070 | ipynb | Jupyter Notebook | CNN-Keras-Tensorflow/q1-ck2840.ipynb | Chandra-S-Narain-Kappera/Image-Classification | 0718ed1521818baf7951ff490cf5f6424175e772 | [
"MIT"
] | null | null | null | CNN-Keras-Tensorflow/q1-ck2840.ipynb | Chandra-S-Narain-Kappera/Image-Classification | 0718ed1521818baf7951ff490cf5f6424175e772 | [
"MIT"
] | null | null | null | CNN-Keras-Tensorflow/q1-ck2840.ipynb | Chandra-S-Narain-Kappera/Image-Classification | 0718ed1521818baf7951ff490cf5f6424175e772 | [
"MIT"
] | null | null | null | 65.140533 | 46,584 | 0.750539 | [
[
[
"### Importing required stuff",
"_____no_output_____"
]
],
[
[
"import time\nimport math\nimport random\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\nfrom datetime import timedelta\n\nimport scipy.misc\nimport glob\nimport sys\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Helper files to load data",
"_____no_output_____"
]
],
[
[
"# Helper functions, DO NOT modify this\n\ndef get_img_array(path):\n \"\"\"\n Given path of image, returns it's numpy array\n \"\"\"\n return scipy.misc.imread(path)\n\ndef get_files(folder):\n \"\"\"\n Given path to folder, returns list of files in it\n \"\"\"\n filenames = [file for file in glob.glob(folder+'*/*')]\n filenames.sort()\n return filenames\n\ndef get_label(filepath, label2id):\n \"\"\"\n Files are assumed to be labeled as: /path/to/file/999_frog.png\n Returns label for a filepath\n \"\"\"\n tokens = filepath.split('/')\n label = tokens[-1].split('_')[1][:-4]\n if label in label2id:\n return label2id[label]\n else:\n sys.exit(\"Invalid label: \" + label)",
"_____no_output_____"
],
[
"# Functions to load data, DO NOT change these\n\ndef get_labels(folder, label2id):\n \"\"\"\n Returns vector of labels extracted from filenames of all files in folder\n :param folder: path to data folder\n :param label2id: mapping of text labels to numeric ids. (Eg: automobile -> 0)\n \"\"\"\n files = get_files(folder)\n y = []\n for f in files:\n y.append(get_label(f,label2id))\n return np.array(y)\n\ndef one_hot(y, num_classes=10):\n \"\"\"\n Converts each label index in y to vector with one_hot encoding\n \"\"\"\n y_one_hot = np.zeros((num_classes, y.shape[0]))\n y_one_hot[y, range(y.shape[0])] = 1\n return y_one_hot\n\ndef get_label_mapping(label_file):\n \"\"\"\n Returns mappings of label to index and index to label\n The input file has list of labels, each on a separate line.\n \"\"\"\n with open(label_file, 'r') as f:\n id2label = f.readlines()\n id2label = [l.strip() for l in id2label]\n label2id = {}\n count = 0\n for label in id2label:\n label2id[label] = count\n count += 1\n return id2label, label2id\n\ndef get_images(folder):\n \"\"\"\n returns numpy array of all samples in folder\n each column is a sample resized to 30x30 and flattened\n \"\"\"\n files = get_files(folder)\n images = []\n count = 0\n \n for f in files:\n count += 1\n if count % 10000 == 0:\n print(\"Loaded {}/{}\".format(count,len(files)))\n img_arr = get_img_array(f)\n img_arr = img_arr.flatten() / 255.0\n images.append(img_arr)\n X = np.column_stack(images)\n\n return X\n\ndef get_train_data(data_root_path):\n \"\"\"\n Return X and y\n \"\"\"\n train_data_path = data_root_path + 'train'\n id2label, label2id = get_label_mapping(data_root_path+'labels.txt')\n print(label2id)\n X = get_images(train_data_path)\n y = get_labels(train_data_path, label2id)\n return X, y\n\ndef save_predictions(filename, y):\n \"\"\"\n Dumps y into .npy file\n \"\"\"\n np.save(filename, y)",
"_____no_output_____"
]
],
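As a quick sanity check of the helpers above (made-up labels, not part of the original notebook): `one_hot` returns one column per sample, so the result is transposed relative to the usual (samples, classes) layout.

```python
# Made-up labels: 3 samples, 10 classes.
y = np.array([0, 3, 1])
y_oh = one_hot(y, num_classes=10)
print(y_oh.shape)   # (10, 3) - one column per sample
print(y_oh[:, 0])   # column 0 has a 1 at index 0
```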
[
[
"### Load test data from using the helper code from HW1",
"_____no_output_____"
]
],
[
[
"# Load the data\ndata_root_path = 'cifar10-hw2/'\nX_train, Y_train = get_train_data(data_root_path) # this may take a few minutes\nX_test_format = get_images(data_root_path + 'test')\nX_test_format = X_test_format.T\n#print('Data loading done')",
"{'airplane': 0, 'automobile': 1, 'bird': 2, 'cat': 3, 'deer': 4, 'dog': 5, 'frog': 6, 'horse': 7, 'ship': 8, 'truck': 9}\nLoaded 10000/50000\nLoaded 20000/50000\nLoaded 30000/50000\nLoaded 40000/50000\nLoaded 50000/50000\nLoaded 10000/10000\n"
],
[
"X_train = X_train.T\nY_train = Y_train.T",
"_____no_output_____"
]
],
[
[
"### Load all the data",
"_____no_output_____"
]
],
[
[
"def unpickle(file):\n import pickle\n with open(file, 'rb') as fo:\n data_dict = pickle.load(fo, encoding='bytes')\n return data_dict\n\npath = 'cifar-10-batches-py'\nfile = []\nfile.append('data_batch_1')\nfile.append('data_batch_2')\nfile.append('data_batch_3')\nfile.append('data_batch_4')\nfile.append('data_batch_5')\nfile.append('test_batch')\n\nX_train = None\nY_train = None\nX_test = None\nY_test = None\n\nfor i in range(6):\n fname = path+'/'+file[i]\n data_dict = unpickle(fname)\n \n _X = np.array(data_dict[b'data'], dtype=float) / 255.0\n _X = _X.reshape([-1, 3, 32, 32])\n _X = _X.transpose([0, 2, 3, 1])\n _X = _X.reshape(-1, 32*32*3)\n _Y = data_dict[b'labels']\n\n if X_train is None:\n X_train = _X\n Y_train = _Y\n elif i != 5:\n X_train = np.concatenate((X_train, _X), axis=0)\n Y_train = np.concatenate((Y_train, _Y), axis=0)\n else:\n X_test = _X\n Y_test = np.array(_Y)\n print(data_dict[b'batch_label'])\n\n# confirming the output\nprint(X_train.shape, Y_train.shape, X_test.shape, Y_test.shape)\n \n ",
"b'training batch 1 of 5'\nb'training batch 2 of 5'\nb'training batch 3 of 5'\nb'training batch 4 of 5'\nb'training batch 5 of 5'\nb'testing batch 1 of 1'\n(50000, 3072) (50000,) (10000, 3072) (10000,)\n"
]
],
[
[
"### Defining Hyperparameters",
"_____no_output_____"
]
],
[
[
"# Convolutional Layer 1.\nfilter_size1 = 3 \nnum_filters1 = 64\n\n# Convolutional Layer 2.\nfilter_size2 = 3\nnum_filters2 = 64\n\n# Fully-connected layer.\nfc_1 = 256 # Number of neurons in fully-connected layer.\nfc_2 = 128 # Number of neurons in fc layer\n\n# Number of color channels for the images: 1 channel for gray-scale.\nnum_channels = 3\n\n# image dimensions (only squares for now)\nimg_size = 32\n\n# Size of image when flattened to a single dimension\nimg_size_flat = img_size * img_size * num_channels\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# class info\nclasses = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']\nnum_classes = len(classes)\n\n# batch size\nbatch_size = 64\n\n# validation split\nvalidation_size = .16\n\n# learning rate \nlearning_rate = 0.001\n\n# beta\nbeta = 0.01\n\n# log directory\nimport os\nlog_dir = os.getcwd()\n\n# how long to wait after validation loss stops improving before terminating training\nearly_stopping = None # use None if you don't want to implement early stoping",
"_____no_output_____"
]
],
[
[
"### Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), and writing the true and predicted classes below each image.",
"_____no_output_____"
]
],
[
[
"def plot_images(images, cls_true, cls_pred=None):\n \n if len(images) == 0:\n print(\"no images to show\")\n return \n else:\n random_indices = random.sample(range(len(images)), min(len(images), 9))\n \n print(images.shape) \n images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_size, img_size, num_channels))\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(classes[cls_true[i]])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()",
"_____no_output_____"
]
],
[
[
"### Plot a few images to see if data is correct",
"_____no_output_____"
]
],
[
[
"# Plot the images and labels using our helper-function above.\nplot_images(X_train, Y_train)",
"(50000, 3072)\n"
]
],
[
[
"### Normalize",
"_____no_output_____"
]
],
[
[
"mean = np.mean(X_train, axis = 0)\nstdDev = np.std(X_train, axis = 0)\n\nX_train -= mean\nX_train /= stdDev\n\nX_test -= mean\nX_test /= stdDev\n\nX_test_format -= mean\nX_test_format /= stdDev",
"_____no_output_____"
]
],
[
[
"### Tensorflow graph",
"_____no_output_____"
],
[
"### Regularizer",
"_____no_output_____"
]
],
[
[
"regularizer = tf.contrib.layers.l2_regularizer(scale=0.1)",
"_____no_output_____"
]
],
[
[
"### Weights and Bias",
"_____no_output_____"
]
],
[
[
"def new_weights(shape):\n return tf.get_variable(name='weights',shape=shape,regularizer=regularizer)\n\ndef new_biases(length):\n return tf.Variable(tf.constant(0.05, shape=[length]))",
"_____no_output_____"
]
],
[
[
"### Batch Norm",
"_____no_output_____"
]
],
[
[
"def batch_norm(x, n_out, phase_train):\n \"\"\"\n Batch normalization on convolutional maps.\n Ref.: http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow\n Args:\n x: Tensor, 4D BHWD input maps\n n_out: integer, depth of input maps\n phase_train: boolean tf.Varialbe, true indicates training phase\n scope: string, variable scope\n Return:\n normed: batch-normalized maps\n \"\"\"\n with tf.variable_scope('batch_norm'):\n beta = tf.Variable(tf.constant(0.0, shape=[n_out]),\n name='beta', trainable=True)\n gamma = tf.Variable(tf.constant(1.0, shape=[n_out]),\n name='gamma', trainable=True)\n batch_mean, batch_var = tf.nn.moments(x, [0,1,2], name='moments')\n ema = tf.train.ExponentialMovingAverage(decay=0.5)\n\n def mean_var_with_update():\n ema_apply_op = ema.apply([batch_mean, batch_var])\n with tf.control_dependencies([ema_apply_op]):\n return tf.identity(batch_mean), tf.identity(batch_var)\n \n mean, var = tf.cond(tf.equal(phase_train,1),\n mean_var_with_update,\n lambda: (ema.average(batch_mean), ema.average(batch_var)))\n normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)\n return normed",
"_____no_output_____"
]
],
[
[
"### Helper function for summaries:",
"_____no_output_____"
]
],
[
[
"def variable_summaries(var):\n \"\"\"Attach a lot of summaries to a Tensor (for TensorBoard visualization).\"\"\"\n with tf.name_scope('summaries'):\n mean = tf.reduce_mean(var)\n tf.summary.scalar('mean', mean)\n with tf.name_scope('stddev'):\n stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))\n tf.summary.scalar('stddev', stddev)\n tf.summary.scalar('max', tf.reduce_max(var))\n tf.summary.scalar('min', tf.reduce_min(var))\n tf.summary.histogram('histogram', var)",
"_____no_output_____"
]
],
[
[
"### Convolutional Layer",
"_____no_output_____"
]
],
[
[
"def new_conv_layer(input, # The previous layer.\n num_input_channels, # Num. channels in prev. layer.\n filter_size, # Width and height of each filter.\n num_filters, # Number of filters.\n use_pooling=True, normalize=True, phase=1, batch_normalization =False): # Use 2x2 max-pooling.\n\n # Shape of the filter-weights for the convolution.\n # This format is determined by the TensorFlow API.\n shape = [filter_size, filter_size, num_input_channels, num_filters]\n\n # Create new weights aka. filters with the given shape.\n with tf.variable_scope('weights'):\n weights = new_weights(shape=shape)\n #tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, weights)\n variable_summaries(weights)\n\n\n \n # Create new biases, one for each filter.\n with tf.variable_scope('biases'):\n biases = new_biases(length=num_filters)\n variable_summaries(biases)\n\n \n \n with tf.variable_scope('convolution_layer'):\n layer = tf.nn.conv2d(input=input,\n filter=weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n # Add the biases to the results of the convolution.\n # A bias-value is added to each filter-channel.\n layer += biases\n\n #layer = tf.layers.batch_normalization(layer, \n # center=True, scale=True, \n # training=phase)\n \n #layer = tf.contrib.layers.batch_norm(layer,is_training=phase)\n # Use pooling to down-sample the image resolution?\n \n # Adding batch_norm\n if batch_normalization == True:\n layer = batch_norm(layer,num_filters, phase)\n \n with tf.variable_scope('Max-Pooling'):\n if use_pooling:\n # This is 2x2 max-pooling, which means that we\n # consider 2x2 windows and select the largest value\n # in each window. Then we move 2 pixels to the next window.\n layer = tf.nn.max_pool(value=layer,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n with tf.variable_scope('ReLU'):\n # Rectified Linear Unit (ReLU).\n # It calculates max(x, 0) for each input pixel x.\n # This adds some non-linearity to the formula and allows us\n # to learn more complicated functions.\n layer = tf.nn.relu(layer)\n\n \n tf.summary.histogram('activations', layer)\n return layer, weights",
"_____no_output_____"
]
],
[
[
"### Flatten Layer",
"_____no_output_____"
]
],
[
[
"def flatten_layer(layer):\n # Get the shape of the input layer.\n layer_shape = layer.get_shape()\n\n # The shape of the input layer is assumed to be:\n # layer_shape == [num_images, img_height, img_width, num_channels]\n\n # The number of features is: img_height * img_width * num_channels\n # We can use a function from TensorFlow to calculate this.\n num_features = layer_shape[1:4].num_elements()\n \n # Reshape the layer to [num_images, num_features].\n # Note that we just set the size of the second dimension\n # to num_features and the size of the first dimension to -1\n # which means the size in that dimension is calculated\n # so the total size of the tensor is unchanged from the reshaping.\n layer_flat = tf.reshape(layer, [-1, num_features])\n\n # The shape of the flattened layer is now:\n # [num_images, img_height * img_width * num_channels]\n\n # Return both the flattened layer and the number of features.\n return layer_flat, num_features",
"_____no_output_____"
]
],
[
[
"### FC Layer",
"_____no_output_____"
]
],
[
[
"def new_fc_layer(input, # The previous layer.\n num_inputs, # Num. inputs from prev. layer.\n num_outputs, # Num. outputs.\n use_relu=True): # Use Rectified Linear Unit (ReLU)?\n\n # Create new weights and biases.\n with tf.variable_scope('weights'):\n weights = new_weights(shape=[num_inputs, num_outputs])\n \n with tf.variable_scope('biases'):\n biases = new_biases(length=num_outputs)\n\n # Calculate the layer as the matrix multiplication of\n # the input and weights, and then add the bias-values.\n with tf.variable_scope('matmul'):\n layer = tf.matmul(input, weights) + biases\n\n # Use ReLU?\n if use_relu:\n with tf.variable_scope('relu'):\n layer = tf.nn.relu(layer)\n\n return layer, weights",
"_____no_output_____"
]
],
[
[
"### Placeholder variables",
"_____no_output_____"
]
],
[
[
"x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')\n\nx_image = tf.reshape(x, [-1, img_size, img_size, num_channels])\n\ny_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')\n\ny_true_cls = tf.argmax(y_true, axis=1)\n\nphase = tf.placeholder(tf.int32, name='phase')\n\nkeep_prob = tf.placeholder(tf.float32, name='keep_prob')",
"_____no_output_____"
]
],
[
[
"### Convolutional Layers",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('Layer-1'):\n layer_conv1, weights_conv1 = \\\n new_conv_layer(input=x_image,\n num_input_channels=num_channels,\n filter_size=filter_size1,\n num_filters=num_filters1,\n use_pooling=True, phase=phase, batch_normalization=True)\n \nwith tf.variable_scope('Layer-2'):\n layer_conv2, weights_conv2 = \\\n new_conv_layer(input=layer_conv1,\n num_input_channels=num_filters1,\n filter_size=filter_size2,\n num_filters=num_filters2,\n use_pooling=True, phase=phase)\n ",
"_____no_output_____"
]
],
[
[
"### Flatten Layer",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('Flatten'):\n layer_flat, num_features = flatten_layer(layer_conv2)\n\nprint(layer_flat,num_features)",
"Tensor(\"Flatten/Reshape:0\", shape=(?, 16384), dtype=float32) 16384\n"
]
],
[
[
"### FC Layers",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('Fully-Connected-1'):\n layer_fc1, weights_fc1 = new_fc_layer(input=layer_flat,\n num_inputs=num_features,\n num_outputs=fc_1,\n use_relu=True)\n \nwith tf.variable_scope('Fully-Connected-2'):\n layer_fc2, weights_fc2 = new_fc_layer(input=layer_fc1,\n num_inputs=fc_1,\n num_outputs=fc_2,\n use_relu=True)\n \nwith tf.variable_scope('Fully-connected-3'):\n layer_fc3, weights_fc3 = new_fc_layer(input=layer_fc2,\n num_inputs=fc_2,\n num_outputs=num_classes,\n use_relu=False)\n \n#with tf.variable_scope('dropout'):\n# layer = tf.nn.dropout(layer_fc2,keep_prob)",
"_____no_output_____"
]
],
[
[
"### Softmax and argmax functions",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('Softmax'):\n y_pred = tf.nn.softmax(layer_fc3)\n y_pred_cls = tf.argmax(y_pred, axis=1)",
"_____no_output_____"
]
],
[
[
"### Cost-Function:",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('cross_entropy_loss'):\n cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc3,\n labels=y_true)\n\n loss = tf.reduce_mean(cross_entropy)\n tf.summary.scalar('cross_entropy', loss)\n \n#with tf.variable_scope('Regularization'):\nreg_variables = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)\nreg_term = tf.contrib.layers.apply_regularization(regularizer, reg_variables)\nloss += reg_term \ncost = loss\ntf.summary.scalar('Total-Loss', cost)",
"_____no_output_____"
]
],
[
[
"### Using Adam Optimizer",
"_____no_output_____"
]
],
[
[
"#with tf.variable_scope('Optimize'):\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-2).minimize(cost)",
"_____no_output_____"
]
],
[
[
"### Metrics",
"_____no_output_____"
]
],
[
[
"with tf.variable_scope('Metrics'):\n correct_prediction = tf.equal(y_pred_cls, y_true_cls)\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n tf.summary.scalar('accuracy', accuracy)",
"_____no_output_____"
]
],
[
[
"### Tensorflow Session",
"_____no_output_____"
]
],
[
[
"session = tf.Session()\nsession.run(tf.global_variables_initializer())",
"_____no_output_____"
]
],
[
[
"### Summaries",
"_____no_output_____"
]
],
[
[
"merged = tf.summary.merge_all()\ntrain_writer = tf.summary.FileWriter(log_dir + '/train', session.graph)\ntest_writer = tf.summary.FileWriter(log_dir + '/test')",
"_____no_output_____"
],
[
"print(X_train.shape)",
"(50000, 3072)\n"
],
[
"def one_hot(y, num_classes=10):\n \"\"\"\n Converts each label index in y to vector with one_hot encoding\n \"\"\"\n y_one_hot = np.zeros((num_classes, y.shape[0]))\n y_one_hot[y, range(y.shape[0])] = 1\n return y_one_hot\n\nY_hot = one_hot(Y_train)\nY_hot = Y_hot.T\n# split test and train:\nx_dev_batch = X_train[0:5000,:]\ny_dev_batch = Y_hot[0:5000,:]\nX_train = X_train[5000:,:]\nY_hot = Y_hot[5000:,:]",
"_____no_output_____"
]
],
[
[
"### Training",
"_____no_output_____"
]
],
[
[
"train_batch_size = batch_size\n\ndef print_status(epoch, feed_dict_train, feed_dict_validate, train_loss, val_loss, step):\n # Calculate the accuracy on the training-set.\n summary, acc = session.run([merged,accuracy], feed_dict=feed_dict_train)\n train_writer.add_summary(summary, step)\n summary, val_acc = session.run([merged,accuracy], feed_dict=feed_dict_validate)\n test_writer.add_summary(summary, step)\n msg = \"Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Training Loss: {3:.3f}, Validation Loss: {4:.3f}\"\n print(msg.format(epoch + 1, acc, val_acc, train_loss, val_loss))\n \n# Counter for total number of iterations performed so far.\ntotal_iterations = 0\nbatch_id = 1\n\ndef get_batch(X, Y, batch_size):\n \"\"\"\n Return minibatch of samples and labels\n \n :param X, y: samples and corresponding labels\n :parma batch_size: minibatch size\n :returns: (tuple) X_batch, y_batch\n \"\"\"\n global batch_id\n if batch_id*batch_size >= X.shape[0]:\n batch_id = 1\n \n if batch_id == 1:\n permutation = np.random.permutation(X.shape[0])\n X = X[permutation,:]\n Y = Y[permutation,:]\n \n lb = batch_size*(batch_id-1)\n ub = batch_size*(batch_id)\n\n X = X[lb:ub,:]\n Y = Y[lb:ub,:]\n batch_id += 1\n return X,Y\n\n\ndef optimize(num_iterations):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n \n best_val_loss = float(\"inf\")\n patience = 0\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images\n x_batch, y_true_batch = get_batch(X_train,Y_hot, train_batch_size)\n \n # getting one hot form:\n #y_true_batch = one_hot(y_true_batch)\n #y_dev_batch = one_hot(y_dev_batch)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch, phase: 1, keep_prob:0.5}\n \n feed_dict_validate = {x: x_dev_batch,\n y_true: y_dev_batch, phase: 0, keep_prob:1.0}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n #print(x_batch.shape,y_true_batch.shape)\n acc = session.run(optimizer, feed_dict=feed_dict_train)\n \n\n # Print status at end of each epoch (defined as full pass through training dataset).\n if i % int(X_train.shape[0]/batch_size) == 0 == 0: \n train_loss = session.run(cost, feed_dict=feed_dict_train)\n val_loss = session.run(cost, feed_dict=feed_dict_validate)\n epoch = int(i / int(X_train.shape[0]/batch_size))\n print('Iteration:',i)\n print_status(epoch, feed_dict_train, feed_dict_validate, train_loss, val_loss, i)\n \n if early_stopping: \n if val_loss < best_val_loss:\n best_val_loss = val_loss\n patience = 0\n else:\n patience += 1\n\n if patience == early_stopping:\n break\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # close the writers\n train_writer.close()\n test_writer.close()\n\n # Print the time-usage.\n print(\"Time elapsed: \" + str(timedelta(seconds=int(round(time_dif)))))",
"_____no_output_____"
],
[
"# Run the optimizer \noptimize(num_iterations=16873)\n\n",
"Iteration: 0\nEpoch 1 --- Training Accuracy: 20.3%, Validation Accuracy: 10.3%, Training Loss: 37.783, Validation Loss: 38.208\nIteration: 703\nEpoch 2 --- Training Accuracy: 73.4%, Validation Accuracy: 49.1%, Training Loss: 3.554, Validation Loss: 4.081\nIteration: 1406\nEpoch 3 --- Training Accuracy: 64.1%, Validation Accuracy: 59.4%, Training Loss: 1.968, Validation Loss: 2.246\nIteration: 2109\nEpoch 4 --- Training Accuracy: 75.0%, Validation Accuracy: 63.1%, Training Loss: 1.387, Validation Loss: 1.665\nIteration: 2812\nEpoch 5 --- Training Accuracy: 73.4%, Validation Accuracy: 64.6%, Training Loss: 1.140, Validation Loss: 1.452\nIteration: 3515\nEpoch 6 --- Training Accuracy: 76.6%, Validation Accuracy: 66.5%, Training Loss: 1.060, Validation Loss: 1.307\nIteration: 4218\nEpoch 7 --- Training Accuracy: 75.0%, Validation Accuracy: 67.1%, Training Loss: 0.947, Validation Loss: 1.183\nIteration: 4921\nEpoch 8 --- Training Accuracy: 85.9%, Validation Accuracy: 69.6%, Training Loss: 0.788, Validation Loss: 1.135\nIteration: 5624\nEpoch 9 --- Training Accuracy: 93.8%, Validation Accuracy: 69.2%, Training Loss: 0.625, Validation Loss: 1.105\nIteration: 6327\nEpoch 10 --- Training Accuracy: 79.7%, Validation Accuracy: 69.9%, Training Loss: 0.797, Validation Loss: 1.080\nIteration: 7030\nEpoch 11 --- Training Accuracy: 93.8%, Validation Accuracy: 70.7%, Training Loss: 0.533, Validation Loss: 1.056\nIteration: 7733\nEpoch 12 --- Training Accuracy: 98.4%, Validation Accuracy: 70.5%, Training Loss: 0.490, Validation Loss: 1.084\nIteration: 8436\nEpoch 13 --- Training Accuracy: 100.0%, Validation Accuracy: 72.1%, Training Loss: 0.440, Validation Loss: 1.030\nIteration: 9139\nEpoch 14 --- Training Accuracy: 95.3%, Validation Accuracy: 72.5%, Training Loss: 0.457, Validation Loss: 1.055\nIteration: 9842\nEpoch 15 --- Training Accuracy: 95.3%, Validation Accuracy: 72.9%, Training Loss: 0.436, Validation Loss: 1.033\nIteration: 10545\nEpoch 16 --- Training Accuracy: 98.4%, Validation Accuracy: 72.6%, Training Loss: 0.407, Validation Loss: 1.060\nIteration: 11248\nEpoch 17 --- Training Accuracy: 92.2%, Validation Accuracy: 68.1%, Training Loss: 0.480, Validation Loss: 1.256\nIteration: 11951\nEpoch 18 --- Training Accuracy: 98.4%, Validation Accuracy: 70.9%, Training Loss: 0.331, Validation Loss: 1.142\nIteration: 12654\nEpoch 19 --- Training Accuracy: 98.4%, Validation Accuracy: 72.8%, Training Loss: 0.383, Validation Loss: 1.127\nIteration: 13357\nEpoch 20 --- Training Accuracy: 100.0%, Validation Accuracy: 68.4%, Training Loss: 0.325, Validation Loss: 1.306\nIteration: 14060\nEpoch 21 --- Training Accuracy: 93.8%, Validation Accuracy: 67.3%, Training Loss: 0.445, Validation Loss: 1.448\nIteration: 14763\nEpoch 22 --- Training Accuracy: 100.0%, Validation Accuracy: 68.5%, Training Loss: 0.347, Validation Loss: 1.354\nIteration: 15466\nEpoch 23 --- Training Accuracy: 100.0%, Validation Accuracy: 69.6%, Training Loss: 0.330, Validation Loss: 1.308\nIteration: 16169\nEpoch 24 --- Training Accuracy: 100.0%, Validation Accuracy: 70.2%, Training Loss: 0.325, Validation Loss: 1.253\nIteration: 16872\nEpoch 25 --- Training Accuracy: 92.2%, Validation Accuracy: 72.7%, Training Loss: 0.428, Validation Loss: 1.220\nTime elapsed: 0:07:02\n"
],
[
"Y_test_hot = one_hot(Y_test)\nY_test_hot = Y_test_hot.T\nfeed_dict_test= {x: X_test,y_true: Y_test_hot, phase: 0, keep_prob:1.0}\nsummary, acc = session.run([merged,accuracy], feed_dict=feed_dict_test)\n",
"_____no_output_____"
],
[
"print(\"Accuracy on test set is: %f%%\"%(acc*100))",
"Accuracy on test set is: 71.399993%\n"
]
],
[
[
"### Write out the results",
"_____no_output_____"
]
],
[
[
"feed_dict_test= {x: X_test_format,y_true: Y_test_hot, phase: 0, keep_prob:1.0}\ny_pred = session.run(y_pred, feed_dict=feed_dict_test)\nsave_predictions('ans1-ck2840.npy', y_pred)",
"_____no_output_____"
],
[
"session.close()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0bd238479d5d92c3e9d86101ed3405b7d61fba7 | 68,051 | ipynb | Jupyter Notebook | notebook/Analysis.ipynb | slouvan/anago | 99a8be0ba2ea42c9c686ff9697ea9e6ef60ca028 | [
"MIT"
] | null | null | null | notebook/Analysis.ipynb | slouvan/anago | 99a8be0ba2ea42c9c686ff9697ea9e6ef60ca028 | [
"MIT"
] | null | null | null | notebook/Analysis.ipynb | slouvan/anago | 99a8be0ba2ea42c9c686ff9697ea9e6ef60ca028 | [
"MIT"
] | 1 | 2021-06-23T13:35:51.000Z | 2021-06-23T13:35:51.000Z | 37.227024 | 1,315 | 0.372647 | [
[
[
"import sys\nsys.executable\n%load_ext autoreload\n%autoreload 2\n%reload_ext autoreload\nimport os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\nimport conlleval\nimport pandas as pd\n\ndef performance_by_slot(file_name):\n x = conlleval.eval_conll_eval(\"/Users/slouvan/sandbox/cross-domain/models/anago/ATIS_BASE_MODEL_RANDOM/test_pred.txt\")\n performance_by_slot = {}\n performance_by_slot['F1'] = [ \"{0:.2f}\".format(m.fscore * 100) for i, m in by_type.items() if len(i) > 0]\n performance_by_slot['Recall'] = [ \"{0:.2f}\".format(m.rec * 100) for i, m in by_type.items() if len(i) > 0]\n performance_by_slot['Precision'] = [ \"{0:.2f}\".format(m.prec * 100) for i, m in by_type.items() if len(i) > 0]\n performance_by_slot['Slot'] = [i for i, m in by_type.items() if len(i) > 0] \n\n df = pd.DataFrame(performance_by_slot)\n df = df[['Slot', 'Precision','Recall', 'F1']]\n \n return df",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"df_test_performance = performance_by_slot(\"/Users/slouvan/sandbox/cross-domain/models/anago/ATIS_BASE_MODEL_RANDOM/test_pred.txt\")\n\n#x = conlleval.eval_conll_eval(\"/Users/slouvan/sandbox/cross-domain/models/anago/ATIS_BASE_MODEL_RANDOM/test_pred.txt\")",
"['stoploc.city_name', '', 'round_trip', 'state_name', 'toloc.state_code', 'fromloc.city_name', 'fare_amount', 'flight_stop', 'arrive_time.start_time', 'cost_relative', 'meal_code', 'connect', 'arrive_time.end_time', 'depart_time.period_mod', 'depart_time.start_time', 'depart_date.day_number', 'meal_description', 'state_code', 'arrive_time.time', 'airport_code', 'depart_date.today_relative', 'airline_name', 'airport_name', 'flight_time', 'flight_number', 'return_date.day_name', 'arrive_time.period_of_day', 'flight_mod', 'fare_basis_code', 'depart_time.time_relative', 'depart_date.month_name', 'aircraft_code', 'arrive_date.date_relative', 'class_type', 'days_code', 'fromloc.state_code', 'toloc.state_name', 'toloc.airport_name', 'arrive_date.day_number', 'toloc.city_name', 'period_of_day', 'flight_days', 'city_name', 'arrive_date.day_name', 'fromloc.airport_code', 'arrive_date.month_name', 'airline_code', 'mod', 'depart_time.end_time', 'fromloc.state_name', 'toloc.airport_code', 'depart_time.time', 'depart_date.date_relative', 'toloc.country_name', 'day_name', 'fromloc.airport_name', 'restriction_code', 'or', 'depart_date.year', 'transport_type', 'depart_date.day_name', 'return_date.date_relative', 'arrive_time.time_relative', 'economy', 'meal', 'depart_time.period_of_day']\n['stoploc.city_name', 'round_trip', 'state_name', 'toloc.state_code', 'fromloc.city_name', 'fare_amount', 'flight_stop', 'arrive_time.start_time', 'cost_relative', 'economy', 'connect', 'arrive_time.end_time', 'depart_time.period_mod', 'depart_time.start_time', 'depart_date.day_number', 'meal_description', 'state_code', 'arrive_time.time', 'airport_code', 'depart_date.today_relative', 'airline_name', 'airport_name', 'flight_time', 'flight_number', 'fromloc.state_name', 'arrive_time.period_of_day', 'flight_mod', 'fare_basis_code', 'depart_time.time_relative', 'depart_date.month_name', 'aircraft_code', 'arrive_date.date_relative', 'class_type', 'days_code', 'fromloc.state_code', 'toloc.state_name', 'toloc.airport_name', 'arrive_date.day_number', 'toloc.city_name', 'period_of_day', 'flight_days', 'city_name', 'arrive_date.day_name', 'fromloc.airport_code', 'arrive_date.month_name', 'airline_code', 'mod', 'return_date.today_relative', 'depart_time.end_time', 'toloc.airport_code', 'depart_time.time', 'depart_date.date_relative', 'toloc.country_name', 'day_name', 'fromloc.airport_name', 'restriction_code', 'meal', 'depart_date.year', 'transport_type', 'depart_date.day_name', 'return_date.date_relative', 'arrive_time.time_relative', 'stoploc.airport_name', 'or', 'depart_time.period_of_day']\nprocessed 10950 tokens with 2835 phrases; found: 2855 phrases; correct: 2721.\naccuracy: 98.37%; precision: 95.31%; recall: 95.98%; FB1: 95.64\n : precision: 0.00%; recall: 0.00%; FB1: 0.00 0\n aircraft_code: precision: 90.32%; recall: 84.85%; FB1: 87.50 31\n airline_code: precision: 97.14%; recall: 100.00%; FB1: 98.55 35\n airline_name: precision: 91.59%; recall: 97.03%; FB1: 94.23 107\n airport_code: precision: 66.67%; recall: 44.44%; FB1: 53.33 6\n airport_name: precision: 83.33%; recall: 47.62%; FB1: 60.61 12\narrive_date.date_relative: precision: 100.00%; recall: 100.00%; FB1: 100.00 2\narrive_date.day_name: precision: 78.57%; recall: 100.00%; FB1: 88.00 14\narrive_date.day_number: precision: 71.43%; recall: 83.33%; FB1: 76.92 7\narrive_date.month_name: precision: 62.50%; recall: 83.33%; FB1: 71.43 8\narrive_time.end_time: precision: 88.89%; recall: 100.00%; FB1: 94.12 9\narrive_time.period_of_day: precision: 75.00%; recall: 100.00%; 
FB1: 85.71 8\narrive_time.start_time: precision: 100.00%; recall: 100.00%; FB1: 100.00 8\n arrive_time.time: precision: 94.29%; recall: 97.06%; FB1: 95.65 35\narrive_time.time_relative: precision: 87.10%; recall: 87.10%; FB1: 87.10 31\n city_name: precision: 94.29%; recall: 57.89%; FB1: 71.74 35\n class_type: precision: 88.89%; recall: 100.00%; FB1: 94.12 27\n connect: precision: 100.00%; recall: 100.00%; FB1: 100.00 6\n cost_relative: precision: 100.00%; recall: 97.30%; FB1: 98.63 36\n day_name: precision: 100.00%; recall: 50.00%; FB1: 66.67 1\n days_code: precision: 100.00%; recall: 100.00%; FB1: 100.00 1\ndepart_date.date_relative: precision: 89.47%; recall: 100.00%; FB1: 94.44 19\ndepart_date.day_name: precision: 99.06%; recall: 99.06%; FB1: 99.06 212\ndepart_date.day_number: precision: 98.15%; recall: 96.36%; FB1: 97.25 54\ndepart_date.month_name: precision: 98.18%; recall: 96.43%; FB1: 97.30 55\ndepart_date.today_relative: precision: 88.89%; recall: 88.89%; FB1: 88.89 9\n depart_date.year: precision: 100.00%; recall: 100.00%; FB1: 100.00 3\ndepart_time.end_time: precision: 66.67%; recall: 66.67%; FB1: 66.67 3\ndepart_time.period_mod: precision: 83.33%; recall: 100.00%; FB1: 90.91 6\ndepart_time.period_of_day: precision: 100.00%; recall: 93.08%; FB1: 96.41 121\ndepart_time.start_time: precision: 100.00%; recall: 100.00%; FB1: 100.00 3\n depart_time.time: precision: 87.69%; recall: 100.00%; FB1: 93.44 65\ndepart_time.time_relative: precision: 96.88%; recall: 95.38%; FB1: 96.12 64\n economy: precision: 100.00%; recall: 100.00%; FB1: 100.00 6\n fare_amount: precision: 100.00%; recall: 100.00%; FB1: 100.00 2\n fare_basis_code: precision: 94.44%; recall: 100.00%; FB1: 97.14 18\n flight_days: precision: 100.00%; recall: 100.00%; FB1: 100.00 10\n flight_mod: precision: 65.52%; recall: 79.17%; FB1: 71.70 29\n flight_number: precision: 84.62%; recall: 100.00%; FB1: 91.67 13\n flight_stop: precision: 100.00%; recall: 100.00%; FB1: 100.00 21\n flight_time: precision: 50.00%; recall: 100.00%; FB1: 66.67 2\nfromloc.airport_code: precision: 55.56%; recall: 100.00%; FB1: 71.43 9\nfromloc.airport_name: precision: 50.00%; recall: 100.00%; FB1: 66.67 24\nfromloc.city_name: precision: 98.46%; recall: 99.86%; FB1: 99.15 714\nfromloc.state_code: precision: 100.00%; recall: 100.00%; FB1: 100.00 23\nfromloc.state_name: precision: 94.44%; recall: 100.00%; FB1: 97.14 18\n meal: precision: 94.12%; recall: 100.00%; FB1: 96.97 17\n meal_code: precision: 0.00%; recall: 0.00%; FB1: 0.00 0\n meal_description: precision: 90.00%; recall: 90.00%; FB1: 90.00 10\n mod: precision: 100.00%; recall: 50.00%; FB1: 66.67 1\n or: precision: 50.00%; recall: 100.00%; FB1: 66.67 6\n period_of_day: precision: 100.00%; recall: 75.00%; FB1: 85.71 3\n restriction_code: precision: 100.00%; recall: 100.00%; FB1: 100.00 4\nreturn_date.date_relative: precision: 0.00%; recall: 0.00%; FB1: 0.00 1\nreturn_date.day_name: precision: 0.00%; recall: 0.00%; FB1: 0.00 0\nreturn_date.today_relative: precision: 0.00%; recall: 0.00%; FB1: 0.00 1\n round_trip: precision: 100.00%; recall: 97.26%; FB1: 98.61 71\n state_code: precision: 100.00%; recall: 100.00%; FB1: 100.00 1\n state_name: precision: 0.00%; recall: 0.00%; FB1: 0.00 1\nstoploc.airport_name: precision: 0.00%; recall: 0.00%; FB1: 0.00 1\nstoploc.city_name: precision: 95.24%; recall: 100.00%; FB1: 97.56 21\ntoloc.airport_code: precision: 100.00%; recall: 75.00%; FB1: 85.71 3\ntoloc.airport_name: precision: 100.00%; recall: 100.00%; FB1: 100.00 3\n toloc.city_name: precision: 97.13%; recall: 
99.30%; FB1: 98.20 732\ntoloc.country_name: precision: 100.00%; recall: 100.00%; FB1: 100.00 1\n toloc.state_code: precision: 100.00%; recall: 100.00%; FB1: 100.00 18\n toloc.state_name: precision: 92.86%; recall: 92.86%; FB1: 92.86 28\n transport_type: precision: 90.00%; recall: 90.00%; FB1: 90.00 10\n"
],
[
"df_test_performance",
"_____no_output_____"
],
[
"qgrid.show_grid(df, show_toolbar=True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0bd359a24b12e85c49ccc1385b6e2c1819e9cd2 | 337,150 | ipynb | Jupyter Notebook | .ipynb_checkpoints/ParsingTechniques-checkpoint.ipynb | jhkloss/libg3n_parsing_notebook | e7afef1cc0baf777df334350e967a5d1873998e2 | [
"Unlicense"
] | null | null | null | .ipynb_checkpoints/ParsingTechniques-checkpoint.ipynb | jhkloss/libg3n_parsing_notebook | e7afef1cc0baf777df334350e967a5d1873998e2 | [
"Unlicense"
] | null | null | null | .ipynb_checkpoints/ParsingTechniques-checkpoint.ipynb | jhkloss/libg3n_parsing_notebook | e7afef1cc0baf777df334350e967a5d1873998e2 | [
"Unlicense"
] | null | null | null | 209.409938 | 63,684 | 0.884491 | [
[
[
"# Evaluation von Parsingtechniken\n\nDas Parsing von Textdateien ist ein wichtiger Mechanismus, welcher wärend Informationsbearbeitung einen hohen Stellenwert innehält. \n\nIn order to be able to choose an adequate technique to be able to parse our custom DSL, we need to evaluate multiple of these techniques first.\n\n*The following techniques are going to be evaluated and compared:*\n\n- Parsing with an complete custom build Parser\n- Parsing with help of the Pyparsing module\n- Parsing an YAML config\n- Parsing an XML config",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nRUNS = 20",
"_____no_output_____"
]
],
[
[
"## Measuring Technique\n\nTo measure the results we use an combination of the python performance timer functionality and an custom class. This custom class is going to measure the execution time of the different techniques. Thus the execution time is measured multiple times in a row to avoid false measurements due to system load or caching mechanisms.",
"_____no_output_____"
],
[
"## Performance Timer Class",
"_____no_output_____"
]
],
[
[
"import time\n\n\nclass PerformanceTimer:\n timers = {}\n\n def __init__(self, name: str = \"\", iterations: int = 20):\n self.running = False\n self.start = None\n self.name = name\n self.elapsed = 0.0\n self.measurements = {}\n self.successful_measurements = 0\n self.iterations = iterations\n\n PerformanceTimer.timers[self.name] = self\n\n def measure_function(self, func, *args):\n for i, arg in enumerate(args):\n self.measurements[i] = []\n for j in range(self.iterations):\n self.start_timer()\n func(arg)\n self.stop_timer()\n self.measurements[i].append(self.elapsed)\n self.successful_measurements += 1\n self.reset()\n\n def start_timer(self):\n if self.running is False:\n self.start = time.perf_counter()\n self.running = True\n else:\n raise Exception('Timer already started.')\n\n def stop_timer(self):\n if self.running is True:\n # Elapsed time in ms\n self.elapsed = (time.perf_counter() - self.start) * 1000\n self.running = False\n else:\n raise Exception('Timer is not running.')\n\n def reset(self):\n self.start = None\n self.elapsed = 0.0\n self.running = False\n\n def average_time(self):\n result = []\n for measurement_set in self.measurements.values():\n result.append(sum(measurement_set) / self.iterations)\n return result\n\n def print(self):\n print(('Timer: ' + self.name).center(50, '-'))\n print('Finished: ' + str(not self.running))\n print('Sample Sets: ' + str(len(self.measurements)))\n print('Measurements: ' + str(self.successful_measurements))\n\n if self.measurements:\n print('Measured Times: ' + str(self.measurements))\n else:\n print('Elapsed Time: ' + str(self.elapsed))\n\n print('\\n')",
"_____no_output_____"
]
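,
[
"# Usage sketch (added for illustration; not part of the original measurements):\n# besides measure_function(), the timer can also be driven manually with\n# start_timer()/stop_timer(); the elapsed time is reported in milliseconds.\nsketch_timer = PerformanceTimer('manual-usage-sketch', iterations=1)\nsketch_timer.start_timer()\nsum(range(10000))  # stand-in for any call whose runtime should be measured\nsketch_timer.stop_timer()\nprint(sketch_timer.elapsed)",
"_____no_output_____"
]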
],
[
[
"## Manual Parsing",
"_____no_output_____"
]
],
[
[
"from parse_manual.parser import parse as parse_manual\n\nmanual_timer = PerformanceTimer('Manual Parsing', RUNS)\nmanual_timer.measure_function(parse_manual, './samples/sample.gen', './samples/sample-40.gen',\n './samples/sample-80.gen', './samples/sample-160.gen')\nmanual_timer.print()",
"--------------Timer: Manual Parsing---------------\nFinished: True\nSample Sets: 4\nMeasurements: 80\nMeasured Times: {0: [2.3339580000083515, 0.5242499998985295, 0.39779200005796156, 0.38650000010420626, 0.3739999999652355, 0.4582500000651635, 0.4288749998977437, 0.3689580000809656, 0.3590409999105759, 0.34633400014172366, 0.35554099986256915, 0.35825000009026553, 0.37883300001340103, 0.4088340001544566, 0.37649999990208016, 0.3593750000163709, 0.4679169999235455, 0.3629170000749582, 0.5852499998582061, 0.517458000103943], 1: [0.9704579999834095, 0.7147910000639968, 0.6922909999502735, 2.2773339999275777, 1.3870829998268164, 0.6362090000493481, 1.1564999999791326, 1.2967080001544673, 1.5514999997776613, 2.532041999984358, 0.6817080000018905, 0.6018750000293949, 0.616416999946523, 0.5852499998582061, 0.6099170000197773, 0.5822499999794672, 0.6015829999341804, 0.5674169999565493, 0.6280410000272241, 0.5860839999058953], 2: [1.3046250001025328, 1.1537919999682344, 1.1437499999829015, 1.1464579999937996, 1.051582999934908, 1.1054579999836278, 1.1818749999292777, 1.046791999897323, 1.102583000147206, 1.04579100002411, 1.1213329999009147, 1.037125000038941, 1.1325840000608878, 1.0628330001054564, 1.1237080000228161, 1.05966600017382, 1.080332999890743, 1.0006659999817202, 1.05770800018945, 1.052833999892755], 3: [2.4517499998637504, 2.002333000064027, 1.968999999917287, 2.207333000114886, 1.9735419998596626, 1.971000000139611, 2.0424589999947784, 1.9777079999130365, 2.1350000001802982, 1.9610419999480655, 2.0440829998733534, 1.9697080001606082, 2.0634160000554402, 1.97008400004961, 2.046000000063941, 1.9819159999769909, 2.060083000060331, 2.008040999953664, 2.198458000066239, 1.9874160000199481]}\n\n\n"
]
],
[
[
"### Function execution time development",
"_____no_output_____"
]
],
[
[
"plt.subplot(2, 2, 1)\nplt.plot(manual_timer.measurements[0])\nplt.title('20 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 2)\nplt.plot(manual_timer.measurements[1])\nplt.title('40 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 3)\nplt.plot(manual_timer.measurements[2])\nplt.title('80 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 4)\nplt.plot(manual_timer.measurements[3])\nplt.title('160 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.rcParams['figure.figsize'] = [30 / 2.54, 15 / 2.54]\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Individual function execution time results",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(manual_timer.measurements)\ndf.columns = ['20 Datapoints', '40 Datapoints', '80 Datapoints', '160 Datapoints']\ndf",
"_____no_output_____"
]
],
[
[
"## Pyparsing",
"_____no_output_____"
]
],
[
[
"from parse_pyparsing.parser import parse as parse_pyparsing\n\n# Pyparsing\npyparsing_timer = PerformanceTimer('Pyparsing', RUNS)\npyparsing_timer.measure_function(parse_pyparsing, './samples/sample.gen', './samples/sample-40.gen',\n './samples/sample-80.gen', './samples/sample-160.gen')\npyparsing_timer.print()",
"-----------------Timer: Pyparsing-----------------\nFinished: True\nSample Sets: 4\nMeasurements: 80\nMeasured Times: {0: [3.2524169998850994, 3.1476249998831918, 2.429042000130721, 2.4144580002030125, 2.373874999875625, 5.665041000156634, 3.501332999803708, 4.405624999890279, 2.4345419999463047, 2.9748330000529677, 3.7193749999460124, 2.3898750000626023, 2.3811250000562723, 2.4906250000640284, 5.688458000122409, 4.188250000197513, 3.1435000000783475, 3.0540420000306767, 2.396999999973559, 2.388791000157653], 1: [4.995667000002868, 4.811999999901673, 11.596833000112383, 10.095874999933585, 5.4874170000402955, 6.808207999938531, 6.080083000142622, 6.773457999997845, 6.900707999875522, 4.810834000181785, 4.749833999994735, 5.035040999928242, 4.753583000137951, 8.32304200002909, 16.745624999884967, 5.9475420000580925, 9.066625000059503, 15.271542000164118, 6.408749999991414, 8.685582999987673], 2: [17.386125000030006, 13.08333300016784, 10.397042000022338, 9.455042000126923, 9.430250000150409, 9.411583000201063, 9.403291000126046, 9.706875000119908, 9.661083000082726, 9.303707999833932, 9.035291999907713, 8.886625000059212, 8.781167000051937, 8.77683400017304, 8.680875000209198, 8.700834000137547, 8.65412500002094, 8.627291000038895, 8.865958000114915, 8.661459000222749], 3: [17.371249999996508, 17.808457999990424, 17.459457999848382, 17.338291999976718, 17.340250000188462, 17.36720899998545, 17.380084000024, 17.514540999854944, 17.346374999988257, 17.2628339998937, 17.369334000022718, 17.65862500019466, 17.275333000043247, 17.424249999976382, 17.336833999934242, 17.404208000016297, 17.702584000062416, 17.388833000040904, 17.330291999996916, 17.384125000035056]}\n\n\n"
]
],
[
[
"### Function execution time development",
"_____no_output_____"
]
],
[
[
"plt.subplot(2, 2, 1)\nplt.plot(pyparsing_timer.measurements[0])\nplt.title('20 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 2)\nplt.plot(pyparsing_timer.measurements[1])\nplt.title('40 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 3)\nplt.plot(pyparsing_timer.measurements[2])\nplt.title('80 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 4)\nplt.plot(pyparsing_timer.measurements[3])\nplt.title('160 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.rcParams['figure.figsize'] = [30 / 2.54, 15 / 2.54]\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Individual function execution time results",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(pyparsing_timer.measurements)\ndf.columns = ['20 Datapoints', '40 Datapoints', '80 Datapoints', '160 Datapoints']\ndf",
"_____no_output_____"
]
],
[
[
"## YAML Parsing",
"_____no_output_____"
]
],
[
[
"from parse_yaml.parser import parse as parse_yaml\n\n#YAML\nyaml_timer = PerformanceTimer('YAML Parsing', RUNS)\nyaml_timer.measure_function(parse_yaml, './samples/sample.yaml', './samples/sample-40.yaml', './samples/sample-80.yaml',\n './samples/sample-160.yaml')\nyaml_timer.print()",
"---------------Timer: YAML Parsing----------------\nFinished: True\nSample Sets: 4\nMeasurements: 80\nMeasured Times: {0: [10.130707999906008, 8.419750000030035, 8.157375000109823, 7.8233750000435975, 7.933374999993248, 7.832582999981241, 7.7690000000529835, 7.812834000105795, 8.685000000014043, 8.887166000022262, 8.725040999934208, 7.803125000009459, 8.01695799987101, 7.8147090000584285, 7.806000000073254, 7.52337499989153, 7.533041999977286, 7.49833400004718, 7.556541999974797, 7.696542000076079], 1: [15.486292000105095, 74.43137499990371, 14.721957999881852, 15.642040999864548, 14.771875000178625, 15.09816700013289, 14.886958000033701, 14.729959000078452, 15.025915999785866, 16.098249999913605, 14.833250000037879, 14.940207999870836, 15.125291000003926, 14.827416999878551, 14.79212499998539, 15.060208999784663, 14.773166999930254, 14.926458999980241, 16.784333000032348, 15.247000000044864], 2: [29.98533300001327, 28.359832999967693, 28.426250000165965, 28.260249999902953, 28.48629099980826, 28.30041699985486, 28.972916999919107, 28.32891700018081, 28.58820899996317, 29.18491700006598, 28.642875000059576, 28.82812500001819, 28.511457999911727, 28.69187500004955, 62.127207999992606, 30.134625000073356, 30.2961250001772, 30.968125000072177, 29.284000000188826, 28.407207999862294], 3: [58.24387500001649, 56.97466699984943, 58.84429100001398, 86.4997090000088, 56.92375000012362, 56.981666999945446, 56.90895900011128, 56.91441599992686, 56.88224999994418, 87.90825000005498, 56.95141599994713, 56.91495800010671, 56.93437499985521, 57.20229100006691, 88.89170800011925, 57.19283300004463, 56.91729199998008, 57.1102090000295, 57.019416999992245, 56.85970800004725]}\n\n\n"
]
],
[
[
"### Function execution time development",
"_____no_output_____"
]
],
[
[
"plt.subplot(2, 2, 1)\nplt.plot(yaml_timer.measurements[0])\nplt.title('20 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 2)\nplt.plot(yaml_timer.measurements[1])\nplt.title('40 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 3)\nplt.plot(yaml_timer.measurements[2])\nplt.title('80 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 4)\nplt.plot(yaml_timer.measurements[3])\nplt.title('160 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.rcParams['figure.figsize'] = [30 / 2.54, 15 / 2.54]\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Individual function execution time results",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(yaml_timer.measurements)\ndf.columns = ['20 Datapoints', '40 Datapoints', '80 Datapoints', '160 Datapoints']\ndf",
"_____no_output_____"
]
],
[
[
"## XML Parsing",
"_____no_output_____"
]
],
[
[
"from parse_xml.parser import parse as parse_xml\n\n#XML\nxml_timer = PerformanceTimer('XML Parsing', RUNS)\nxml_timer.measure_function(parse_xml, './samples/sample.xml', './samples/sample-40.xml', './samples/sample-80.xml',\n './samples/sample-160.xml')\nxml_timer.print()",
"----------------Timer: XML Parsing----------------\nFinished: True\nSample Sets: 4\nMeasurements: 80\nMeasured Times: {0: [1.24070800006848, 0.1318340000580065, 0.09770900010153127, 0.31420899995282525, 0.11433399981797265, 0.6731660000696138, 0.10866599996006698, 0.09591699995326053, 34.85795899996447, 0.1462080001601862, 0.09691700006442261, 0.09350000004815229, 0.08983299994724803, 0.08887500007404014, 0.08891700008462067, 0.08779099994171702, 0.08816600006866793, 0.08712499993634992, 0.08683300006850914, 0.08620899984634889], 1: [0.27154200006407336, 0.17124999999396096, 0.16249999998763087, 0.1831249999213469, 0.1576249999288848, 0.15483300012419932, 0.17695799988359795, 0.1541250001082517, 0.15295899993361672, 0.17491600010544062, 0.15366699994956434, 0.154249999923195, 0.176250000095024, 0.15454200001840945, 0.1537919999918813, 0.17595799999980954, 0.16058300002441683, 0.1539580000553542, 0.17508399992038903, 0.15387500002361776], 2: [0.3852089998872543, 0.3187080001225695, 0.3121249999367137, 0.31083399994713545, 0.36641600013354036, 0.3129169999738224, 0.3112910001163982, 0.31362499998977, 0.3109589999894524, 0.3110840000317694, 0.310416999809604, 0.30962499999986903, 0.31029099977786245, 0.3108749999682914, 0.31049999984134047, 0.3110419997938152, 0.3427910000937118, 0.3185000000485161, 0.31012499994176324, 0.3110830000423448], 3: [0.8410840000578901, 0.64812499999789, 0.7110829999419366, 0.618374999930893, 0.615625000136788, 0.6205409999893163, 0.7166670000060549, 0.6279159999849071, 0.6160420000469458, 0.6183750001582666, 0.7014999998773419, 0.6427079999866692, 0.619165999978577, 0.6184579999626294, 0.7053330000417191, 0.6185419999837904, 0.6152500000098371, 0.6209999999100546, 0.7035839998934534, 0.7669999999961874]}\n\n\n"
]
],
[
[
"### Function execution time development",
"_____no_output_____"
]
],
[
[
"plt.subplot(2, 2, 1)\nplt.plot(xml_timer.measurements[0])\nplt.title('20 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 2)\nplt.plot(xml_timer.measurements[1])\nplt.title('40 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 3)\nplt.plot(xml_timer.measurements[2])\nplt.title('80 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.subplot(2, 2, 4)\nplt.plot(xml_timer.measurements[3])\nplt.title('160 Datapoints')\nplt.xlim(1, RUNS)\nplt.xlabel('runs')\nplt.ylabel('time in ms')\n\nplt.rcParams['figure.figsize'] = [30 / 2.54, 15 / 2.54]\nplt.tight_layout()\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Individual function execution time results",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(xml_timer.measurements)\ndf.columns = ['20 Datapoints', '40 Datapoints', '80 Datapoints', '160 Datapoints']\ndf",
"_____no_output_____"
]
],
[
[
"## Comparison",
"_____no_output_____"
]
],
[
[
"manual_avg = manual_timer.average_time()\npyparsing_avg = pyparsing_timer.average_time()\nyaml_avg = yaml_timer.average_time()\nxml_avg = xml_timer.average_time()\n\n# Define y-axis labels\nlabels = ['Manual', 'Pyparsing', 'YAML', 'XML']\n\n# Define y values\ny = np.arange(len(labels))\n\n# Define label helper function\ndef add_labels(bars):\n for bar in bars:\n width = bar.get_width()\n label_y = bar.get_y() + bar.get_height() / 2\n plt.text(width, label_y, s=f'{round(width, 4)}')\n\n# Define plot helper function\ndef show_bar_plot(values, title):\n # Create bar plot\n bars = plt.barh(y, values, color=['#6CB1FF', '#FFE000', '#00FF7B', '#FF9800'])\n\n # Axis labels and styling\n plt.yticks(y, labels)\n plt.xlabel('Time in ms')\n add_labels(bars)\n\n plt.title(title)\n plt.show()\n\n# Show Plots\n\n## 20 Datapoints\nx = [manual_avg[0], pyparsing_avg[0], yaml_avg[0], xml_avg[0]]\nshow_bar_plot(x, 'Time comparison 20 datapoints')\n\n## 40 Datapoints\nx = [manual_avg[1], pyparsing_avg[1], yaml_avg[1], xml_avg[1]]\nshow_bar_plot(x, 'Time comparison 40 datapoints')\n\n## 80 Datapoints\nx = [manual_avg[2], pyparsing_avg[2], yaml_avg[2], xml_avg[2]]\nshow_bar_plot(x, 'Time comparison 80 datapoints')\n\n## 160 Datapoints\nx = [manual_avg[3], pyparsing_avg[3], yaml_avg[3], xml_avg[3]]\nshow_bar_plot(x, 'Time comparison 160 datapoints')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bd4103d1d4460621f66ee04ded89b1cd33f81f | 286,736 | ipynb | Jupyter Notebook | 6-3-silsup.ipynb | wonyoungso/hg-mldl | 43eea14c35dede1af5d6b59e86a06a3030ea2083 | [
"MIT"
] | null | null | null | 6-3-silsup.ipynb | wonyoungso/hg-mldl | 43eea14c35dede1af5d6b59e86a06a3030ea2083 | [
"MIT"
] | null | null | null | 6-3-silsup.ipynb | wonyoungso/hg-mldl | 43eea14c35dede1af5d6b59e86a06a3030ea2083 | [
"MIT"
] | null | null | null | 1,412.492611 | 198,305 | 0.958443 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"fruits = np.load(\"./fruits_300.npy\")",
"_____no_output_____"
],
[
"fruits_2d = fruits.reshape(-1, 100 * 100)",
"_____no_output_____"
],
[
"from sklearn.decomposition import PCA\n\npca = PCA(n_components=50)\npca.fit(fruits_2d)",
"_____no_output_____"
],
[
"def draw_fruits(arr, ratio=1):\n n = len(arr)\n rows = int(np.ceil(n / 10))\n cols = n if rows < 2 else 10\n figs, axes = plt.subplots(rows, cols, figsize=(cols * ratio, rows * ratio), squeeze=False)\n\n for i in range(rows):\n for j in range(cols):\n idx = i * 10 + j\n\n if idx < n:\n axes[i, j].imshow(arr[idx], cmap='gray_r')\n \n axes[i, j].axis('off')\n \n plt.show()",
"_____no_output_____"
],
[
"draw_fruits(pca.components_.reshape(-1, 100, 100))",
"_____no_output_____"
],
[
"fruits_2d.shape",
"_____no_output_____"
],
[
"fruits_pca = pca.transform(fruits_2d)",
"_____no_output_____"
],
[
"fruits_pca.shape",
"_____no_output_____"
],
[
"draw_fruits(fruits_pca.reshape(-1, 5, 10))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bd574b73d1636c5b7b365fd6d7459c60fbe1ba | 105,856 | ipynb | Jupyter Notebook | code/ML-Pima-Indian-Diabetes-Prediction.ipynb | kimamo/Machine-Learning | cc2f43008bd0d101dad3444abd916b6ea98133bb | [
"MIT"
] | null | null | null | code/ML-Pima-Indian-Diabetes-Prediction.ipynb | kimamo/Machine-Learning | cc2f43008bd0d101dad3444abd916b6ea98133bb | [
"MIT"
] | null | null | null | code/ML-Pima-Indian-Diabetes-Prediction.ipynb | kimamo/Machine-Learning | cc2f43008bd0d101dad3444abd916b6ea98133bb | [
"MIT"
] | null | null | null | 60.28246 | 18,368 | 0.73139 | [
[
[
"## Use the *Machine Learning Workflow* to process & transform Pima Indian data to create a prediction model.\n### This model must predict which people are likely to develop diabetes with 70% accuracy! ",
"_____no_output_____"
],
[
"##Import Libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n#plot inline instead of seperate windows\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Load and review data\n",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('./data/pima-data.csv')\n",
"_____no_output_____"
],
[
"df.shape\n",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.isnull().values.any()",
"_____no_output_____"
],
[
"def plot_correlatedValues(df, size=11):\n corr = df.corr()\n fig, ax = plt.subplots(figsize=(size,size))\n ax.matshow(corr)\n plt.xticks(range(len(corr.columns)),corr.columns)\n plt.yticks(range(len(corr.columns)),corr.columns)",
"_____no_output_____"
],
[
"plot_correlatedValues(df)",
"_____no_output_____"
],
[
"df.corr()",
"_____no_output_____"
],
[
"del df['skin']\n",
"_____no_output_____"
],
[
"df.head(5)",
"_____no_output_____"
],
[
"plot_correlatedValues(df)\n",
"_____no_output_____"
]
],
[
[
"change diabetes column values from True/False to 1/0\n",
"_____no_output_____"
]
],
[
[
"diabetes_map = {True:1, False:0}",
"_____no_output_____"
],
[
"diabetes_map",
"_____no_output_____"
],
[
"df['diabetes'] = df['diabetes'].map(diabetes_map)\n",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
]
],
[
[
"# check true/false ratio",
"_____no_output_____"
]
],
[
[
"num_true = len(df.loc[df['diabetes']==True])\n \nnum_false = len(df.loc[df['diabetes']==False] )\n ",
"_____no_output_____"
]
],
[
[
"print(\"# of cases the diabetes is True:- {0}({1:2.2f}%)\".format(num_true, (num_true /(num_true+num_false))*100 ))\n\nprint(\"# of cases the diabetes is False:- {0}({1:2.2f}%)\".format(num_false, (num_false /(num_true+num_false))*100 ))",
"_____no_output_____"
],
[
"# Splitting the data\n\n70% training : 30% testing",
"_____no_output_____"
]
],
[
[
"from sklearn.cross_validation import train_test_split\n",
"_____no_output_____"
],
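[
"# Side note (sketch, not executed): sklearn.cross_validation is the legacy module\n# name used when this notebook was written; in newer scikit-learn versions the\n# same helper is imported from sklearn.model_selection:\n#\n# from sklearn.model_selection import train_test_split",
"_____no_output_____"
],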
[
"feature_col_names = ['num_preg','glucose_conc',\t'diastolic_bp','thickness','insulin','bmi','diab_pred','age']\npredicted_class_name = ['diabetes']",
"_____no_output_____"
],
[
"X = df[feature_col_names].values\ny = df[predicted_class_name].values\nsplit_test_size = 0.30\n\nX_train, X_test , y_train, y_test = train_test_split(X,y,test_size= split_test_size, random_state=42)\n",
"_____no_output_____"
],
[
"print(' {0}({0:2.2f}%) in training set '.format((len(X_train)/len(df.index))*100))\nprint(' {0}({0:2.2f}%) in test set '.format((len(X_test)/len(df.index))*100))",
" 69.921875(69.92%) in training set \n 30.078125(30.08%) in test set \n"
]
],
[
[
"Confirm the split of predict values was correctly done",
"_____no_output_____"
]
],
[
[
"print(\"Original True : {0} ({1:0.2f}%)\".format(len(df.loc[df['diabetes'] == 1]), (len(df.loc[df['diabetes'] == 1])/len(df.index)) * 100.0))\nprint(\"Original False : {0} ({1:0.2f}%)\".format(len(df.loc[df['diabetes'] == 0]), (len(df.loc[df['diabetes'] == 0])/len(df.index)) * 100.0))\nprint(\"\")\nprint(\"Training True : {0} ({1:0.2f}%)\".format(len(y_train[y_train[:] == 1]), (len(y_train[y_train[:] == 1])/len(y_train) * 100.0)))\nprint(\"Training False : {0} ({1:0.2f}%)\".format(len(y_train[y_train[:] == 0]), (len(y_train[y_train[:] == 0])/len(y_train) * 100.0)))\nprint(\"\")\nprint(\"Test True : {0} ({1:0.2f}%)\".format(len(y_test[y_test[:] == 1]), (len(y_test[y_test[:] == 1])/len(y_test) * 100.0)))\nprint(\"Test False : {0} ({1:0.2f}%)\".format(len(y_test[y_test[:] == 0]), (len(y_test[y_test[:] == 0])/len(y_test) * 100.0)))",
"Original True : 268 (34.90%)\nOriginal False : 500 (65.10%)\n\nTraining True : 188 (35.01%)\nTraining False : 349 (64.99%)\n\nTest True : 80 (34.63%)\nTest False : 151 (65.37%)\n"
]
],
[
[
"rows have have unexpected 0 values?",
"_____no_output_____"
]
],
[
[
"print(\"# rows in dataframe {0}\".format(len(df)))\nprint(\"# rows missing glucose_conc: {0}\".format(len(df.loc[df['glucose_conc'] == 0])))\nprint(\"# rows missing diastolic_bp: {0}\".format(len(df.loc[df['diastolic_bp'] == 0])))\nprint(\"# rows missing thickness: {0}\".format(len(df.loc[df['thickness'] == 0])))\nprint(\"# rows missing insulin: {0}\".format(len(df.loc[df['insulin'] == 0])))\nprint(\"# rows missing bmi: {0}\".format(len(df.loc[df['bmi'] == 0])))\nprint(\"# rows missing diab_pred: {0}\".format(len(df.loc[df['diab_pred'] == 0])))\nprint(\"# rows missing age: {0}\".format(len(df.loc[df['age'] == 0])))",
"# rows in dataframe 768\n# rows missing glucose_conc: 5\n# rows missing diastolic_bp: 35\n# rows missing thickness: 227\n# rows missing insulin: 374\n# rows missing bmi: 11\n# rows missing diab_pred: 0\n# rows missing age: 0\n"
]
],
[
[
"Impute missing values",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import Imputer",
"_____no_output_____"
],
[
"fill_0 = Imputer(missing_values=0, strategy='mean', axis=0)",
"_____no_output_____"
],
[
"X_train = fill_0.fit_transform(X_train)\nX_test = fill_0.fit_transform(X_test)",
"_____no_output_____"
]
],
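[
[
"# Optional sketch (assumption: a scikit-learn release >= 0.22, where Imputer was removed).\n# SimpleImputer is the closest replacement for the zero-value imputation above;\n# here transform() is used on the test split so its fill values come only from the training data.\nfrom sklearn.impute import SimpleImputer\n\nfill_0 = SimpleImputer(missing_values=0, strategy='mean')\nX_train = fill_0.fit_transform(X_train)\nX_test = fill_0.transform(X_test)",
"_____no_output_____"
]
],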
[
[
" Use Naive Bayes algorithm to train models",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB",
"_____no_output_____"
],
[
"nb_model = GaussianNB()",
"_____no_output_____"
],
[
"nb_model.fit(X_train, y_train.ravel())",
"_____no_output_____"
]
],
[
[
"### Performance on Training Data",
"_____no_output_____"
]
],
[
[
"nb_predict_train = nb_model.predict(X_train)\n",
"_____no_output_____"
],
[
"from sklearn import metrics",
"_____no_output_____"
],
[
"print('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_train, nb_predict_train)))\nprint()",
"Accuracy : 0.7542\n\n"
]
],
[
[
"### Performance on Testing Data",
"_____no_output_____"
]
],
[
[
"nb_predict_test = nb_model.predict(X_test)",
"_____no_output_____"
],
[
"print('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_test, nb_predict_test)))\nprint()",
"Accuracy : 0.7359\n\n"
]
],
[
[
"#### Naive Bayes Metrics",
"_____no_output_____"
]
],
[
[
"print(\"Confusion Matrix\")\nprint(\"{0}\".format(metrics.confusion_matrix(y_test, nb_predict_test)))\nprint(\"\")\n\nprint(\"Classification Report\")\nprint(metrics.classification_report(y_test, nb_predict_test))",
"Confusion Matrix\n[[118 33]\n [ 28 52]]\n\nClassification Report\n precision recall f1-score support\n\n 0 0.81 0.78 0.79 151\n 1 0.61 0.65 0.63 80\n\navg / total 0.74 0.74 0.74 231\n\n"
]
],
[
[
"#### Random Forest\n",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier",
"_____no_output_____"
],
[
"rf_model = RandomForestClassifier(random_state=42)",
"_____no_output_____"
],
[
"rf_model.fit(X_train, y_train.ravel())",
"_____no_output_____"
]
],
[
[
"##### Prediction on Training Data",
"_____no_output_____"
]
],
[
[
"rf_predict_train = rf_model.predict(X_train)",
"_____no_output_____"
],
[
"print('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_train, rf_predict_train)))\nprint()",
"Accuracy : 0.9870\n\n"
]
],
[
[
"##### Prediction on Test Data",
"_____no_output_____"
]
],
[
[
"rf_predict_test = rf_model.predict(X_test)\nprint('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_test, rf_predict_test)))\nprint()",
"Accuracy : 0.7100\n\n"
]
],
[
[
"#### Random Forest Metrics",
"_____no_output_____"
]
],
[
[
"print(\"Confusion Matrix\")\nprint(\"{0}\".format(metrics.confusion_matrix(y_test, rf_predict_test)))\nprint(\"\")\n\nprint(\"Classification Report\")\nprint(metrics.classification_report(y_test, rf_predict_test))",
"Confusion Matrix\n[[121 30]\n [ 37 43]]\n\nClassification Report\n precision recall f1-score support\n\n 0 0.77 0.80 0.78 151\n 1 0.59 0.54 0.56 80\n\navg / total 0.70 0.71 0.71 231\n\n"
]
],
[
[
"#### Logistic Regression",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"lr_model = LogisticRegression(C=0.7, random_state=42)",
"_____no_output_____"
],
[
"lr_model = lr_model.fit(X_train, y_train.ravel())\nlr_predict_test = lr_model.predict(X_test)\n\nprint('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_test, lr_predict_test)))\nprint()",
"Accuracy : 0.7446\n\n"
]
],
[
[
"##### Logistic Regression Metrics",
"_____no_output_____"
]
],
[
[
"print(\"Confusion Matrix\")\nprint(\"{0}\".format(metrics.confusion_matrix(y_test, lr_predict_test)))\nprint(\"\")\n\nprint(\"Classification Report\")\nprint(metrics.classification_report(y_test, lr_predict_test))",
"Confusion Matrix\n[[128 23]\n [ 36 44]]\n\nClassification Report\n precision recall f1-score support\n\n 0 0.78 0.85 0.81 151\n 1 0.66 0.55 0.60 80\n\navg / total 0.74 0.74 0.74 231\n\n"
]
],
[
[
"###### Regularize",
"_____no_output_____"
]
],
[
[
"C_start = 0.1\nC_end = 5\nC_inc = 0.1\n\nC_values, recall_scores = [], []\n\nC_val = C_start\nbest_recall_score = 0\nwhile (C_val < C_end):\n C_values.append(C_val)\n lr_model_loop = LogisticRegression(C=C_val, random_state=42, solver='liblinear')\n lr_model_loop.fit(X_train, y_train.ravel())\n lr_predict_loop_test = lr_model_loop.predict(X_test)\n recall_score = metrics.recall_score(y_test, lr_predict_loop_test)\n recall_scores.append(recall_score)\n if (recall_score > best_recall_score):\n best_recall_score = recall_score\n best_lr_predict_test = lr_predict_loop_test\n \n C_val = C_val + C_inc\n\nbest_score_C_val = C_values[recall_scores.index(best_recall_score)]\nprint(\"1st max value of {0:.3f} occured at C={1:.3f}\".format(best_recall_score, best_score_C_val))\n\n%matplotlib inline \nplt.plot(C_values, recall_scores, \"-\")\nplt.xlabel(\"C value\")\nplt.ylabel(\"recall score\")",
"1st max value of 0.613 occured at C=1.400\n"
]
],
[
[
"#### Logistic Regression with balanced weight class\n",
"_____no_output_____"
]
],
[
[
"C_start = 0.1\nC_end = 5\nC_inc = 0.1\n\nC_values, recall_scores = [], []\n\nC_val = C_start\nbest_recall_score = 0\nwhile (C_val < C_end):\n C_values.append(C_val)\n lr_model_loop = LogisticRegression(C=C_val, class_weight=\"balanced\", random_state=42, solver='liblinear', max_iter=10000)\n lr_model_loop.fit(X_train, y_train.ravel())\n lr_predict_loop_test = lr_model_loop.predict(X_test)\n recall_score = metrics.recall_score(y_test, lr_predict_loop_test)\n recall_scores.append(recall_score)\n if (recall_score > best_recall_score):\n best_recall_score = recall_score\n best_lr_predict_test = lr_predict_loop_test\n \n C_val = C_val + C_inc\n\nbest_score_C_val = C_values[recall_scores.index(best_recall_score)]\nprint(\"1st max value of {0:.3f} occured at C={1:.3f}\".format(best_recall_score, best_score_C_val))\n\n%matplotlib inline \nplt.plot(C_values, recall_scores, \"-\")\nplt.xlabel(\"C value\")\nplt.ylabel(\"recall score\")",
"1st max value of 0.738 occured at C=0.300\n"
],
[
"lr_model = LogisticRegression(class_weight='balanced',C=best_score_C_val, random_state=42)",
"_____no_output_____"
],
[
"lr_model.fit(X_train, y_train.ravel())\nlr_predict_test = lr_model.predict(X_test)\n\nprint('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_test, lr_predict_test)))\nprint()",
"Accuracy : 0.7143\n\n"
]
],
[
[
"##### Metrics",
"_____no_output_____"
]
],
[
[
"print(\"Confusion Matrix\")\nprint(\"{0}\".format(metrics.confusion_matrix(y_test, lr_predict_test)))\nprint(\"\")\n\nprint(\"Classification Report\")\nprint(metrics.classification_report(y_test, lr_predict_test))",
"Confusion Matrix\n[[106 45]\n [ 21 59]]\n\nClassification Report\n precision recall f1-score support\n\n 0 0.83 0.70 0.76 151\n 1 0.57 0.74 0.64 80\n\navg / total 0.74 0.71 0.72 231\n\n"
],
[
"print(metrics.recall_score(y_test, lr_predict_test))",
"0.7375\n"
]
],
[
[
"#### Logistic Regression with Cross-Validation",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegressionCV",
"_____no_output_____"
],
[
"lr_model_cv = LogisticRegressionCV(n_jobs = -1 ,class_weight='balanced',Cs=3,cv=10,refit=False, random_state=42)\n\nlr_model_cv.fit(X_train, y_train.ravel())\n",
"_____no_output_____"
],
[
"lr_cv_predict_test = lr_model_cv.predict(X_test)\n\nprint('Accuracy : {0:.4f}'.format(metrics.accuracy_score(y_test, lr_cv_predict_test)))\nprint()",
"Accuracy : 0.7013\n\n"
]
],
[
[
"##### Metrics",
"_____no_output_____"
]
],
[
[
"print(\"Confusion Matrix\")\nprint(\"{0}\".format(metrics.confusion_matrix(y_test, lr_cv_predict_test)))\nprint(\"\")\n\nprint(\"Classification Report\")\nprint(metrics.classification_report(y_test, lr_cv_predict_test))",
"Confusion Matrix\n[[108 43]\n [ 26 54]]\n\nClassification Report\n precision recall f1-score support\n\n 0 0.81 0.72 0.76 151\n 1 0.56 0.68 0.61 80\n\navg / total 0.72 0.70 0.71 231\n\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bd58eae52d4a436bc4bb2eba391487d891e24d | 7,465 | ipynb | Jupyter Notebook | 2018_06_14_Inclass_Send_Email_SMTP_Google_Email.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | 4 | 2018-05-30T10:39:47.000Z | 2018-11-10T15:39:53.000Z | 2018_06_14_Inclass_Send_Email_SMTP_Google_Email.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | null | null | null | 2018_06_14_Inclass_Send_Email_SMTP_Google_Email.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | null | null | null | 19.906667 | 88 | 0.489484 | [
[
[
"import smtplib, os, pickle\nfrom email import encoders\nfrom email.mime.text import MIMEText\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.base import MIMEBase",
"_____no_output_____"
],
[
"toAddr = ['[email protected]']",
"_____no_output_____"
],
[
"# 비밀번호 저장\npw = 'Jong910218!'",
"_____no_output_____"
],
[
"pickle.dump(pw, open('pw.pickle', 'wb'))",
"_____no_output_____"
],
[
"# Access smtp server\nsmtp = smtplib.SMTP('smtp.gmail.com', 587)\nsmtp.ehlo()\nsmtp.starttls()\nsmtp.login('[email protected]', pickle.load(open('pw.pickle', 'rb')))",
"_____no_output_____"
],
[
"# make msg\nmsg = MIMEMultipart()\nmsg['subject'] = 'SMTP 전송 테스트(제목)'",
"_____no_output_____"
],
[
"part = MIMEText(\"SMTP TEXT 메시지 보내기\")\nmsg.attach(part)",
"_____no_output_____"
],
[
"msg # message에는 제목, 본문이 추가됨",
"_____no_output_____"
],
[
"for addr in toAddr:\n msg['To'] = addr\n # (보내는 사람 메일, 받는 사람 메일, 메세지)\n smtp.sendmail('[email protected]', addr, msg.as_string())",
"_____no_output_____"
],
[
"# close smtp object\nsmtp.quit()",
"_____no_output_____"
]
],
[
[
"#### send html",
"_____no_output_____"
]
],
[
[
"def get_smtp():\n smtp = smtplib.SMTP('smtp.gmail.com', 587)\n smtp.ehlo()\n smtp.starttls()\n smtp.login('[email protected]', pickle.load( open('pw.pickle', 'rb')))\n return smtp",
"_____no_output_____"
],
[
"smtp = get_smtp()\nsmtp",
"_____no_output_____"
],
[
"msg = MIMEMultipart()\n\n#제목\nmsg['subject'] = 'HTML 전송 테스트'\n\n# 내용\npart = MIMEText(\"Html 코드 보내기\")\nmsg.attach(part)\n\n# HTML\npart_html = MIMEText('<br><a href=\"http://www.fastcampus.co.kr/\">\\\n패스트캠퍼스</a>', 'html')\nmsg.attach(part_html)\n\nmsg # 제목, 내용, html",
"_____no_output_____"
],
[
"# 전송\ndef send_mails(smtp, toAddr, msg):\n for addr in toAddr:\n msg['To'] = addr\n smtp.sendmail('[email protected]', addr, msg.as_string())\n smtp.quit()",
"_____no_output_____"
],
[
"smtp, toAddr, msg",
"_____no_output_____"
],
[
"send_mails(smtp, toAddr, msg)",
"_____no_output_____"
]
],
[
[
"#### Send File",
"_____no_output_____"
]
],
[
[
"smtp = get_smtp()\nmsg = MIMEMultipart()\nmsg['subject'] = 'Send file test'\npart = MIMEText('파일 전송 테스트')\nmsg.attach(part)",
"_____no_output_____"
],
[
"# file attach - pdf\nfile_name = 'asdv.png'\nmaintype, subtype = 'application', 'octet-steam'\n\nwith open(file_name, 'rb') as fp:\n part = MIMEBase(maintype, subtype)\n part.set_payload(fp.read()) # part에 mp4 파일이 들어감\n encoders.encode_base64(part) # base64 encoding\n \npart.add_header('Content-Disposition', 'attachment', filename = file_name)\nmsg.attach(part)",
"_____no_output_____"
],
[
"msg # 본문, 패일",
"_____no_output_____"
],
[
"send_mails(smtp, toAddr, msg)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0bd597e7385eaa51334dd3f7d64b131333775b3 | 26,207 | ipynb | Jupyter Notebook | tutorials/streamlit_notebooks/healthcare/NER_LEGAL_DE.ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 687 | 2018-09-07T03:45:39.000Z | 2022-03-20T17:11:20.000Z | tutorials/streamlit_notebooks/healthcare/NER_LEGAL_DE.ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 89 | 2018-09-18T02:04:42.000Z | 2022-02-24T18:22:27.000Z | tutorials/streamlit_notebooks/healthcare/NER_LEGAL_DE.ipynb | hatrungduc/spark-nlp-workshop | 4a4ec0195d1d3d847261df9ef2df7aa5f95bbaec | [
"Apache-2.0"
] | 407 | 2018-09-07T03:45:44.000Z | 2022-03-20T05:12:25.000Z | 56.480603 | 6,937 | 0.648415 | [
[
[
"\n\n\n\n[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_LEGAL_DE.ipynb)\n\n\n",
"_____no_output_____"
],
[
"# **Detect legal entities in German**",
"_____no_output_____"
],
[
"To run this yourself, you will need to upload your license keys to the notebook. Just Run The Cell Below in order to do that. Also You can open the file explorer on the left side of the screen and upload `license_keys.json` to the folder that opens.\nOtherwise, you can look at the example outputs at the bottom of the notebook.\n\n",
"_____no_output_____"
],
[
"## 1. Colab Setup",
"_____no_output_____"
],
[
"Import license keys",
"_____no_output_____"
]
],
[
[
"import os\nimport json\n\nfrom google.colab import files\n\nlicense_keys = files.upload()\n\nwith open(list(license_keys.keys())[0]) as f:\n license_keys = json.load(f)\n\nsparknlp_version = license_keys[\"PUBLIC_VERSION\"]\njsl_version = license_keys[\"JSL_VERSION\"]\n\nprint ('SparkNLP Version:', sparknlp_version)\nprint ('SparkNLP-JSL Version:', jsl_version)",
"_____no_output_____"
]
],
[
[
"Install dependencies",
"_____no_output_____"
]
],
[
[
"%%capture\nfor k,v in license_keys.items(): \n %set_env $k=$v\n\n!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh\n!bash jsl_colab_setup.sh\n\n# Install Spark NLP Display for visualization\n!pip install --ignore-installed spark-nlp-display",
"_____no_output_____"
]
],
[
[
"Import dependencies into Python and start the Spark session",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom pyspark.ml import Pipeline\nfrom pyspark.sql import SparkSession\nimport pyspark.sql.functions as F\n\nimport sparknlp\nfrom sparknlp.annotator import *\nfrom sparknlp_jsl.annotator import *\nfrom sparknlp.base import *\nimport sparknlp_jsl\n\nspark = sparknlp_jsl.start(license_keys['SECRET'])\n\n# manually start session\n# params = {\"spark.driver.memory\" : \"16G\",\n# \"spark.kryoserializer.buffer.max\" : \"2000M\",\n# \"spark.driver.maxResultSize\" : \"2000M\"}\n\n# spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)",
"_____no_output_____"
]
],
[
[
"## 2. Construct the pipeline\n\nFor more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare",
"_____no_output_____"
]
],
[
[
"document_assembler = DocumentAssembler() \\\n .setInputCol('text')\\\n .setOutputCol('document')\n\nsentence_detector = SentenceDetector() \\\n .setInputCols(['document'])\\\n .setOutputCol('sentence')\n\ntokenizer = Tokenizer()\\\n .setInputCols(['sentence']) \\\n .setOutputCol('token')\n\n# German word embeddings\nword_embeddings = WordEmbeddingsModel.pretrained('w2v_cc_300d','de', 'clinical/models') \\\n .setInputCols([\"sentence\", 'token'])\\\n .setOutputCol(\"embeddings\")\n\n# German NER model\nclinical_ner = MedicalNerModel.pretrained('ner_legal','de', 'clinical/models') \\\n .setInputCols([\"sentence\", \"token\", \"embeddings\"]) \\\n .setOutputCol(\"ner\")\n\nner_converter = NerConverter()\\\n .setInputCols(['sentence', 'token', 'ner']) \\\n .setOutputCol('ner_chunk')\n\nnlp_pipeline = Pipeline(stages=[\n document_assembler, \n sentence_detector,\n tokenizer,\n word_embeddings,\n clinical_ner,\n ner_converter])",
"w2v_cc_300d download started this may take some time.\nApproximate size to download 1.2 GB\n[OK!]\nner_legal download started this may take some time.\nApproximate size to download 14.3 MB\n[OK!]\n"
]
],
[
[
"## 3. Create example inputs",
"_____no_output_____"
]
],
[
[
"# Enter examples as strings in this array\ninput_list = [\n \"\"\"Dementsprechend hat der Bundesgerichtshof mit Beschluss vom 24 August 2017 ( - III ZA 15/17 - ) das bei ihm von der Antragstellerin anhängig gemachte „ Prozesskostenhilfeprüfungsverfahre“ an das Bundesarbeitsgericht abgegeben. 2 Die Antragstellerin hat mit Schriftsatz vom 21 März 2016 und damit mehr als sechs Monate vor der Anbringung des Antrags auf Gewährung von Prozesskostenhilfe für die beabsichtigte Klage auf Entschädigung eine Verzögerungsrüge iSv § 198 Abs 5 Satz 1 GVG erhoben. 3 Nach § 198 Abs 1 Satz 1 GVG wird angemessen entschädigt , wer infolge unangemessener Dauer eines Gerichtsverfahrens als Verfahrensbeteiligter einen Nachteil erleidet. a ) Die Angemessenheit der Verfahrensdauer richtet sich gemäß § 198 Abs 1 Satz 2 GVG nach den Umständen des Einzelfalls , insbesondere nach der Schwierigkeit und Bedeutung des Verfahrens sowie nach dem Verhalten der Verfahrensbeteiligten und Dritter. Hierbei handelt es sich um eine beispielhafte , nicht abschließende Auflistung von Umständen , die für die Beurteilung der Angemessenheit besonders bedeutsam sind ( BT-Drs 17/3802 S 18 ). Weitere gewichtige Beurteilungskriterien sind die Verfahrensführung durch das Gericht sowie die zur Verfahrensbeschleunigung gegenläufigen Rechtsgüter der Gewährleistung der inhaltlichen Richtigkeit von Entscheidungen , der Beachtung der richterlichen Unabhängigkeit und des gesetzlichen Richters.\"\"\",\n]",
"_____no_output_____"
]
],
[
[
"## 4. Use the pipeline to create outputs",
"_____no_output_____"
]
],
[
[
"empty_df = spark.createDataFrame([['']]).toDF('text')\npipeline_model = nlp_pipeline.fit(empty_df)\ndf = spark.createDataFrame(pd.DataFrame({'text': input_list}))\nresult = pipeline_model.transform(df)",
"_____no_output_____"
]
],
[
[
"## 5. Visualize results",
"_____no_output_____"
],
[
"Visualize outputs as data frame",
"_____no_output_____"
]
],
[
[
"from sparknlp_display import NerVisualizer\n\nNerVisualizer().display(\n result = result.collect()[0],\n label_col = 'ner_chunk',\n document_col = 'document'\n)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0bd5aa881dd73d8be04edccd2443a58085dc5e1 | 12,888 | ipynb | Jupyter Notebook | COAD-DRD/scripts/full_pivotTable.ipynb | muntisa/muntisa.github.io | 95f5ab99bf4d17364b9be5c8cfb1b955bbd25586 | [
"CC0-1.0"
] | null | null | null | COAD-DRD/scripts/full_pivotTable.ipynb | muntisa/muntisa.github.io | 95f5ab99bf4d17364b9be5c8cfb1b955bbd25586 | [
"CC0-1.0"
] | null | null | null | COAD-DRD/scripts/full_pivotTable.ipynb | muntisa/muntisa.github.io | 95f5ab99bf4d17364b9be5c8cfb1b955bbd25586 | [
"CC0-1.0"
] | null | null | null | 29.833333 | 102 | 0.376164 | [
[
[
"# Pivot table with dynamic plots for COAD",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nfrom pivottablejs import pivot_ui",
"_____no_output_____"
],
[
"# get data\ndatabase = \"../db/dbCOAD-DRD.csv\"",
"_____no_output_____"
],
[
"df = pd.read_csv(database)\ndf.head()",
"_____no_output_____"
],
[
"df.columns",
"_____no_output_____"
],
[
"# use other order of columns\ndf_repurposing = df[['AE', 'HGNC_symbol', 'DrugName', 'ProteinID', 'DrugCID', 'Drug']]\ndf_repurposing",
"_____no_output_____"
],
[
"#df.to_json(r'all.json')",
"_____no_output_____"
],
[
"# export to HTML\npivot_ui(df, rows=[\"HGNC_symbol\"], outfile_path=\"All_pivotTable.html\")",
"_____no_output_____"
]
],
[
[
"Manually edit the resulted HTML by replacing this part:",
"_____no_output_____"
]
],
[
[
"<body>\n <script type=\"text/javascript\">\n $(function(){\n\t\t\t\n\t\t\tvar derivers = $.pivotUtilities.derivers;\n\t\t\tvar renderers = $.extend($.pivotUtilities.renderers,$.pivotUtilities.plotly_renderers);\n\t\t\t\n if(window.location != window.parent.location)\n $(\"<a>\", {target:\"_blank\", href:\"\"})\n .text(\"[pop out]\").prependTo($(\"body\"));\n $(\"#output\").pivotUI(\n $.csv.toArrays($(\"#output\").text()),\n $.extend({\n renderers: $.extend(\n $.pivotUtilities.renderers,\n $.pivotUtilities.c3_renderers,\n $.pivotUtilities.d3_renderers,\n $.pivotUtilities.export_renderers,\n ),\n hiddenAttributes: [\"\"]\n },\n\t\t\t\t\t\n\t\t\t\t\t{rows : [\"Target\"],\n\t\t\t\t\t filter : (function(r){ return r[\"Target\"] != null }),\n\t\t\t\t\t renderers: renderers,\n\t\t\t\t\t rendererName : \"Bar Chart\",\n\t\t\t\t\t rowOrder: \"value_a_to_z\"\n\t\t\t\t\t})\n ).show();\n });\n </script>\n <div id=\"output\" style=\"display: none;\">",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bd5c5fb7a9cbfb36335f27cb3c9a342599ae17 | 5,468 | ipynb | Jupyter Notebook | notebooks/Models.ipynb | ursueugen/meth2expr | b17e64fe14d7a510d5aa74fdf7c14de74c4302bd | [
"MIT"
] | null | null | null | notebooks/Models.ipynb | ursueugen/meth2expr | b17e64fe14d7a510d5aa74fdf7c14de74c4302bd | [
"MIT"
] | null | null | null | notebooks/Models.ipynb | ursueugen/meth2expr | b17e64fe14d7a510d5aa74fdf7c14de74c4302bd | [
"MIT"
] | null | null | null | 28.778947 | 113 | 0.497805 | [
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.utils.data import Dataset, DataLoader\nfrom torch.utils.tensorboard import SummaryWriter\ntorch.manual_seed(42)",
"_____no_output_____"
],
[
"class RNNRegressor(nn.Module):\n\n def __init__(self, input_dim, hidden_dim, output_dim):\n super(RNNRegressor, self).__init__()\n self.input_dim = input_dim\n self.hidden_dim = hidden_dim\n self.output_dim = output_dim\n self.i2h = nn.Linear(input_dim + hidden_dim, hidden_dim)\n self.i2o = nn.Linear(input_dim + hidden_dim, output_dim)\n self.tanh = nn.Tanh()\n \n def forward(self, input, hidden):\n \n #input = input.view(-1, input.shape[1])\n # TODO: Modify code to accept batch tensors <loc, batch_nr, index>. Can add init hidden in here.\n \n combined = torch.cat((input, hidden), 1)\n hidden = self.i2h(combined)\n output = self.i2o(combined)\n output = self.tanh(output)\n\n return output, hidden\n \n def initHidden(self):\n return torch.zeros(1, self.hidden_dim)\n\nHIDDEN_DIM = 10\nrnn = RNNRegressor(train_set.vocab_size, HIDDEN_DIM, 1)\n\n\n\nwriter = SummaryWriter('runs/RNN_playground')\nwriter.add_graph(rnn, (seq2tensor(\"ACGTT\")[1], torch.zeros(1, HIDDEN_DIM)))\nwriter.close()",
"_____no_output_____"
],
[
"criterion = nn.MSELoss()\noptimizer = optim.SGD(rnn.parameters(), lr=0.005)\n\n\ndef train(output_tensor, seq_tensor):\n ''''''\n \n hidden = rnn.initHidden()\n \n # zero the grad buffers\n rnn.zero_grad()\n optimizer.zero_grad()\n \n # forward pass\n for i in range(seq_tensor.shape[0]):\n output, hidden = rnn(seq_tensor[i], hidden)\n \n # compute loss and backward pass\n loss = criterion(output, output_tensor)\n loss.backward()\n \n # update params\n optimizer.step()\n \n return output, loss.item()\n\ntrain(torch.tensor([-0.0117]), seq2tensor(\"ACGTN\"))\nprint(seq2tensor(\"ACGTN\").shape, torch.tensor([-0.0118]).shape)",
"_____no_output_____"
],
[
"NUM_EPOCHS = 5\n\n# TODO: not really minibatch for now\nfor epoch in range(NUM_EPOCHS):\n \n running_loss = 0.0\n for i, data in enumerate(train_loader, 0):\n \n # get inputs\n # TODO: Account for dtypes, otherwise get incompatible!\n seq_tensor, expr = data\n seq_tensor = seq_tensor[0]\n expr = expr[0].view(1)\n expr = expr.type(torch.FloatTensor) \n \n # train on example\n output, loss = train(expr[0], seq_tensor)\n \n # print statistics\n running_loss += loss\n if i % 2000 == 1999: # print every 2000 mini-batches\n print('[%d, %5d] loss: %.3f' %\n (epoch + 1, i + 1, running_loss / 2000))\n running_loss = 0.0\n\nprint('Finished Training')\n\n# save model\nPATH = './models/test_model'\ntorch.save(rnn.state_dict(), PATH)",
"_____no_output_____"
],
[
"# Predict loop\n\n# correct = 0\n# total = 0\n# with torch.no_grad():\n# for data in testloader:\n# images, labels = data\n# outputs = net(images)\n# _, predicted = torch.max(outputs.data, 1)\n# total += labels.size(0)\n# correct += (predicted == labels).sum().item()\n\n# print('Accuracy of the network on the 10000 test images: %d %%' % (\n# 100 * correct / total))",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0bd5c71ed190198a5fdcde08e066618f65982d7 | 209,922 | ipynb | Jupyter Notebook | w2/w2-Day_2/W2D2-Pandas-21-09-21/.ipynb_checkpoints/Intro_to_Pandas_Student_COPY-filled-checkpoint.ipynb | bmskarate/lighthouseMain | b2434f14f1378b89085d59f896c44eda5f74eecc | [
"MIT"
] | null | null | null | w2/w2-Day_2/W2D2-Pandas-21-09-21/.ipynb_checkpoints/Intro_to_Pandas_Student_COPY-filled-checkpoint.ipynb | bmskarate/lighthouseMain | b2434f14f1378b89085d59f896c44eda5f74eecc | [
"MIT"
] | null | null | null | w2/w2-Day_2/W2D2-Pandas-21-09-21/.ipynb_checkpoints/Intro_to_Pandas_Student_COPY-filled-checkpoint.ipynb | bmskarate/lighthouseMain | b2434f14f1378b89085d59f896c44eda5f74eecc | [
"MIT"
] | null | null | null | 31.05355 | 397 | 0.349182 | [
[
[
"## Data Wrangling with Python: Intro to Pandas\nNote: Notebook adapted from [here](https://github.com/EricElmoznino/lighthouse_pandas_tutorial/blob/master/pandas_tutorial.ipynb) & [here](https://github.com/sedv8808/LighthouseLabs/tree/main/W02D2) & from LHL's [21 Day Data Challenge](https://data-challenge.lighthouselabs.ca/start)\n#### Instructor: Andrew Berry\n#### Date: Aug 24, 2021\n\n**Agenda:**\n - Why Pandas?\n - Pandas Basics\n - Pandas Series vs. Pandas DataFrames\n - .loc() vs. iloc()\n - Pandas Advance\n - Filtering\n - Group bys\n - Pandas Exercises\n - Challenge 1\n - Challenge 2\n\n### Pandas: Why Pandas? What is it? ",
"_____no_output_____"
],
[
"To do data anlaysis with Python, Pandas is a great tool to for dealing with data in a tabular and time series formats. Designed by Wes McKinney as an attempt to port R's dataframes to python. \n\n- Python Package for working with **tables**\n- Similar to SQL & Excel\n - Faster\n - More features to manipulate, transform, and aggregate data\n- Easy to handle messy and missing data\n- Great at working with large data files\n- When combing with other Python libraries, it's fairly easy to create bautiful and customazied visuals. Easy integration with Matplotlib, Seaborn, Plotly.\n- Easy integration with machine learning plugins (sckit-learn)\n \n \n-----------\nTo read more about, Wes McKinney, the creator of Pandas, check out the article below.\n\n1. https://qz.com/1126615/the-story-of-the-most-important-tool-in-data-science/\n\n--------------\n",
"_____no_output_____"
],
[
"## Think of how we would try to represent a table in Python?\n",
"_____no_output_____"
]
],
[
[
"#A dicitonary of lists example\nstudents = {\n 'student_id': [1, 2, 3, 4,5,6],\n 'name': ['Daenerys', 'Jon', 'Arya', 'Sansa', 'Eddard', 'Khal Drogo'],\n 'course_mark': [82, 100, 12, 76, 46, 20],\n 'species': ['cat', 'human', 'cat', 'human', 'human', 'human']\n}",
"_____no_output_____"
]
],
[
[
"**What are some operations we might want to do on this data?**\n\n- 1.Select a subset of columns\n- 2.Filter out some rows based on an attribute\n- 3.Group by some attribute\n- 4.Compute some aggregate values within groups\n- 5.Save to a file\n\nHow about we try out one of these to see how easy it is",
"_____no_output_____"
],
[
"### Try to return a table with the mean course mark per-species.",
"_____no_output_____"
]
],
[
[
"# Return a table with the mean course mark per-species\n# Think about a SQL statment where we group by species with the average course mark\n\nspecies_sums = {} #Tables of Sums\nspecies_counts = {} #Count per Species\nfor i in range(len(students['species'])): #iterating over the rows\n species = students['species'][i] #every row number I get species \n course_mark = students['course_mark'][i] # and course mark\n if species not in species_sums: #Intializing Species if not in list\n species_sums[species] = 0\n species_counts[species] = 0\n species_sums[species] += course_mark #Add each course mark for each species\n species_counts[species] += 1 \n\nspecies_means = {}\n \nfor species in species_sums: # for every unique species we found\n species_means[species] = species_sums[species] / species_counts[species] #sum/count\n\nspecies_means #return",
"_____no_output_____"
]
],
[
[
"- Did you like looking at is? Does this look fun to do?\n- Super Tiring. ",
"_____no_output_____"
],
[
"## Pandas Version",
"_____no_output_____"
]
],
[
[
"# Pandas Version\nimport pandas as pd\n\n# Can take in a dictionry of list to instatiate a DataFrame\nstudents = pd.DataFrame(students) \nstudents",
"_____no_output_____"
],
[
"species_means = students[['species', 'course_mark']].groupby('species').mean()\n#species_means = students.groupby('species')['course_mark'].mean()\nspecies_means",
"_____no_output_____"
]
],
[
[
"### Dissecting the above code!",
"_____no_output_____"
]
],
[
[
"#Step 1: Filter out the columns we want to keep\nstudents_filtered = students[['species','course_mark']]\nstudents_filtered",
"_____no_output_____"
],
[
"# Step 2: Group by species column\nstudents_grouped_by_species = students_filtered.groupby('species') \nstudents_grouped_by_species",
"_____no_output_____"
],
[
"#Step 3: Specify how to aggregate the course-mark column\nspecies_means = students_grouped_by_species.mean()",
"_____no_output_____"
],
[
"species_means",
"_____no_output_____"
]
],
[
[
"#### As shown, Pandas makes use of vectorized operations. ",
"_____no_output_____"
],
[
"\n- Rather than use for-loops, we specify the operation that will apply to the structure as a whole (i.e. all the rows)\n- By vectorizing, **the code becomes more concise and more readable**\n- Pandas is optimized for vectorized operations (parallel vs. serial computation), which makes them **much faster**\n- It is almost always possible to vectorize operations on Pandas data types\n",
"_____no_output_____"
],
[
"### Getting Started: Pandas Series & Pandas DataFrames\n\nThere are two Pandas data types of interest:\n\n- Series (column)\n - A pandas series is similar to an array but it has an index. The index is constant, and doesnt change through the operations we apply to the series. \n- DataFrame (table)\n - A pandas dataframe is an object that is similar to a collection of pandas series.",
"_____no_output_____"
]
],
[
[
"# One way to construct a Series\nseries = pd.Series([82, 100, 12, 76, 46, 20]) \nseries",
"_____no_output_____"
],
[
"#We can specify some index when building a series. \ngrades = pd.Series([82, 100, 12, 76, 46, 20], \n index = ['Daenerys', 'Jon', 'Arya', 'Sansa', 'Eddard', 'Khal Drogo'] ) \n\ngrades",
"_____no_output_____"
],
[
"grades['Daenerys']\n",
"_____no_output_____"
],
[
"grades[0]",
"_____no_output_____"
],
[
"print(\"The values:\", grades.values)\nprint(\"The indexes:\", grades.index)",
"The values: [ 82 100 12 76 46 20]\nThe indexes: Index(['Daenerys', 'Jon', 'Arya', 'Sansa', 'Eddard', 'Khal Drogo'], dtype='object')\n"
]
],
[
[
"**Note:** The underlying index is still 0, 1, 2, 3.... and we can still index on that:",
"_____no_output_____"
]
],
[
[
"grades[2]",
"_____no_output_____"
]
],
[
[
"### Pandas DataFrames",
"_____no_output_____"
]
],
[
[
"# One way to construct a DataFrame\ndf = pd.DataFrame({\n 'name': ['Daenerys', 'Jon', 'Arya', 'Sansa'],\n 'course_mark': [82, 100, 12, 76],\n 'species': ['human', 'human', 'cat', 'human']},\n index=[1412, 94, 9351, 14])\ndf",
"_____no_output_____"
]
],
[
[
"#### Reading a CSV file\n\nWe'll use the function `read_csv()` to load the data into our notebook\n\n- The `read_csv()` function can read data from a locally saved file or from a URL\n- We'll store the data as a variable `df_pokemon`",
"_____no_output_____"
]
],
[
[
"df_pokemon = pd.read_csv('pokemon.csv')",
"_____no_output_____"
],
[
"df_pokemon",
"_____no_output_____"
]
],
[
[
"**What do we see here?**\n- Each row of the table is an observation, containing data of a single pokemon",
"_____no_output_____"
]
],
[
[
"df_pokemon.shape",
"_____no_output_____"
]
],
[
[
"For large DataFrames, it's often useful to display just the first few or last few rows:",
"_____no_output_____"
]
],
[
[
"#pd.options.display.max_rows = 15\ndf_pokemon.head(10)",
"_____no_output_____"
],
[
"df_pokemon.tail(10)",
"_____no_output_____"
],
[
"#df_pokemon.head?",
"_____no_output_____"
]
],
[
[
"> **Pro tip:**\n> - To display the documentation for this method within Jupyter notebook, you can run the command `df_pokemon.head?` or press `Shift-Tab` within the parentheses of `df_pokemon.head()`\n> - To see other methods available for the DataFrame, type `df_pokemon.` followed by `Tab` for auto-complete options ",
"_____no_output_____"
],
[
"## Data at a Glance\n\n`pandas` provides many ways to quickly and easily summarize your data:\n- How many rows and columns are there?\n- What are all the column names and what type of data is in each column?\n- How many values are missing in each column or row?\n- Numerical data: What is the average and range of the values?\n- Text data: What are the unique values and how often does each occur?",
"_____no_output_____"
],
[
"### Peeking into the pokemon dataset\n\n- Similar with getting familar with SQL tables, it is often a great idea to look at the pandas dataframes we are working with. Below are some of the basic methods to glance at a dataset. ",
"_____no_output_____"
]
],
[
[
"#Getting the Columns\ndf_pokemon.columns\n\nlist_of_column = ['#', 'Name', 'Type 1', 'Type 2', 'Total', 'HP', 'Attack', 'Defense',\n 'Sp. Atk', 'Sp. Def', 'Speed', 'Generation', 'Legendary']",
"_____no_output_____"
],
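[
"# Optional sketch: .info() and .dtypes answer the \"what type of data is in each\n# column\" question above in one call (row count, non-null counts, and dtypes).\ndf_pokemon.info()\ndf_pokemon.dtypes",
"_____no_output_____"
],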
[
"#Getting Summary Statistics\ndf_pokemon.describe()",
"_____no_output_____"
],
[
"df_pokemon[[\"Total\",\"Attack\"]].describe()",
"_____no_output_____"
],
[
"df_pokemon",
"_____no_output_____"
],
[
"list_columns = ['Name', 'Defense',\n 'Sp. Atk']\nvar_a = df_pokemon[list_columns].describe()",
"_____no_output_____"
],
[
"#var_a.round(2)",
"_____no_output_____"
],
[
"std_attack = df_pokemon['Attack'].std()",
"_____no_output_____"
],
[
"print(std_attack)",
"32.45736586949843\n"
],
[
"#Checking for Missing Data\ndf_pokemon.isnull().sum() / len(df_pokemon) * 100",
"_____no_output_____"
]
],
[
[
"## The .loc() vs .iloc() method\n\n\nTo select rows and columns at the same time, we use the syntax `.loc[<rows>, <columns>]`:",
"_____no_output_____"
]
],
[
[
"#Notice the square brackets on loc and the colon\ndf_pokemon.loc[10:20, ['Name']]",
"_____no_output_____"
],
[
"#Taking a slice of index values\n",
"_____no_output_____"
],
[
"# Getting more than one columns\ndf_pokemon.loc[10:14, ['Name',\"Legendary\",'Attack']]",
"_____no_output_____"
],
[
"#we can also feed in a list for the rows\ndf_pokemon.loc[[10,14,24,58,238], ['Name',\"Legendary\",'Attack']]",
"_____no_output_____"
],
[
"df_pokemon.head(3)",
"_____no_output_____"
],
[
"#We can also slice over range of column values\ndf_pokemon.loc[[10,14,24,58,238], 'Name':'Attack']",
"_____no_output_____"
],
[
"#Iloc is use for integer based indexing\ndf_pokemon.iloc[0:3,1:4]",
"_____no_output_____"
]
],
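[
[
"# Optional sketch contrasting the two: .loc selects by label (end inclusive),\n# while .iloc selects by integer position (end exclusive).\ndf_pokemon.loc[0:2, ['Name', 'Attack']]   # rows labelled 0, 1 and 2\ndf_pokemon.iloc[0:2, [1, 6]]              # positions 0 and 1 only",
"_____no_output_____"
]
],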
[
[
"### Modifying a Column or Creating a new column",
"_____no_output_____"
],
[
"Give a little description",
"_____no_output_____"
]
],
[
[
"df_pokemon.head(3)",
"_____no_output_____"
],
[
"df2 = df_pokemon.copy() #hard copy",
"_____no_output_____"
],
[
"df2['Total_Attack'] = df2['Attack'] + df2['Sp. Atk']",
"_____no_output_____"
],
[
"df2.head(3)",
"_____no_output_____"
],
[
"df2['Total'] = df2['Total'] * 2\n",
"_____no_output_____"
],
[
"df2['filler'] = True",
"_____no_output_____"
],
[
"#Modify an orginal ",
"_____no_output_____"
],
[
"#Modify Data Frame with .loc() method\ndf2.loc[1, 'Name'] = 'Andrew'",
"_____no_output_____"
]
],
[
[
"### Sort_values() & value_counts()\n\n1. ***df.sort_values()***\n2. ***df.value_counts()***\n\n\nThe ***pandas.sort_values()*** allows us to reorder our dataframe in an ascending or descending order given a column for pandas to work from. This is similar to the excel sort function.\n\n```python\nimport pandas as pd\ndf = pd.read_csv('random.csv')\ndf\n\n\ndf.sort_values(by=['some_column'], ascending = True)\n```\nIn the above code snippet, we are sorting our *random.csv* pandas data frame by the column *some_column* in ascending order. To read more on the ***df.sort_values()*** function, read this [article](https://datatofish.com/sort-pandas-dataframe/).\n\nThe second function is ***df.value_counts()***, it allows us to count how many times a specific value/item occurred in the dataframe. This function is best used on a specific column on a data frame, ideally on a column representing categorical data. Categorical data refers to a statistical data type consisting of categorical variables. \n\n```python\ndf['column'].value_counts()\n```\n\nTo read more on some of the advanced functionalities of ***df.value_counts()***, please refer to the pandas documentation or this [article](https://towardsdatascience.com/getting-more-value-from-the-pandas-value-counts-aa17230907a6).",
"_____no_output_____"
]
],
[
[
"df2.head(10)",
"_____no_output_____"
],
[
"another_variable = df2.sort_values(by = 'Total_Attack', ascending = False).head(5)",
"_____no_output_____"
],
[
"df2.sort_values(by = ['Attack','Defense'], ascending = [True,False]).head(15).to_csv('modified.csv')",
"_____no_output_____"
],
[
"#Value_Counts",
"_____no_output_____"
],
[
"#pd.options.display.max_rows = 30\ndf2['Type 1'].value_counts(ascending = True)",
"_____no_output_____"
],
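[
"# Optional sketch: normalize=True returns proportions instead of raw counts,\n# one of the extra value_counts() options mentioned in the article linked above.\ndf2['Type 1'].value_counts(normalize=True)",
"_____no_output_____"
],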
[
"df2['Type 1'].value_counts().sort_index()",
"_____no_output_____"
],
[
"#Just Unique Values\ndf2['Type 1'].unique()",
"_____no_output_____"
],
[
"#How many unique Values\ndf2['Type 1'].nunique()",
"_____no_output_____"
]
],
[
[
"### How to Query or Filter Data with Conditions?\n\n- We can extract specific data from our dataframe based on a specific condition. We will be using the syntax below. Pandas will return a subset of the dataframe based on the given condition. \n\n```python\ndf[<insert_condition>]\n```\n\nConditions follow the generic boolean logic in Python. Below is a cheat sheet python boolean logic.\n\n**Conditional Logic:** \n\nConditional logic refers to the execution of different actions based on whether a certain condition is met. In programming, these conditions are expressed by a set of symbols called **Boolean Operators**. \n\n| Boolean Comparator | Example | Meaning |\n|--------------------|---------|---------------------------------|\n| > | x > y | x is greater than y |\n| >= | x >= y | x is greater than or equal to y |\n| < | x < y | x is less than y |\n| <= | x <= y | x is less than or equal to y |\n| != | x != y | x is not equal to y |\n| == | x == y | x is equal to y |\n\n\n",
"_____no_output_____"
]
],
[
[
"#Step 1: Create a filter\n#the_filter = df_pokemon['Total'] >= 300\nthe_filter = (df_pokemon['Total'] >= 300) & (df_pokemon['Legendary'] == True)",
"_____no_output_____"
],
[
"#Step 2: Apply Filter\n#df_pokemon[the_filter]\ndf_pokemon[(df_pokemon['Total'] >= 300) & (df_pokemon['Legendary'] == True)]",
"_____no_output_____"
],
[
"#Finding Only Legendary Pokemons\n",
"_____no_output_____"
]
],
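[
[
"# Optional sketch: the other comparators from the table above combine the same way;\n# != excludes a value and | is a logical OR between two conditions.\ndf_pokemon[(df_pokemon['Type 1'] != 'Water') & (df_pokemon['HP'] > 100)]\ndf_pokemon[(df_pokemon['Attack'] > 150) | (df_pokemon['Defense'] > 150)]",
"_____no_output_____"
]
],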
[
[
"### Grouping and Aggregation \n\nGrouping and aggregation can be used to calculate statistics on groups in the data.\n\n**Common Aggregation Functions**\n- mean()\n- median()\n- sum()\n- count()\n",
"_____no_output_____"
]
],
[
[
"df = df_pokemon.copy()",
"_____no_output_____"
],
[
"df.head(2)",
"_____no_output_____"
],
[
"df[['Type 1','Attack', 'Defense']].groupby('Type 1', as_index = False).mean()",
"_____no_output_____"
]
],
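[
[
"# Optional sketch: the other common aggregations listed above plug in the same way;\n# count() gives the group sizes and median() a per-group median.\ndf.groupby('Type 1')['Name'].count()\ndf.groupby('Type 1')['Attack'].median()",
"_____no_output_____"
]
],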
[
[
"- By default, `groupby()` assigns the variable that we're grouping on (in this case `Type 1`) to the index of the output data\n- If we use the keyword argument `as_index=False`, the grouping variable is instead assigned to a regular column\n - This can be useful in some situations, such as data visualization functions which expect the relevant variables to be in columns rather than the index",
"_____no_output_____"
]
],
[
[
"df[['Type 1','Attack', 'Defense','Legendary']].groupby(['Legendary','Type 1'], as_index = False).mean()",
"_____no_output_____"
],
[
"#pd.options.display.max_rows = 50",
"_____no_output_____"
]
],
[
[
"We can use the `agg` method to compute multiple aggregated statistics on our data, for example minimum and maximum country populations in each region:",
"_____no_output_____"
]
],
[
[
"df[['Attack','Type 1']].groupby('Type 1').agg(['min','max','mean'])",
"_____no_output_____"
]
],
[
[
"We can also use `agg` to compute different statistics for different columns:",
"_____no_output_____"
]
],
[
[
"agg_dict = {\n 'Attack': \"mean\",\n 'Defense': ['min','max']\n}",
"_____no_output_____"
],
[
"new_df = df[['Attack','Defense','Type 1']].groupby('Type 1').agg(agg_dict)",
"_____no_output_____"
],
[
"new_df.reset_index()",
"_____no_output_____"
]
],
[
[
"### Challenge 1 (10 minutes)\n\nLet's play around with Pandas on a more intricate dataset: a dataset on wines!\n\n**Challenge 14 from the 21 Day Data Challenge** \n\nDot's neighbour said that he only likes wine from Stellenbosch, Bordeaux, and the Okanagan Valley, and that the sulfates can't be that high. The problem is, Dot can't really afford to spend tons of money on the wine. Dot's conditions for searching for wine are: \n1. Sulfates cannot be higher than 0.6. \n2. The price has to be less than $20. \n\nUse the above conditions to filter the data for questions **2 and 3** below. \n\n**Questions:**\n1. Where is Stellenbosch, anyway? How many wines from Stellenbosch are there in the *entire dataset*? \n2. *After filtering with the 2 conditions*, what is the average price of wine from the Bordeaux region? \n3. *After filtering with the 2 conditions*, what is the least expensive wine that's of the highest quality from the Okanagan Valley?\n\n\n\n**Stretch Question:**\n1. What is the average price of wine from Stellenbosch, according to the entire unfiltered dataset? \n\n\n**Note: Check the dataset to see if there are missing values; if there are, fill in missing values with the mean.**\n",
"_____no_output_____"
]
],
[
[
"#Write your Code Below",
"_____no_output_____"
],
[
"import pandas as pd\ndf = pd.read_csv('winequality-red_2.csv')\ndf = df.drop(columns = ['Unnamed: 0'])\n\ndf.head()",
"_____no_output_____"
],
[
"\n\n#Solutions\n#Q1\ndf['region'].value_counts()\n\n",
"_____no_output_____"
],
[
"\n#Q2\nfilter_sulhpates = df['sulphates'] <= 0.6\nfiltered_df = df[filter_sulhpates]\n\n",
"_____no_output_____"
],
[
"\n\nfilter_quality = filtered_df['price'] < 20\nfiltered_df = filtered_df[filter_quality]\n\n",
"_____no_output_____"
],
[
"filtered_df.groupby(['region']).mean()\n#Answer is $11.300",
"_____no_output_____"
],
[
"\n\n#Q3\nfilter_region = df['region'] == 'Okanagan Valley'\nfiltered_df = filtered_df[filter_region]\nfiltered_df.sort_values(by=['quality', 'price'], ascending = [False,True])\n\n",
"<ipython-input-121-7543189bff1f>:3: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n filtered_df = filtered_df[filter_region]\n"
]
],
[
[
"### Challenge 2 (25 minutes)\n\n**Challenge 21 from the 21DDC (Adapted)**\n\nDot wants to play retro video games with all their new friends! Help them figure out which games would be best.\n\nQuestions: \n \n1. What is the top 5 best selling games released before the year 2000.\n\n - **Note**: Use Global_Sales\n \n \n2. Create a new column called Aggregate_Score, which returns the proportional average between Critic Score and User_Score based on Critic_Count and User_Count. Plot a horizontal bar chart of the top 5 highest rated games by Aggregate_Score, not published by Nintendo before the year 2000. From this bar chart, what is the highest rated game by Aggregate_Score?\n\n - **Note**: Critic_Count should be filled with the mean. User_Count should be filled with the median.\n \n \n#### In the exercise above, there is some missing values in the dataset. Look up the pandas documentation to figure out how to fill missing values in a column. You will be using the **fillna()** function. ",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('video_games.csv')\ndf.head(5)",
"_____no_output_____"
],
[
"#Solution Q1\nbest_selling_2000_filter = df[\"Year_of_Release\"] < 2000\nbest_selling_2000 = df[best_selling_2000_filter]\n\n",
"_____no_output_____"
],
[
"best_selling_2000.head(5)",
"_____no_output_____"
],
[
"#Solution Q2\n#Step 1: Fill in missing values with the median\n#Columns with missing values Critic_Count and User_Count\n\ndf['Critic_Count'].isnull().sum()",
"_____no_output_____"
],
[
"\n\n#Fill in with the mean\ndf['Critic_Count'] = df['Critic_Count'].fillna(value = df.Critic_Count.mean())\n\n",
"_____no_output_____"
],
[
"\n\n#Fill in with the median\ndf['User_Count'] = df['User_Count'].fillna(value = df.User_Count.median())\n\n",
"_____no_output_____"
],
[
"#Up the User_score\n#Because the user_score is not on the same scale as the critic score\ndf['User_Score'] = df['User_Score'] * 10",
"_____no_output_____"
],
[
"#Create aggregate Score\ndf['Aggregate_Score'] = ((df['Critic_Score'] * df['Critic_Count']) + (df['User_Score'] * df['User_Count']))/(df['Critic_Count'] + df['User_Count'])\n\n",
"_____no_output_____"
],
[
"df[\"Aggregate_Score\"].describe()\n",
"_____no_output_____"
],
[
"nintendo_filter_year = df[\"Year_of_Release\"] < 2000\nnintendo_filter_publisher = df[\"Publisher\"] != 'Nintendo'",
"_____no_output_____"
],
[
"nintendo = df[nintendo_filter_year]\nnintendo = nintendo[nintendo_filter_publisher]",
"<ipython-input-136-05a8e9883b33>:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.\n nintendo = nintendo[nintendo_filter_publisher]\n"
],
[
"nintendo.sort_values('Aggregate_Score', ascending = False).head(5)",
"_____no_output_____"
]
],
[
[
"# HINT\n\n**How to create the Aggregate Score Column?**\n\n\\begin{equation*}\nAggregateScore = \\frac{(CriticCount * CriticScore)+(UserCount * UserScore)}{UserCount + CriticCount}\n\\end{equation*}\n\n**Check Your Column Values**\n\nThe Critic_Score column is scored out of 100. The User_Score column is scored out of 10. You will need to modify one of the columns to match the other.",
"_____no_output_____"
],
[
"## Documentation\n\nIn the meantime, check out pandas the user guide in the [pandas documentation](https://pandas.pydata.org/docs/user_guide/index.html#user-guide).\n\n-------\n**Why should I use the documentation?**\n\nOn the job as a data scientist or data analyst, more often than not, you may find yourself looking up the documentation of a particular function or plugin you use. Don't worry if there are a few functions you don't know by heart. However, there are just too many to know! An essential skill is to learn how to navigate documentation and understand how to apply the examples to your work. \n\n--------",
"_____no_output_____"
],
[
"Additional resources:\n\n- To learn more about these topics, as well as other topics not covered here (e.g. reshaping, merging, additional subsetting methods, working with text data, etc.) check out [these introductory tutorials](https://pandas.pydata.org/docs/getting_started/index.html#getting-started) from the `pandas` documentation\n- To learn more about subsetting your data, check out [this tutorial](https://pandas.pydata.org/docs/getting_started/intro_tutorials/03_subset_data.html#min-tut-03-subset)\n- This [pandas cheatsheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf) may also be helpful as a reference.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0bd5c8d4f0b5547d94fe45100dc06cdb42e63aa | 13,548 | ipynb | Jupyter Notebook | samples/demo.ipynb | MirandaLv/Mask_RCNN | afc8b3fd321ba3be710fe20990a3137d606a5449 | [
"MIT"
] | null | null | null | samples/demo.ipynb | MirandaLv/Mask_RCNN | afc8b3fd321ba3be710fe20990a3137d606a5449 | [
"MIT"
] | null | null | null | samples/demo.ipynb | MirandaLv/Mask_RCNN | afc8b3fd321ba3be710fe20990a3137d606a5449 | [
"MIT"
] | null | null | null | 43.84466 | 428 | 0.569457 | [
[
[
"# Mask R-CNN Demo\n\nA quick intro to using the pre-trained model to detect and segment objects.",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport random\nimport math\nimport numpy as np\nimport skimage.io\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# Root directory of the project\nROOT_DIR = os.path.abspath(\"../\")\n\n# Import Mask RCNN\nsys.path.append(ROOT_DIR) # To find local version of the library\nfrom mrcnn import utils\nimport mrcnn.model as modellib\nfrom mrcnn import visualize\n# Import COCO config\nsys.path.append(os.path.join(ROOT_DIR, \"samples/coco/\")) # To find local version\nimport coco\n\n%matplotlib inline \n\n# Directory to save logs and trained model\nMODEL_DIR = os.path.join(ROOT_DIR, \"logs\")\n\n# Local path to trained weights file\nCOCO_MODEL_PATH = os.path.join(ROOT_DIR, \"mask_rcnn_coco.h5\")\n# Download COCO trained weights from Releases if needed\nif not os.path.exists(COCO_MODEL_PATH):\n utils.download_trained_weights(COCO_MODEL_PATH)\n\n# Directory of images to run detection on\nIMAGE_DIR = os.path.join(ROOT_DIR, \"images\")",
"C:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\nC:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\nC:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\nC:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\nC:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\nC:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nUsing TensorFlow backend.\n"
]
],
[
[
"## Configurations\n\nWe'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.\n\nFor inferencing, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.",
"_____no_output_____"
]
],
[
[
"class InferenceConfig(coco.CocoConfig):\n # Set batch size to 1 since we'll be running inference on\n # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU\n GPU_COUNT = 1\n IMAGES_PER_GPU = 1\n\nconfig = InferenceConfig()\nconfig.display()",
"\nConfigurations:\nBACKBONE resnet101\nBACKBONE_STRIDES [4, 8, 16, 32, 64]\nBATCH_SIZE 1\nBBOX_STD_DEV [0.1 0.1 0.2 0.2]\nCOMPUTE_BACKBONE_SHAPE None\nDETECTION_MAX_INSTANCES 100\nDETECTION_MIN_CONFIDENCE 0.7\nDETECTION_NMS_THRESHOLD 0.3\nFPN_CLASSIF_FC_LAYERS_SIZE 1024\nGPU_COUNT 1\nGRADIENT_CLIP_NORM 5.0\nIMAGES_PER_GPU 1\nIMAGE_CHANNEL_COUNT 3\nIMAGE_MAX_DIM 1024\nIMAGE_META_SIZE 93\nIMAGE_MIN_DIM 800\nIMAGE_MIN_SCALE 0\nIMAGE_RESIZE_MODE square\nIMAGE_SHAPE [1024 1024 3]\nLEARNING_MOMENTUM 0.9\nLEARNING_RATE 0.001\nLOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}\nMASK_POOL_SIZE 14\nMASK_SHAPE [28, 28]\nMAX_GT_INSTANCES 100\nMEAN_PIXEL [123.7 116.8 103.9]\nMINI_MASK_SHAPE (56, 56)\nNAME coco\nNUM_CLASSES 81\nPOOL_SIZE 7\nPOST_NMS_ROIS_INFERENCE 1000\nPOST_NMS_ROIS_TRAINING 2000\nPRE_NMS_LIMIT 6000\nROI_POSITIVE_RATIO 0.33\nRPN_ANCHOR_RATIOS [0.5, 1, 2]\nRPN_ANCHOR_SCALES (32, 64, 128, 256, 512)\nRPN_ANCHOR_STRIDE 1\nRPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]\nRPN_NMS_THRESHOLD 0.7\nRPN_TRAIN_ANCHORS_PER_IMAGE 256\nSTEPS_PER_EPOCH 1000\nTOP_DOWN_PYRAMID_SIZE 256\nTRAIN_BN False\nTRAIN_ROIS_PER_IMAGE 200\nUSE_MINI_MASK True\nUSE_RPN_ROIS True\nVALIDATION_STEPS 50\nWEIGHT_DECAY 0.0001\n\n\n"
]
],
[
[
"## Create Model and Load Trained Weights",
"_____no_output_____"
]
],
[
[
"# Create model object in inference mode.\nmodel = modellib.MaskRCNN(mode=\"inference\", model_dir=MODEL_DIR, config=config)\n\n# Load weights trained on MS-COCO\nmodel.load_weights(COCO_MODEL_PATH, by_name=True)",
"WARNING:tensorflow:From C:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From C:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:1208: calling reduce_max_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\nWARNING:tensorflow:From C:\\Users\\zlv\\Anaconda3\\envs\\test\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:1242: calling reduce_sum_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.\nInstructions for updating:\nkeep_dims is deprecated, use keepdims instead\nWARNING:tensorflow:From C:\\Users\\zlv\\Documents\\GitHub\\Mask_RCNN\\mrcnn\\model.py:772: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\n"
]
],
[
[
"## Class Names\n\nThe model classifies objects and returns class IDs, which are integer value that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.\n\nTo improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns it's own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.\n\nTo get the list of class names, you'd load the dataset and then use the ```class_names``` property like this.\n```\n# Load COCO dataset\ndataset = coco.CocoDataset()\ndataset.load_coco(COCO_DIR, \"train\")\ndataset.prepare()\n\n# Print class names\nprint(dataset.class_names)\n```\n\nWe don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represent its ID (first class is 0, second is 1, third is 2, ...etc.)",
"_____no_output_____"
]
],
[
[
"# COCO Class names\n# Index of the class in the list is its ID. For example, to get ID of\n# the teddy bear class, use: class_names.index('teddy bear')\nclass_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',\n 'bus', 'train', 'truck', 'boat', 'traffic light',\n 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',\n 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',\n 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',\n 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',\n 'kite', 'baseball bat', 'baseball glove', 'skateboard',\n 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',\n 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',\n 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',\n 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',\n 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',\n 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',\n 'teddy bear', 'hair drier', 'toothbrush']",
"_____no_output_____"
]
],
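Since the position of a name in `class_names` doubles as its class ID, translating between IDs and labels is just a list lookup. A minimal check, using only the list defined above (the same lookup can later be applied to the `class_ids` returned by the detection step):

```python
# The list index is the class ID, so lookups work in both directions.
class_id = class_names.index('teddy bear')   # name -> ID (78, as noted above)
print(class_id, class_names[class_id])       # ID -> name
```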
[
[
"## Run Object Detection",
"_____no_output_____"
]
],
[
[
"# Load a random image from the images folder\nfile_names = next(os.walk(IMAGE_DIR))[2]\nimage = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))\n\n# Run detection\nresults = model.detect([image], verbose=1)\n\n# Visualize results\nr = results[0]\nvisualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], \n class_names, r['scores'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bd699039b6f9826dd2e9a66284950b8073fe68 | 2,312 | ipynb | Jupyter Notebook | src/bfs.ipynb | songqsh/MA2210 | 01cc61f78acb6341d05534d230c998cdfa4494bc | [
"MIT"
] | null | null | null | src/bfs.ipynb | songqsh/MA2210 | 01cc61f78acb6341d05534d230c998cdfa4494bc | [
"MIT"
] | null | null | null | src/bfs.ipynb | songqsh/MA2210 | 01cc61f78acb6341d05534d230c998cdfa4494bc | [
"MIT"
] | null | null | null | 22.230769 | 218 | 0.434689 | [
[
[
"<a href=\"https://colab.research.google.com/github/songqsh/MA2210/blob/main/src/bfs.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# BFS\n\n",
"_____no_output_____"
]
],
[
[
"# import packages\nimport numpy as np\nimport numpy.linalg as la\nnp.set_printoptions(suppress=True)\nimport itertools",
"_____no_output_____"
],
[
"A = np.array([[1.,1,1,0], [2.,1,0,1]])\nb = np.array([40,60])",
"_____no_output_____"
],
[
"bv = [0,1]\nbs = la.solve(A[:,bv], b)\nprint(f'bv is {bv} and bs is {bs}')",
"bv is [0, 1] and bs is [20. 20.]\n"
],
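If BFS here stands for basic feasible solutions of the system `A x = b` above, the otherwise unused `itertools` import suggests enumerating every choice of basic variables rather than fixing `bv = [0, 1]` by hand. A possible sketch under that assumption (the loop and variable names are mine, not part of the original notebook):

```python
# Try every pair of columns of A as a basis, solve B x_B = b,
# and flag the basic solutions that are feasible (nonnegative).
for bv in itertools.combinations(range(A.shape[1]), A.shape[0]):
    cols = list(bv)
    B = A[:, cols]
    if abs(la.det(B)) < 1e-12:   # singular basis: no unique basic solution
        continue
    bs = la.solve(B, b)
    print(f'bv is {cols}, bs is {bs}, feasible: {bool(np.all(bs >= -1e-12))}')
```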
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0bd733813708606831952dd1ed6e7d2e2acc0e9 | 96,698 | ipynb | Jupyter Notebook | Project/SageMaker Project.ipynb | kurdi1mans/sagemaker-deployment-project | ab6ab773f2cd3fa1abe864d07bdafa12b16f4ba5 | [
"MIT"
] | null | null | null | Project/SageMaker Project.ipynb | kurdi1mans/sagemaker-deployment-project | ab6ab773f2cd3fa1abe864d07bdafa12b16f4ba5 | [
"MIT"
] | null | null | null | Project/SageMaker Project.ipynb | kurdi1mans/sagemaker-deployment-project | ab6ab773f2cd3fa1abe864d07bdafa12b16f4ba5 | [
"MIT"
] | null | null | null | 47.100828 | 1,155 | 0.60601 | [
[
[
"# Creating a Sentiment Analysis Web App\n## Using PyTorch and SageMaker\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nNow that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.",
"_____no_output_____"
],
[
"## Instructions\n\nSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!\n\nIn addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.\n\n> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.",
"_____no_output_____"
],
[
"## General Outline\n\nRecall the general outline for SageMaker projects using a notebook instance.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nFor this project, you will be following the steps in the general outline with some modifications. \n\nFirst, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.\n\nIn addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.",
"_____no_output_____"
],
[
"## Step 1: Downloading the data\n\nAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)\n\n> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.",
"_____no_output_____"
]
],
[
[
"%mkdir ../data\n!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\n!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data",
"mkdir: cannot create directory ‘../data’: File exists\n--2019-10-30 08:54:14-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\nResolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10\nConnecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 84125825 (80M) [application/x-gzip]\nSaving to: ‘../data/aclImdb_v1.tar.gz’\n\n../data/aclImdb_v1. 100%[===================>] 80.23M 22.9MB/s in 4.9s \n\n2019-10-30 08:54:19 (16.3 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]\n\n"
]
],
[
[
"## Step 2: Preparing and Processing the data\n\nAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.",
"_____no_output_____"
]
],
[
[
"import os\nimport glob",
"_____no_output_____"
],
[
"def read_imdb_data(data_dir='../data/aclImdb'):\n data = {}\n labels = {}\n \n for data_type in ['train', 'test']:\n data[data_type] = {}\n labels[data_type] = {}\n \n for sentiment in ['pos', 'neg']:\n data[data_type][sentiment] = []\n labels[data_type][sentiment] = []\n \n path = os.path.join(data_dir, data_type, sentiment, '*.txt')\n files = glob.glob(path)\n \n for f in files:\n with open(f) as review:\n data[data_type][sentiment].append(review.read())\n # Here we represent a positive review by '1' and a negative review by '0'\n labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)\n \n assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \\\n \"{}/{} data size does not match labels size\".format(data_type, sentiment)\n \n return data, labels",
"_____no_output_____"
],
[
"data, labels = read_imdb_data()\n",
"_____no_output_____"
],
[
"print(\"IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg\".format(\n len(data['train']['pos']), len(data['train']['neg']),\n len(data['test']['pos']), len(data['test']['neg'])))",
"IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg\n"
]
],
[
[
"Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.",
"_____no_output_____"
]
],
[
[
"from sklearn.utils import shuffle",
"_____no_output_____"
],
[
"def prepare_imdb_data(data, labels):\n \"\"\"Prepare training and test sets from IMDb movie reviews.\"\"\"\n \n #Combine positive and negative reviews and labels\n data_train = data['train']['pos'] + data['train']['neg']\n data_test = data['test']['pos'] + data['test']['neg']\n labels_train = labels['train']['pos'] + labels['train']['neg']\n labels_test = labels['test']['pos'] + labels['test']['neg']\n \n #Shuffle reviews and corresponding labels within training and test sets\n data_train, labels_train = shuffle(data_train, labels_train)\n data_test, labels_test = shuffle(data_test, labels_test)\n \n # Return a unified training data, test data, training labels, test labets\n return data_train, data_test, labels_train, labels_test",
"_____no_output_____"
],
[
"train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)",
"_____no_output_____"
],
[
"print(\"IMDb reviews (combined): train = {}, test = {}\".format(len(train_X), len(test_X)))",
"IMDb reviews (combined): train = 25000, test = 25000\n"
]
],
[
[
"Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.",
"_____no_output_____"
]
],
[
[
"print(train_X[100])\nprint(train_y[100])",
"I am very disappointed with \"K-911.\" The original \"good\" quality of \"K-9\" doesn't exist any more. This is more like a sitcom! Some of casts from original movie returned and got some of my memory back. The captain of Dooley now loves to hit him like a scene from old comedy show. That was crazy. What's the deal with the change of Police? It seems like they are now LAPD! Not San Diego PD. It is a completely different movie from \"\n0\n"
]
],
[
[
"The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.",
"_____no_output_____"
]
],
[
[
"import nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import *\n\nimport re\nfrom bs4 import BeautifulSoup",
"_____no_output_____"
],
[
"def review_to_words(review):\n nltk.download(\"stopwords\", quiet=True)\n stemmer = PorterStemmer()\n \n text = BeautifulSoup(review, \"html.parser\").get_text() # Remove HTML tags\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower()) # Convert to lower case\n words = text.split() # Split string into words\n words = [w for w in words if w not in stopwords.words(\"english\")] # Remove stopwords\n words = [PorterStemmer().stem(w) for w in words] # stem\n \n return words",
"_____no_output_____"
]
],
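As a quick illustration of the stemming behaviour described above, the Porter stemmer collapses the inflected forms mentioned earlier onto a single token:

```python
# All three variants should reduce to the same stem, 'entertain'.
stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ['entertain', 'entertained', 'entertaining']])
```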
[
[
"The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.",
"_____no_output_____"
]
],
[
[
"# TODO: Apply review_to_words to a review (train_X[100] or any other review)\nreview_to_words(train_X[100])",
"_____no_output_____"
]
],
[
[
"**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?",
"_____no_output_____"
],
[
"**Answer:** `review_to_words` does not only stem the words. It also removes stop words such as the articles `the`,`a`,`an` along with prepositions such as `and`,`to`,`from` and more. Moreover, it removes punctuations. The only issue with Stemming is that some words are not really accurate after the stemming (e.g. `pr` or `j`). However, if this is to happen systematically, it shouldn't be a problem.",
"_____no_output_____"
],
[
"The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.",
"_____no_output_____"
]
],
[
[
"import pickle",
"_____no_output_____"
],
[
"cache_dir = os.path.join(\"../cache\", \"sentiment_analysis\") # where to store cache files\nos.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists",
"_____no_output_____"
],
[
"def preprocess_data(data_train, data_test, labels_train, labels_test,\n cache_dir=cache_dir, cache_file=\"preprocessed_data.pkl\"):\n \"\"\"Convert each review to words; read from cache if available.\"\"\"\n\n # If cache_file is not None, try to read from it first\n cache_data = None\n if cache_file is not None:\n try:\n with open(os.path.join(cache_dir, cache_file), \"rb\") as f:\n cache_data = pickle.load(f)\n print(\"Read preprocessed data from cache file:\", cache_file)\n except:\n pass # unable to read from cache, but that's okay\n \n # If cache is missing, then do the heavy lifting\n if cache_data is None:\n # Preprocess training and test data to obtain words for each review\n #words_train = list(map(review_to_words, data_train))\n #words_test = list(map(review_to_words, data_test))\n words_train = [review_to_words(review) for review in data_train]\n words_test = [review_to_words(review) for review in data_test]\n \n # Write to cache file for future runs\n if cache_file is not None:\n cache_data = dict(words_train=words_train, words_test=words_test,\n labels_train=labels_train, labels_test=labels_test)\n with open(os.path.join(cache_dir, cache_file), \"wb\") as f:\n pickle.dump(cache_data, f)\n print(\"Wrote preprocessed data to cache file:\", cache_file)\n else:\n # Unpack data loaded from cache file\n words_train, words_test, labels_train, labels_test = (cache_data['words_train'],\n cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])\n \n return words_train, words_test, labels_train, labels_test",
"_____no_output_____"
],
[
"# Preprocess data\ntrain_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)",
"Read preprocessed data from cache file: preprocessed_data.pkl\n"
]
],
[
[
"## Transform the data\n\nIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.\n\nSince we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.",
"_____no_output_____"
],
[
"### (TODO) Create a word dictionary\n\nTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.\n\n> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom collections import Counter,OrderedDict\nimport operator",
"_____no_output_____"
],
[
"def build_dict(data, vocab_size = 5000):\n \"\"\"Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer.\"\"\"\n \n # TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a\n # sentence is a list of words.\n \n # A dict storing the words that appear in the reviews along with how often they occur\n temp_data = np.concatenate(data,axis=0)\n word_counter = Counter(temp_data)\n word_count = dict(word_counter)\n \n # TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and\n # sorted_words[-1] is the least frequently appearing word.\n \n sorted_counts =sorted(word_count.items(), key=lambda x: x[1], reverse=True)\n sorted_words = [word for word,_ in sorted_counts]\n \n word_dict = {} # This is what we are building, a dictionary that translates words into integers\n for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'\n word_dict[word] = idx + 2 # 'infrequent' labels\n \n return word_dict",
"_____no_output_____"
],
[
"word_dict = build_dict(train_X)",
"_____no_output_____"
]
],
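A couple of quick checks confirm the dictionary has the intended shape: 4998 entries (the vocabulary size minus the two reserved labels) with integer IDs starting at 2:

```python
# IDs 0 and 1 are reserved for 'no word' and 'infrequent word', so the
# mapped IDs should run from 2 to 4999.
assert len(word_dict) == 4998
assert min(word_dict.values()) == 2 and max(word_dict.values()) == 4999
print(len(word_dict), min(word_dict.values()), max(word_dict.values()))
```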
[
[
"**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?",
"_____no_output_____"
],
[
"**Answer:** [movi,film,one,like,time] are the most frequently apprearing words in the training set. This makes sense as the reviews are about movies and we should be expecting people to express whether they like them or not.",
"_____no_output_____"
]
],
[
[
"# TODO: Use this space to determine the five most frequently appearing words in the training set.\ntemp_data = np.concatenate(train_X,axis=0)\nword_counter = Counter(temp_data)\nword_count = dict(word_counter)\nsorted(word_count.items(), key=lambda x: x[1], reverse=True)[0:5]",
"_____no_output_____"
]
],
[
[
"### Save `word_dict`\n\nLater on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.",
"_____no_output_____"
]
],
[
[
"data_dir = '../data/pytorch' # The folder we will use for storing data\nif not os.path.exists(data_dir): # Make sure that the folder exists\n os.makedirs(data_dir)",
"_____no_output_____"
],
[
"with open(os.path.join(data_dir, 'word_dict.pkl'), \"wb\") as f:\n pickle.dump(word_dict, f)",
"_____no_output_____"
]
],
[
[
"### Transform the reviews\n\nNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.",
"_____no_output_____"
]
],
[
[
"def convert_and_pad(word_dict, sentence, pad=500):\n NOWORD = 0 # We will use 0 to represent the 'no word' category\n INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict\n \n working_sentence = [NOWORD] * pad\n \n for word_index, word in enumerate(sentence[:pad]):\n if word in word_dict:\n working_sentence[word_index] = word_dict[word]\n else:\n working_sentence[word_index] = INFREQ\n \n return working_sentence, min(len(sentence), pad)",
"_____no_output_____"
],
[
"def convert_and_pad_data(word_dict, data, pad=500):\n result = []\n lengths = []\n \n for sentence in data:\n converted, leng = convert_and_pad(word_dict, sentence, pad)\n result.append(converted)\n lengths.append(leng)\n \n return np.array(result), np.array(lengths)",
"_____no_output_____"
],
[
"train_X, train_X_len = convert_and_pad_data(word_dict, train_X)\ntest_X, test_X_len = convert_and_pad_data(word_dict, test_X)",
"_____no_output_____"
]
],
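To make the padding and truncation behaviour concrete, here is a toy run of `convert_and_pad` with a tiny made-up dictionary and a pad length of 10 (the real pipeline above uses `word_dict` and a pad of 500):

```python
# Known words map to their IDs, unknown words map to 1, and the tail is
# padded with 0; the second return value is the original (capped) length.
toy_dict = {'movi': 2, 'great': 3}
toy_review = ['movi', 'great', 'unseen', 'word']
print(convert_and_pad(toy_dict, toy_review, pad=10))
# -> ([2, 3, 1, 1, 0, 0, 0, 0, 0, 0], 4)
```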
[
[
"As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?",
"_____no_output_____"
]
],
[
[
"# Use this cell to examine one of the processed reviews to make sure everything is working as intended.\ntrain_X[0]",
"_____no_output_____"
]
],
[
[
"**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?",
"_____no_output_____"
],
[
"**Answer:** Since the word dictionary of the most frequently used words is built based on the training data, it's considered a form of leakage to use that same dictionary to transform and encode the testing set. However, the whole approach is to build the dictionary and freeze it such that we have consistent numerical representation across all datasets. Therefore, I would not change anything here as the results will not be consistent using different dictionaries across different datasets.",
"_____no_output_____"
],
[
"## Step 3: Upload the data to S3\n\nAs in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.\n\n### Save the processed training dataset locally\n\nIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \\\n .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)",
"_____no_output_____"
]
],
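Reading a few rows back is a cheap way to verify the layout just described: each row should have 502 columns (label, length, then the 500 encoded words).

```python
# Header-less CSV: column 0 is the label, column 1 the review length,
# and columns 2-501 the padded, integer-encoded review.
check = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, nrows=5)
print(check.shape)         # expected: (5, 502)
print(check.iloc[0, :5])   # label, length, first three word IDs
```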
[
[
"### Uploading the training data\n\n\nNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.",
"_____no_output_____"
]
],
[
[
"import sagemaker",
"_____no_output_____"
],
[
"sagemaker_session = sagemaker.Session()\n\nbucket = sagemaker_session.default_bucket()\nprefix = 'sagemaker/sentiment_rnn'\n\nrole = sagemaker.get_execution_role()",
"_____no_output_____"
],
[
"input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)",
"_____no_output_____"
]
],
[
[
"**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.",
"_____no_output_____"
],
[
"## Step 4: Build and Train the PyTorch Model\n\nIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects\n\n - Model Artifacts,\n - Training Code, and\n - Inference Code,\n \neach of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.\n\nWe will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.",
"_____no_output_____"
]
],
[
[
"!pygmentize train/model.py",
"\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch.nn\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnn\u001b[39;49;00m\r\n\r\n\u001b[34mclass\u001b[39;49;00m \u001b[04m\u001b[32mLSTMClassifier\u001b[39;49;00m(nn.Module):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m This is the simple RNN model we will be using to perform Sentiment Analysis.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n\r\n \u001b[34mdef\u001b[39;49;00m \u001b[32m__init__\u001b[39;49;00m(\u001b[36mself\u001b[39;49;00m, embedding_dim, hidden_dim, vocab_size):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m Initialize the model by settingg up the various layers.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n \u001b[36msuper\u001b[39;49;00m(LSTMClassifier, \u001b[36mself\u001b[39;49;00m).\u001b[32m__init__\u001b[39;49;00m()\r\n\r\n \u001b[36mself\u001b[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=\u001b[34m0\u001b[39;49;00m)\r\n \u001b[36mself\u001b[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)\r\n \u001b[36mself\u001b[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=\u001b[34m1\u001b[39;49;00m)\r\n \u001b[36mself\u001b[39;49;00m.sig = nn.Sigmoid()\r\n \r\n \u001b[36mself\u001b[39;49;00m.word_dict = \u001b[36mNone\u001b[39;49;00m\r\n\r\n \u001b[34mdef\u001b[39;49;00m \u001b[32mforward\u001b[39;49;00m(\u001b[36mself\u001b[39;49;00m, x):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m Perform a forward pass of our model on some input.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n x = x.t()\r\n lengths = x[\u001b[34m0\u001b[39;49;00m,:]\r\n reviews = x[\u001b[34m1\u001b[39;49;00m:,:]\r\n embeds = \u001b[36mself\u001b[39;49;00m.embedding(reviews)\r\n lstm_out, _ = \u001b[36mself\u001b[39;49;00m.lstm(embeds)\r\n out = \u001b[36mself\u001b[39;49;00m.dense(lstm_out)\r\n out = out[lengths - \u001b[34m1\u001b[39;49;00m, \u001b[36mrange\u001b[39;49;00m(\u001b[36mlen\u001b[39;49;00m(lengths))]\r\n \u001b[34mreturn\u001b[39;49;00m \u001b[36mself\u001b[39;49;00m.sig(out.squeeze())\r\n"
]
],
[
[
"The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.\n\nFirst we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.utils.data",
"_____no_output_____"
],
[
"# Read in only the first 250 rows\ntrain_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)\n\n# Turn the input pandas dataframe into tensors\ntrain_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()\ntrain_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()\n\n# Build the dataset\ntrain_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)\n# Build the dataloader\ntrain_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)",
"_____no_output_____"
]
],
[
[
"### (TODO) Writing the training method\n\nNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.",
"_____no_output_____"
]
],
[
[
"def train(model, train_loader, epochs, optimizer, loss_fn, device):\n for epoch in range(1, epochs + 1):\n model.train()\n total_loss = 0\n for batch in train_loader: \n batch_X, batch_y = batch\n \n batch_X = batch_X.to(device)\n batch_y = batch_y.to(device)\n \n # TODO: Complete this train method to train the model provided.\n optimizer.zero_grad()\n output = model.forward(batch_X)\n loss = loss_fn(output, batch_y)\n loss.backward()\n optimizer.step()\n \n total_loss += loss.data.item()\n print(\"Epoch: {}, BCELoss: {}\".format(epoch, total_loss / len(train_loader)))",
"_____no_output_____"
]
],
[
[
"Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\nfrom train.model import LSTMClassifier",
"_____no_output_____"
],
[
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = LSTMClassifier(32, 100, 5000).to(device)\noptimizer = optim.Adam(model.parameters())\nloss_fn = torch.nn.BCELoss()\n\ntrain(model, train_sample_dl, 5, optimizer, loss_fn, device)",
"Epoch: 1, BCELoss: 0.6868077039718627\nEpoch: 2, BCELoss: 0.6752514839172363\nEpoch: 3, BCELoss: 0.664816963672638\nEpoch: 4, BCELoss: 0.6535287261009216\nEpoch: 5, BCELoss: 0.6402966618537903\n"
]
],
[
[
"In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.",
"_____no_output_____"
],
[
"### (TODO) Training the model\n\nWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.\n\n**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.\n\nThe way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.",
"_____no_output_____"
]
],
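The hyperparameters set on the estimator below reach the container as command-line arguments (the training log further down shows `--epochs 10 --hidden_dim 200`), while data and model locations arrive as environment variables such as `SM_CHANNEL_TRAINING` and `SM_MODEL_DIR`. The provided `train/train.py` parses them roughly along these lines; this is a sketch of the pattern, not the file verbatim:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Hyperparameters passed to the estimator become CLI flags of the same name.
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--hidden_dim', type=int, default=100)
# SageMaker supplies the data and model locations via environment variables.
parser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', 'model'))
parser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', 'data'))
# parse_known_args tolerates extra arguments, so this snippet also runs in a notebook.
args, _ = parser.parse_known_args()
print(args.epochs, args.hidden_dim, args.data_dir)
```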
[
[
"from sagemaker.pytorch import PyTorch",
"_____no_output_____"
],
[
"estimator = PyTorch(entry_point=\"train.py\",\n source_dir=\"train\",\n role=role,\n framework_version='0.4.0',\n train_instance_count=1,\n train_instance_type='ml.p2.xlarge',\n hyperparameters={\n 'epochs': 10,\n 'hidden_dim': 200,\n })",
"_____no_output_____"
],
[
"estimator.fit({'training': input_data})",
"2019-10-30 08:56:36 Starting - Starting the training job...\n2019-10-30 08:56:39 Starting - Launching requested ML instances......\n2019-10-30 08:57:45 Starting - Preparing the instances for training......\n2019-10-30 08:59:05 Downloading - Downloading input data......\n2019-10-30 08:59:41 Training - Downloading the training image.\u001b[31mbash: cannot set terminal process group (-1): Inappropriate ioctl for device\u001b[0m\n\u001b[31mbash: no job control in this shell\u001b[0m\n\u001b[31m2019-10-30 09:00:09,506 sagemaker-containers INFO Imported framework sagemaker_pytorch_container.training\u001b[0m\n\u001b[31m2019-10-30 09:00:09,531 sagemaker_pytorch_container.training INFO Block until all host DNS lookups succeed.\u001b[0m\n\u001b[31m2019-10-30 09:00:10,954 sagemaker_pytorch_container.training INFO Invoking user training script.\u001b[0m\n\u001b[31m2019-10-30 09:00:11,235 sagemaker-containers INFO Module train does not provide a setup.py. \u001b[0m\n\u001b[31mGenerating setup.py\u001b[0m\n\u001b[31m2019-10-30 09:00:11,236 sagemaker-containers INFO Generating setup.cfg\u001b[0m\n\u001b[31m2019-10-30 09:00:11,236 sagemaker-containers INFO Generating MANIFEST.in\u001b[0m\n\u001b[31m2019-10-30 09:00:11,236 sagemaker-containers INFO Installing module with the following command:\u001b[0m\n\u001b[31m/usr/bin/python -m pip install -U . -r requirements.txt\u001b[0m\n\u001b[31mProcessing /opt/ml/code\u001b[0m\n\u001b[31mCollecting pandas (from -r requirements.txt (line 1))\n Downloading https://files.pythonhosted.org/packages/74/24/0cdbf8907e1e3bc5a8da03345c23cbed7044330bb8f73bb12e711a640a00/pandas-0.24.2-cp35-cp35m-manylinux1_x86_64.whl (10.0MB)\u001b[0m\n\u001b[31mCollecting numpy (from -r requirements.txt (line 2))\n Downloading https://files.pythonhosted.org/packages/5e/f8/82a8a6ed446b58aa718b2744b265983783a2c84098a73db6d0b78a573e25/numpy-1.17.3-cp35-cp35m-manylinux1_x86_64.whl (19.8MB)\u001b[0m\n\u001b[31mCollecting nltk (from -r requirements.txt (line 3))\n Downloading https://files.pythonhosted.org/packages/f6/1d/d925cfb4f324ede997f6d47bea4d9babba51b49e87a767c170b77005889d/nltk-3.4.5.zip (1.5MB)\u001b[0m\n\u001b[31mCollecting beautifulsoup4 (from -r requirements.txt (line 4))\n Downloading https://files.pythonhosted.org/packages/3b/c8/a55eb6ea11cd7e5ac4bacdf92bac4693b90d3ba79268be16527555e186f0/beautifulsoup4-4.8.1-py3-none-any.whl (101kB)\u001b[0m\n\u001b[31mCollecting html5lib (from -r requirements.txt (line 5))\n Downloading https://files.pythonhosted.org/packages/a5/62/bbd2be0e7943ec8504b517e62bab011b4946e1258842bc159e5dfde15b96/html5lib-1.0.1-py2.py3-none-any.whl (117kB)\u001b[0m\n\u001b[31mCollecting pytz>=2011k (from pandas->-r requirements.txt (line 1))\n Downloading https://files.pythonhosted.org/packages/e7/f9/f0b53f88060247251bf481fa6ea62cd0d25bf1b11a87888e53ce5b7c8ad2/pytz-2019.3-py2.py3-none-any.whl (509kB)\u001b[0m\n\u001b[31mRequirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /usr/local/lib/python3.5/dist-packages (from pandas->-r requirements.txt (line 1)) (2.7.5)\u001b[0m\n\u001b[31mRequirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.5/dist-packages (from nltk->-r requirements.txt (line 3)) (1.11.0)\u001b[0m\n\u001b[31mCollecting soupsieve>=1.2 (from beautifulsoup4->-r requirements.txt (line 4))\n Downloading https://files.pythonhosted.org/packages/5d/42/d821581cf568e9b7dfc5b415aa61952b0f5e3dede4f3cbd650e3a1082992/soupsieve-1.9.4-py2.py3-none-any.whl\u001b[0m\n\u001b[31mCollecting webencodings (from html5lib->-r 
requirements.txt (line 5))\n Downloading https://files.pythonhosted.org/packages/f4/24/2a3e3df732393fed8b3ebf2ec078f05546de641fe1b667ee316ec1dcf3b7/webencodings-0.5.1-py2.py3-none-any.whl\u001b[0m\n\u001b[31mBuilding wheels for collected packages: nltk, train\n Running setup.py bdist_wheel for nltk: started\u001b[0m\n\u001b[31m Running setup.py bdist_wheel for nltk: finished with status 'done'\n Stored in directory: /root/.cache/pip/wheels/96/86/f6/68ab24c23f207c0077381a5e3904b2815136b879538a24b483\n Running setup.py bdist_wheel for train: started\n Running setup.py bdist_wheel for train: finished with status 'done'\n Stored in directory: /tmp/pip-ephem-wheel-cache-k5jjk43l/wheels/35/24/16/37574d11bf9bde50616c67372a334f94fa8356bc7164af8ca3\u001b[0m\n\u001b[31mSuccessfully built nltk train\u001b[0m\n\u001b[31mInstalling collected packages: numpy, pytz, pandas, nltk, soupsieve, beautifulsoup4, webencodings, html5lib, train\n Found existing installation: numpy 1.15.4\u001b[0m\n\u001b[31m Uninstalling numpy-1.15.4:\n Successfully uninstalled numpy-1.15.4\u001b[0m\n\n2019-10-30 09:00:08 Training - Training image download completed. Training in progress.\u001b[31mSuccessfully installed beautifulsoup4-4.8.1 html5lib-1.0.1 nltk-3.4.5 numpy-1.17.3 pandas-0.24.2 pytz-2019.3 soupsieve-1.9.4 train-1.0.0 webencodings-0.5.1\u001b[0m\n\u001b[31mYou are using pip version 18.1, however version 19.3.1 is available.\u001b[0m\n\u001b[31mYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\n\u001b[31m2019-10-30 09:00:23,282 sagemaker-containers INFO Invoking user script\n\u001b[0m\n\u001b[31mTraining Env:\n\u001b[0m\n\u001b[31m{\n \"job_name\": \"sagemaker-pytorch-2019-10-30-08-56-36-307\",\n \"num_gpus\": 1,\n \"input_config_dir\": \"/opt/ml/input/config\",\n \"module_dir\": \"s3://sagemaker-us-east-1-797322584826/sagemaker-pytorch-2019-10-30-08-56-36-307/source/sourcedir.tar.gz\",\n \"num_cpus\": 4,\n \"current_host\": \"algo-1\",\n \"model_dir\": \"/opt/ml/model\",\n \"network_interface_name\": \"eth0\",\n \"resource_config\": {\n \"hosts\": [\n \"algo-1\"\n ],\n \"network_interface_name\": \"eth0\",\n \"current_host\": \"algo-1\"\n },\n \"log_level\": 20,\n \"output_data_dir\": \"/opt/ml/output/data\",\n \"framework_module\": \"sagemaker_pytorch_container.training:main\",\n \"additional_framework_parameters\": {},\n \"hosts\": [\n \"algo-1\"\n ],\n \"hyperparameters\": {\n \"hidden_dim\": 200,\n \"epochs\": 10\n },\n \"input_dir\": \"/opt/ml/input\",\n \"channel_input_dirs\": {\n \"training\": \"/opt/ml/input/data/training\"\n },\n \"module_name\": \"train\",\n \"output_dir\": \"/opt/ml/output\",\n \"user_entry_point\": \"train.py\",\n \"input_data_config\": {\n \"training\": {\n \"TrainingInputMode\": \"File\",\n \"RecordWrapperType\": \"None\",\n \"S3DistributionType\": \"FullyReplicated\"\n }\n },\n \"output_intermediate_dir\": \"/opt/ml/output/intermediate\"\u001b[0m\n\u001b[31m}\n\u001b[0m\n\u001b[31mEnvironment 
variables:\n\u001b[0m\n\u001b[31mPYTHONPATH=/usr/local/bin:/usr/lib/python35.zip:/usr/lib/python3.5:/usr/lib/python3.5/plat-x86_64-linux-gnu:/usr/lib/python3.5/lib-dynload:/usr/local/lib/python3.5/dist-packages:/usr/lib/python3/dist-packages\u001b[0m\n\u001b[31mSM_MODULE_NAME=train\u001b[0m\n\u001b[31mSM_HPS={\"epochs\":10,\"hidden_dim\":200}\u001b[0m\n\u001b[31mSM_RESOURCE_CONFIG={\"current_host\":\"algo-1\",\"hosts\":[\"algo-1\"],\"network_interface_name\":\"eth0\"}\u001b[0m\n\u001b[31mSM_NUM_GPUS=1\u001b[0m\n\u001b[31mSM_USER_ENTRY_POINT=train.py\u001b[0m\n\u001b[31mSM_OUTPUT_DIR=/opt/ml/output\u001b[0m\n\u001b[31mSM_OUTPUT_DATA_DIR=/opt/ml/output/data\u001b[0m\n\u001b[31mSM_INPUT_DIR=/opt/ml/input\u001b[0m\n\u001b[31mSM_FRAMEWORK_MODULE=sagemaker_pytorch_container.training:main\u001b[0m\n\u001b[31mSM_FRAMEWORK_PARAMS={}\u001b[0m\n\u001b[31mSM_INPUT_CONFIG_DIR=/opt/ml/input/config\u001b[0m\n\u001b[31mSM_LOG_LEVEL=20\u001b[0m\n\u001b[31mSM_CURRENT_HOST=algo-1\u001b[0m\n\u001b[31mSM_HOSTS=[\"algo-1\"]\u001b[0m\n\u001b[31mSM_HP_EPOCHS=10\u001b[0m\n\u001b[31mSM_HP_HIDDEN_DIM=200\u001b[0m\n\u001b[31mSM_MODULE_DIR=s3://sagemaker-us-east-1-797322584826/sagemaker-pytorch-2019-10-30-08-56-36-307/source/sourcedir.tar.gz\u001b[0m\n\u001b[31mSM_USER_ARGS=[\"--epochs\",\"10\",\"--hidden_dim\",\"200\"]\u001b[0m\n\u001b[31mSM_TRAINING_ENV={\"additional_framework_parameters\":{},\"channel_input_dirs\":{\"training\":\"/opt/ml/input/data/training\"},\"current_host\":\"algo-1\",\"framework_module\":\"sagemaker_pytorch_container.training:main\",\"hosts\":[\"algo-1\"],\"hyperparameters\":{\"epochs\":10,\"hidden_dim\":200},\"input_config_dir\":\"/opt/ml/input/config\",\"input_data_config\":{\"training\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"}},\"input_dir\":\"/opt/ml/input\",\"job_name\":\"sagemaker-pytorch-2019-10-30-08-56-36-307\",\"log_level\":20,\"model_dir\":\"/opt/ml/model\",\"module_dir\":\"s3://sagemaker-us-east-1-797322584826/sagemaker-pytorch-2019-10-30-08-56-36-307/source/sourcedir.tar.gz\",\"module_name\":\"train\",\"network_interface_name\":\"eth0\",\"num_cpus\":4,\"num_gpus\":1,\"output_data_dir\":\"/opt/ml/output/data\",\"output_dir\":\"/opt/ml/output\",\"output_intermediate_dir\":\"/opt/ml/output/intermediate\",\"resource_config\":{\"current_host\":\"algo-1\",\"hosts\":[\"algo-1\"],\"network_interface_name\":\"eth0\"},\"user_entry_point\":\"train.py\"}\u001b[0m\n\u001b[31mSM_NUM_CPUS=4\u001b[0m\n\u001b[31mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate\u001b[0m\n\u001b[31mSM_MODEL_DIR=/opt/ml/model\u001b[0m\n\u001b[31mSM_NETWORK_INTERFACE_NAME=eth0\u001b[0m\n\u001b[31mSM_CHANNELS=[\"training\"]\u001b[0m\n\u001b[31mSM_INPUT_DATA_CONFIG={\"training\":{\"RecordWrapperType\":\"None\",\"S3DistributionType\":\"FullyReplicated\",\"TrainingInputMode\":\"File\"}}\u001b[0m\n\u001b[31mSM_CHANNEL_TRAINING=/opt/ml/input/data/training\n\u001b[0m\n\u001b[31mInvoking script with the following command:\n\u001b[0m\n\u001b[31m/usr/bin/python -m train --epochs 10 --hidden_dim 200\n\n\u001b[0m\n\u001b[31mUsing device cuda.\u001b[0m\n\u001b[31mGet train data loader.\u001b[0m\n"
]
],
[
[
"## Step 5: Testing the model\n\nAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.\n\n## Step 6: Deploy the model for testing\n\nNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.\n\nThere is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.\n\n**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )\n\nSince we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.\n\n**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.\n\nIn other words **If you are no longer using a deployed endpoint, shut it down!**\n\n**TODO:** Deploy the trained model.",
"_____no_output_____"
]
],
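For reference, the `model_fn()` contract described above is small: given `model_dir`, rebuild the network and load the saved weights. A stripped-down sketch is shown below; the real implementation in `train.py` (and later in `serve/predict.py`) additionally restores the saved hyperparameters and `word_dict` rather than hard-coding them:

```python
import os
import torch

def model_fn(model_dir):
    """Minimal sketch: reconstruct the LSTMClassifier and load its weights from model_dir."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # The constructor arguments must match those used for training
    # (illustrative values here; the provided code reads them from model_info.pth).
    model = LSTMClassifier(32, 200, 5000)
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f, map_location=device))
    return model.to(device).eval()
```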
[
[
"# TODO: Deploy the trained model\n\npredictor = estimator.deploy(initial_instance_count = 1, instance_type = 'ml.m4.xlarge')",
"---------------------------------------------------------------------------------------------------!"
]
],
[
[
"## Step 7 - Use the model for testing\n\nOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.",
"_____no_output_____"
]
],
[
[
"test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)",
"_____no_output_____"
],
[
"# We split the data into chunks and send each chunk seperately, accumulating the results.\ndef predict(data, rows=512):\n split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))\n predictions = np.array([])\n for array in split_array:\n predictions = np.append(predictions, predictor.predict(array))\n \n return predictions",
"_____no_output_____"
],
[
"predictions = predict(test_X.values)\npredictions = [round(num) for num in predictions]",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"accuracy_score(test_y, predictions)",
"_____no_output_____"
]
],
[
[
"**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?",
"_____no_output_____"
],
[
"**Answer:** The performance of this RNN model is comparable to the XGBoost model. However, this RNN model is is relatively very simple and it's not tuned yet. I am anticipating a better performance once tuning takes place or a more complex model is built for the RNN. For XGBoost, there is no more room for improvement as we already did the tuning for the very limited set of parameters. Therefore, I would recommend continuing with the RNN model for R&D.",
"_____no_output_____"
],
[
"### (TODO) More testing\n\nWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.",
"_____no_output_____"
]
],
[
[
"test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'",
"_____no_output_____"
]
],
[
[
"The question we now need to answer is, how do we send this review to our model?\n\nRecall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.\n - Removed any html tags and stemmed the input\n - Encoded the review as a sequence of integers using `word_dict`\n \nIn order process the review we will need to repeat these two steps.\n\n**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.",
"_____no_output_____"
]
],
[
[
"# TODO: Convert test_review into a form usable by the model and save the results in test_data\ntest_data = None",
"_____no_output_____"
],
[
"test_words = review_to_words(test_review)",
"_____no_output_____"
],
[
"test_numerical, length = convert_and_pad(word_dict, test_words)",
"_____no_output_____"
],
[
"test_data = np.array([[length] + test_numerical])",
"_____no_output_____"
]
],
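The conversion cells above can be folded into a single helper, which will also be convenient whenever we want to send another raw review through the same preprocessing (a small convenience wrapper, not something the project requires):

```python
def process_review(review, word_dict=word_dict, pad=500):
    """Raw review string -> array of shape (1, pad + 1) holding [length, encoded review]."""
    words = review_to_words(review)
    encoded, length = convert_and_pad(word_dict, words, pad)
    return np.array([[length] + encoded])

# Should reproduce exactly the test_data built above.
assert (process_review(test_review) == test_data).all()
```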
[
[
"Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.",
"_____no_output_____"
]
],
[
[
"predictor.predict(test_data)",
"_____no_output_____"
]
],
[
[
"Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.",
"_____no_output_____"
],
[
"### Delete the endpoint\n\nOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.",
"_____no_output_____"
]
],
[
[
"estimator.delete_endpoint()",
"_____no_output_____"
]
],
[
[
"## Step 6 (again) - Deploy the model for the web app\n\nNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.\n\nAs we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.\n\nWe will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.\n\nWhen deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.\n - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.\n - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.\n - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.\n - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.\n\nFor the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.\n\n### (TODO) Writing inference code\n\nBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.",
"_____no_output_____"
]
],
[
[
"!pygmentize serve/predict.py",
"\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36margparse\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mjson\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mos\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mpickle\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36msys\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36msagemaker_containers\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mpandas\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mpd\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mnumpy\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnp\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch.nn\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnn\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch.optim\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36moptim\u001b[39;49;00m\r\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch.utils.data\u001b[39;49;00m\r\n\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mmodel\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m LSTMClassifier\r\n\r\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mutils\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m review_to_words, convert_and_pad\r\n\r\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmodel_fn\u001b[39;49;00m(model_dir):\r\n \u001b[33m\"\"\"Load the PyTorch model from the `model_dir` directory.\"\"\"\u001b[39;49;00m\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mLoading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n\r\n \u001b[37m# First, load the parameters used to create the model.\u001b[39;49;00m\r\n model_info = {}\r\n model_info_path = os.path.join(model_dir, \u001b[33m'\u001b[39;49;00m\u001b[33mmodel_info.pth\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(model_info_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) \u001b[34mas\u001b[39;49;00m f:\r\n model_info = torch.load(f)\r\n\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mmodel_info: {}\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m.format(model_info))\r\n\r\n \u001b[37m# Determine the device and construct the model.\u001b[39;49;00m\r\n device = torch.device(\u001b[33m\"\u001b[39;49;00m\u001b[33mcuda\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[34mif\u001b[39;49;00m torch.cuda.is_available() \u001b[34melse\u001b[39;49;00m \u001b[33m\"\u001b[39;49;00m\u001b[33mcpu\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n model = LSTMClassifier(model_info[\u001b[33m'\u001b[39;49;00m\u001b[33membedding_dim\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m], model_info[\u001b[33m'\u001b[39;49;00m\u001b[33mhidden_dim\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m], model_info[\u001b[33m'\u001b[39;49;00m\u001b[33mvocab_size\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m])\r\n\r\n \u001b[37m# Load the store model parameters.\u001b[39;49;00m\r\n model_path = os.path.join(model_dir, \u001b[33m'\u001b[39;49;00m\u001b[33mmodel.pth\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(model_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) 
\u001b[34mas\u001b[39;49;00m f:\r\n model.load_state_dict(torch.load(f))\r\n\r\n \u001b[37m# Load the saved word_dict.\u001b[39;49;00m\r\n word_dict_path = os.path.join(model_dir, \u001b[33m'\u001b[39;49;00m\u001b[33mword_dict.pkl\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(word_dict_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) \u001b[34mas\u001b[39;49;00m f:\r\n model.word_dict = pickle.load(f)\r\n\r\n model.to(device).eval()\r\n\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mDone loading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n \u001b[34mreturn\u001b[39;49;00m model\r\n\r\n\u001b[34mdef\u001b[39;49;00m \u001b[32minput_fn\u001b[39;49;00m(serialized_input_data, content_type):\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mDeserializing the input data.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mif\u001b[39;49;00m content_type == \u001b[33m'\u001b[39;49;00m\u001b[33mtext/plain\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m:\r\n data = serialized_input_data.decode(\u001b[33m'\u001b[39;49;00m\u001b[33mutf-8\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mreturn\u001b[39;49;00m data\r\n \u001b[34mraise\u001b[39;49;00m \u001b[36mException\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mRequested unsupported ContentType in content_type: \u001b[39;49;00m\u001b[33m'\u001b[39;49;00m + content_type)\r\n\r\n\u001b[34mdef\u001b[39;49;00m \u001b[32moutput_fn\u001b[39;49;00m(prediction_output, accept):\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mSerializing the generated output.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \u001b[34mreturn\u001b[39;49;00m \u001b[36mstr\u001b[39;49;00m(prediction_output)\r\n\r\n\u001b[34mdef\u001b[39;49;00m \u001b[32mpredict_fn\u001b[39;49;00m(input_data, model):\r\n \u001b[34mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mInferring sentiment of input data.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n\r\n device = torch.device(\u001b[33m\"\u001b[39;49;00m\u001b[33mcuda\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[34mif\u001b[39;49;00m torch.cuda.is_available() \u001b[34melse\u001b[39;49;00m \u001b[33m\"\u001b[39;49;00m\u001b[33mcpu\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\r\n \r\n \u001b[34mif\u001b[39;49;00m model.word_dict \u001b[35mis\u001b[39;49;00m \u001b[36mNone\u001b[39;49;00m:\r\n \u001b[34mraise\u001b[39;49;00m \u001b[36mException\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mModel has not been loaded properly, no word_dict.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\r\n \r\n \u001b[37m# TODO: Process input_data so that it is ready to be sent to our model.\u001b[39;49;00m\r\n \u001b[37m# You should produce two variables:\u001b[39;49;00m\r\n \u001b[37m# data_X - A sequence of length 500 which represents the converted review\u001b[39;49;00m\r\n \u001b[37m# data_len - The length of the review\u001b[39;49;00m\r\n\r\n data_X = \u001b[36mNone\u001b[39;49;00m\r\n data_len = \u001b[36mNone\u001b[39;49;00m\r\n\r\n \u001b[37m# Using data_X and data_len we construct an appropriate input tensor. 
Remember\u001b[39;49;00m\r\n \u001b[37m# that our model expects input data of the form 'len, review[500]'.\u001b[39;49;00m\r\n data_pack = np.hstack((data_len, data_X))\r\n data_pack = data_pack.reshape(\u001b[34m1\u001b[39;49;00m, -\u001b[34m1\u001b[39;49;00m)\r\n \r\n data = torch.from_numpy(data_pack)\r\n data = data.to(device)\r\n\r\n \u001b[37m# Make sure to put the model into evaluation mode\u001b[39;49;00m\r\n model.eval()\r\n\r\n \u001b[37m# TODO: Compute the result of applying the model to the input data. The variable `result` should\u001b[39;49;00m\r\n \u001b[37m# be a numpy array which contains a single integer which is either 1 or 0\u001b[39;49;00m\r\n\r\n result = \u001b[36mNone\u001b[39;49;00m\r\n\r\n \u001b[34mreturn\u001b[39;49;00m result\r\n"
]
],
[
[
"As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.\n\n**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.",
"_____no_output_____"
],
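[
"One possible way to fill in the `predict_fn` TODOs is sketched below. It is only a sketch and assumes that the `review_to_words` and `convert_and_pad` helpers behave as they do in the training code, i.e. that `convert_and_pad` returns the padded length-500 integer sequence together with the original review length:\n\n```python\n# Sketch of the TODO sections in serve/predict.py\ndata_X, data_len = convert_and_pad(model.word_dict, review_to_words(input_data))\n\n# ... build data_pack / data exactly as in the template above ...\n\nwith torch.no_grad():\n    output = model(data)\n\n# The model outputs a probability; round it to a 0/1 sentiment label\nresult = np.round(output.cpu().numpy()).astype(int)\n```",
"_____no_output_____"
],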
[
"### Deploying the model\n\nNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.\n\n**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accomodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to sent image data.",
"_____no_output_____"
]
],
[
[
"from sagemaker.predictor import RealTimePredictor\nfrom sagemaker.pytorch import PyTorchModel",
"_____no_output_____"
],
[
"class StringPredictor(RealTimePredictor):\n def __init__(self, endpoint_name, sagemaker_session):\n super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')\n\nmodel = PyTorchModel(model_data=estimator.model_data,\n role = role,\n framework_version='0.4.0',\n entry_point='predict.py',\n source_dir='serve',\n predictor_cls=StringPredictor)\npredictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')",
"---------------------------------------------------------------------------------------------------!"
]
],
[
[
"### Testing the model\n\nNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.",
"_____no_output_____"
]
],
[
[
"import glob",
"_____no_output_____"
],
[
"def test_reviews(data_dir='../data/aclImdb', stop=250):\n \n results = []\n ground = []\n \n # We make sure to test both positive and negative reviews \n for sentiment in ['pos', 'neg']:\n \n path = os.path.join(data_dir, 'test', sentiment, '*.txt')\n files = glob.glob(path)\n \n files_read = 0\n \n print('Starting ', sentiment, ' files')\n \n # Iterate through the files and send them to the predictor\n for f in files:\n with open(f) as review:\n # First, we store the ground truth (was the review positive or negative)\n if sentiment == 'pos':\n ground.append(1)\n else:\n ground.append(0)\n # Read in the review and convert to 'utf-8' for transmission via HTTP\n review_input = review.read().encode('utf-8')\n # Send the review to the predictor and store the results\n results.append(int(predictor.predict(review_input)))\n \n # Sending reviews to our endpoint one at a time takes a while so we\n # only send a small number of reviews\n files_read += 1\n if files_read == stop:\n break\n \n return ground, results",
"_____no_output_____"
],
[
"ground, results = test_reviews()",
"Starting pos files\nStarting neg files\n"
],
[
"from sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"accuracy_score(ground, results)",
"_____no_output_____"
]
],
[
[
"As an additional test, we can try sending the `test_review` that we looked at earlier.",
"_____no_output_____"
]
],
[
[
"predictor.predict(test_review)",
"_____no_output_____"
]
],
[
[
"Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.",
"_____no_output_____"
],
[
"## Step 7 (again): Use the model for the web app\n\n> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.\n\nSo far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.\n\n<img src=\"Web App Diagram.svg\">\n\nThe diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.\n\nIn the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.\n\nLastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.",
"_____no_output_____"
],
[
"### Setting up a Lambda function\n\nThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.",
"_____no_output_____"
],
[
"#### Part A: Create an IAM Role for the Lambda function\n\nSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.\n\nUsing the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.\n\nIn the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.\n\nLastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.",
"_____no_output_____"
],
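[
"If you prefer to script Part A instead of clicking through the console, a minimal `boto3` sketch of the same steps is shown below. It assumes your AWS credentials are allowed to create IAM roles and reuses the example role name `LambdaSageMakerRole` from above:\n\n```python\nimport json\nimport boto3\n\niam = boto3.client('iam')\n\n# Trust policy that lets the Lambda service assume this role\nassume_role_policy = {\n    'Version': '2012-10-17',\n    'Statement': [{\n        'Effect': 'Allow',\n        'Principal': {'Service': 'lambda.amazonaws.com'},\n        'Action': 'sts:AssumeRole'\n    }]\n}\n\niam.create_role(\n    RoleName='LambdaSageMakerRole',\n    AssumeRolePolicyDocument=json.dumps(assume_role_policy)\n)\n\n# Attach the same policy that the console instructions select\niam.attach_role_policy(\n    RoleName='LambdaSageMakerRole',\n    PolicyArn='arn:aws:iam::aws:policy/AmazonSageMakerFullAccess'\n)\n```",
"_____no_output_____"
],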
[
"#### Part B: Create a Lambda function\n\nNow it is time to actually create the Lambda function.\n\nUsing the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.\n\nOn the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below. \n\n```python\n# We need to use the low-level library to interact with SageMaker since the SageMaker API\n# is not available natively through Lambda.\nimport boto3\n\ndef lambda_handler(event, context):\n\n # The SageMaker runtime is what allows us to invoke the endpoint that we've created.\n runtime = boto3.Session().client('sagemaker-runtime')\n\n # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given\n response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created\n ContentType = 'text/plain', # The data format that is expected\n Body = event['body']) # The actual review\n\n # The response is an HTTP response whose body contains the result of our inference\n result = response['Body'].read().decode('utf-8')\n\n return {\n 'statusCode' : 200,\n 'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },\n 'body' : result\n }\n```\n\nOnce you have copy and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.",
"_____no_output_____"
]
],
[
[
"predictor.endpoint",
"_____no_output_____"
]
],
[
[
"Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.\n\n### Setting up API Gateway\n\nNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.\n\nUsing AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.\n\nOn the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.\n\nNow we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.\n\nSelect the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.\n\nFor the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.\n\nType the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.\n\nThe last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.\n\nYou have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.",
"_____no_output_____"
],
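[
"Before wiring the URL into the web page, you can sanity-check the new public API directly from Python. The snippet below is only an illustration: the URL is a placeholder that you must replace with your own **Invoke URL**, and it simply POSTs a plain-text review and prints whatever the Lambda function returns:\n\n```python\nimport requests\n\n# Placeholder: replace with the Invoke URL copied from API Gateway\napi_url = 'https://your-api-id.execute-api.us-east-1.amazonaws.com/prod'\n\n# POST a plain-text review; API Gateway forwards it to the Lambda function,\n# which in turn invokes the SageMaker endpoint\nresponse = requests.post(api_url, data='This movie was absolutely wonderful!'.encode('utf-8'))\nprint(response.text)  # should contain 1 (positive) or 0 (negative)\n```",
"_____no_output_____"
],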
[
"## Step 4: Deploying our web app\n\nNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.\n\nIn the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\\*\\*REPLACE WITH PUBLIC API URL\\*\\***. Replace this string with the url that you wrote down in the last step and then save the file.\n\nNow, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.\n\nIf you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!\n\n> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.\n\n**TODO:** Make sure that you include the edited `index.html` file in your project submission.",
"_____no_output_____"
],
[
"Now that your web app is working, trying playing around with it and see how well it works.\n\n**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?",
"_____no_output_____"
],
[
"**Answer:**\nA simple positive review has been tested on my local machine.\nThe call was successfully sent to the Gateway API and invoked the service.\nThe response is POSITIVE as expected.\n\n",
"_____no_output_____"
],
[
"### Delete the endpoint\n\nRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.",
"_____no_output_____"
]
],
[
[
"predictor.delete_endpoint()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0bd74ebef38ed493db30e5b4919f84f5f7725d5 | 120,688 | ipynb | Jupyter Notebook | csv_format.ipynb | arvkr/retinanet-detection | 21213bc829b5c922cb0fee5882c7e73036a49ecd | [
"Apache-2.0"
] | null | null | null | csv_format.ipynb | arvkr/retinanet-detection | 21213bc829b5c922cb0fee5882c7e73036a49ecd | [
"Apache-2.0"
] | null | null | null | csv_format.ipynb | arvkr/retinanet-detection | 21213bc829b5c922cb0fee5882c7e73036a49ecd | [
"Apache-2.0"
] | null | null | null | 28.673794 | 1,091 | 0.371628 | [
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"ann_orig = pd.read_csv('./orig_anno.csv')",
"_____no_output_____"
],
[
"# ./dataset/clean_resized/0.jpg,125,165,209,201,neck\nann_dict = [{'file_name':'./dataset/clean_resized/0.jpg','x1':125,'y1':165,'x2':209,'y2':201,'body_part':'neck'}]\nann_new = pd.DataFrame(ann_dict)",
"_____no_output_____"
],
[
"ann_orig",
"_____no_output_____"
],
[
"ann_orig.iloc[0,0]",
"_____no_output_____"
],
[
"import ast",
"_____no_output_____"
],
[
"ast.literal_eval(ann_orig.iloc[0,6])['body_part']",
"_____no_output_____"
],
[
"import pandas as pd\nimport ast\n\ndef change_format(left_x,top_y,width,height):\n anno = {}\n anno['x1'] = int(left_x)\n anno['y1'] = int(top_y)\n anno['x2'] = int(left_x + width)\n anno['y2'] = int(top_y + height)\n\n return anno\n\norig_ann = pd.read_csv('./orig_anno4.csv')\nnew_ann_list = []\nfor i in range(len(orig_ann)):\n ann_dict = {}\n print(i)\n print(ast.literal_eval(orig_ann.iloc[i,6])['body_part'])\n if ast.literal_eval(orig_ann.iloc[i,6])['body_part'] == 'neck' or ast.literal_eval(orig_ann.iloc[i,6])['body_part'] == 'stomach':\n ann_dict['file_name'] = './dataset/clean_resized/' + orig_ann.iloc[i,0]\n lx = ast.literal_eval(orig_ann.iloc[i,5])['x']\n ty = ast.literal_eval(orig_ann.iloc[i,5])['y']\n w = ast.literal_eval(orig_ann.iloc[i,5])['width']\n h = ast.literal_eval(orig_ann.iloc[i,5])['height']\n anno = change_format(lx,ty,w,h)\n ann_dict['x1'] = anno['x1']\n ann_dict['y1'] = anno['y1']\n ann_dict['x2'] = anno['x2']\n ann_dict['y2'] = anno['y2']\n ann_dict['body_part'] = ast.literal_eval(orig_ann.iloc[i,6])['body_part']\n new_ann_list.append(ann_dict)",
"0\nneck\n1\nstomach\n2\nhip\n3\nstomach\n4\nneck\n5\nhip\n6\nneck\n7\nneck\n8\nstomach\n9\nhip\n10\nneck\n11\nstomach\n12\nneck\n13\nstomach\n14\nneck\n15\nstomach\n16\nhip\n17\nneck\n18\nstomach\n19\nhip\n20\nneck\n21\nstomach\n22\nhip\n23\nneck\n24\nstomach\n25\nneck\n26\nstomach\n27\nhip\n28\nneck\n29\nstomach\n30\nhip\n31\nneck\n32\nstomach\n33\nhip\n34\nneck\n35\nstomach\n36\nhip\n37\nneck\n38\nstomach\n39\nhip\n40\nstomach\n41\nneck\n42\nneck\n43\nstomach\n44\nstomach\n45\nneck\n46\nstomach\n47\nhip\n48\nneck\n49\nstomach\n50\nneck\n51\nstomach\n52\nhip\n53\nneck\n54\nstomach\n55\nhip\n56\nneck\n57\nstomach\n58\nhip\n59\nneck\n60\nstomach\n61\nneck\n62\nstomach\n63\nhip\n64\nneck\n65\nstomach\n66\nhip\n67\nneck\n68\nstomach\n69\nneck\n70\nstomach\n71\nhip\n72\nneck\n73\nstomach\n74\nhip\n75\nstomach\n76\nneck\n77\nstomach\n78\nneck\n79\nstomach\n80\nneck\n81\nstomach\n82\nneck\n83\nstomach\n84\nhip\n85\nstomach\n86\nhip\n87\nstomach\n88\nhip\n89\nneck\n90\nstomach\n91\nneck\n92\nstomach\n93\nhip\n94\nneck\n95\nstomach\n96\nstomach\n97\nneck\n98\nstomach\n99\nneck\n100\nstomach\n101\nhip\n102\nneck\n103\nstomach\n104\nstomach\n105\nneck\n106\nstomach\n107\nneck\n108\nhip\n109\nneck\n110\nstomach\n111\nneck\n112\nstomach\n113\nhip\n114\nneck\n115\nstomach\n116\nhip\n117\nneck\n118\nstomach\n119\nhip\n120\nneck\n121\nstomach\n122\nhip\n123\nstomach\n124\nneck\n125\nhip\n126\nneck\n127\nstomach\n128\nhip\n129\nneck\n130\nstomach\n131\nhip\n132\nneck\n133\nstomach\n134\nhip\n135\nneck\n136\nstomach\n137\nstomach\n138\nhip\n139\nneck\n140\nneck\n141\nneck\n142\nhip\n143\nneck\n144\nstomach\n145\nhip\n146\nneck\n147\nstomach\n148\nhip\n149\nneck\n150\nstomach\n151\nhip\n152\nneck\n153\nstomach\n154\nhip\n155\nneck\n156\nstomach\n157\nneck\n158\nstomach\n159\nhip\n160\nneck\n161\nstomach\n162\nneck\n163\nstomach\n164\nhip\n165\nneck\n166\nstomach\n167\nhip\n168\nneck\n169\nhip\n170\nhip\n171\nneck\n172\nstomach\n173\nhip\n174\nneck\n175\nstomach\n176\nstomach\n177\nstomach\n178\nstomach\n179\nstomach\n180\nstomach\n181\nhip\n182\nhip\n183\nstomach\n184\nneck\n185\nhip\n186\nstomach\n187\nneck\n188\nhip\n189\nstomach\n190\nhip\n191\nneck\n192\nstomach\n193\nhip\n194\nneck\n195\nstomach\n196\nhip\n197\nstomach\n198\nhip\n199\nneck\n200\nneck\n201\nstomach\n202\nhip\n203\nstomach\n204\nhip\n205\nneck\n206\nneck\n207\nstomach\n208\nneck\n209\nstomach\n210\nneck\n211\nstomach\n212\nneck\n213\nstomach\n214\nneck\n215\nstomach\n216\nneck\n217\nstomach\n218\nneck\n219\nstomach\n220\nneck\n221\nstomach\n222\nneck\n223\nstomach\n224\nneck\n225\nstomach\n226\nneck\n227\nstomach\n228\nneck\n229\nstomach\n230\nneck\n231\nstomach\n232\nneck\n233\nstomach\n234\nneck\n235\nstomach\n236\nneck\n237\nstomach\n238\nneck\n239\nstomach\n240\nneck\n241\nstomach\n242\nneck\n243\nstomach\n244\nneck\n245\nstomach\n246\nneck\n247\nstomach\n248\nneck\n249\nstomach\n250\nneck\n251\nstomach\n252\nstomach\n253\nneck\n254\nneck\n255\nneck\n256\nneck\n257\nstomach\n258\nneck\n259\nstomach\n260\nneck\n261\nneck\n262\nneck\n263\nstomach\n264\nneck\n265\nstomach\n266\nstomach\n267\nneck\n268\nneck\n269\nstomach\n270\nneck\n271\nstomach\n272\nneck\n273\nstomach\n274\nneck\n275\nstomach\n276\nneck\n277\nstomach\n278\nneck\n279\nstomach\n280\nneck\n281\nstomach\n282\nstomach\n283\nneck\n284\nneck\n285\nneck\n286\nneck\n287\nstomach\n288\nstomach\n289\nstomach\n290\nneck\n291\nstomach\n292\nneck\n293\nstomach\n294\nneck\n295\nneck\n296\nstomach\n297\nneck\n298\nstomach\n299\nneck\n300\nstomach\n301\nneck\n302\nstomach\n303\nneck\n
304\nstomach\n305\nneck\n306\nstomach\n307\nneck\n308\nstomach\n309\nneck\n310\nstomach\n311\nneck\n312\nneck\n313\nstomach\n314\nneck\n315\nstomach\n316\nneck\n317\nstomach\n318\nneck\n319\nstomach\n320\nstomach\n321\nneck\n322\nstomach\n323\nneck\n324\nstomach\n325\nneck\n326\nstomach\n327\nneck\n328\nneck\n329\nneck\n330\nstomach\n331\nstomach\n332\nstomach\n333\nneck\n334\nstomach\n335\nneck\n336\nstomach\n337\nstomach\n338\nneck\n339\nstomach\n340\nstomach\n341\nstomach\n342\nstomach\n343\nneck\n344\nneck\n345\nneck\n346\nneck\n347\nneck\n348\nneck\n349\nneck\n350\nneck\n351\nneck\n352\nneck\n353\nstomach\n354\nneck\n355\nstomach\n356\nneck\n357\nstomach\n358\nstomach\n359\nneck\n360\nneck\n361\nstomach\n362\nstomach\n363\nneck\n364\nneck\n365\nstomach\n366\nneck\n367\nstomach\n368\nneck\n369\nstomach\n370\nneck\n371\nstomach\n372\nneck\n373\nstomach\n374\nneck\n375\nstomach\n376\nneck\n377\nneck\n378\nstomach\n379\nstomach\n380\nneck\n381\nstomach\n382\nstomach\n383\nneck\n384\nneck\n385\nstomach\n386\nneck\n387\nstomach\n388\nneck\n389\nstomach\n390\nstomach\n391\nneck\n392\nstomach\n393\nneck\n394\nstomach\n395\nstomach\n396\nstomach\n397\nneck\n398\nneck\n399\nneck\n400\nneck\n401\nneck\n402\nneck\n403\nstomach\n404\nstomach\n405\nstomach\n406\nstomach\n407\nstomach\n408\nneck\n409\nstomach\n410\nneck\n411\nstomach\n412\nneck\n413\nstomach\n414\nneck\n415\nstomach\n416\nneck\n417\nstomach\n418\nstomach\n419\nneck\n420\nstomach\n421\nstomach\n422\nstomach\n423\nneck\n424\nneck\n425\nstomach\n426\nstomach\n427\nneck\n428\nstomach\n429\nstomach\n430\nneck\n431\nneck\n432\nstomach\n433\nneck\n434\nstomach\n435\nneck\n436\nneck\n437\nstomach\n438\nstomach\n439\nstomach\n440\nneck\n441\nneck\n442\nstomach\n443\nneck\n444\nstomach\n445\nneck\n446\nstomach\n447\nneck\n448\nstomach\n449\nneck\n450\nstomach\n451\nneck\n452\nneck\n453\nstomach\n454\nstomach\n455\nneck\n456\nstomach\n457\nstomach\n458\nneck\n459\nstomach\n460\nneck\n461\nneck\n462\nstomach\n463\nneck\n464\nneck\n465\nstomach\n466\nneck\n467\nstomach\n468\nneck\n469\nstomach\n470\nneck\n471\nstomach\n472\nneck\n473\nstomach\n474\nneck\n475\nneck\n476\nstomach\n477\nneck\n478\nstomach\n479\nneck\n480\nstomach\n481\nneck\n482\nstomach\n483\nneck\n484\nstomach\n485\nneck\n486\nneck\n487\nstomach\n488\nstomach\n489\nneck\n490\nstomach\n491\nneck\n492\nstomach\n493\nneck\n494\nstomach\n495\nneck\n496\nstomach\n497\nneck\n498\nstomach\n499\nneck\n500\nstomach\n501\nneck\n502\nstomach\n503\nstomach\n504\nneck\n505\nneck\n506\nstomach\n507\nneck\n508\nstomach\n509\nneck\n510\nstomach\n511\nneck\n512\nstomach\n513\nneck\n514\nstomach\n515\nneck\n516\nstomach\n517\nneck\n518\nstomach\n519\nneck\n520\nstomach\n521\nneck\n522\nneck\n523\nstomach\n524\nstomach\n525\nstomach\n526\nneck\n527\nneck\n528\nstomach\n529\nneck\n530\nstomach\n531\nneck\n532\nstomach\n533\nneck\n534\nstomach\n535\nneck\n536\nstomach\n537\nneck\n538\nstomach\n539\nneck\n540\nstomach\n541\nneck\n542\nstomach\n543\nneck\n544\nstomach\n545\nneck\n546\nstomach\n547\nneck\n548\nstomach\n549\nneck\n550\nstomach\n551\nneck\n552\nstomach\n553\nneck\n554\nstomach\n555\nneck\n556\nstomach\n557\nneck\n558\nstomach\n559\nneck\n560\nstomach\n561\nneck\n562\nstomach\n563\nneck\n564\nstomach\n565\nneck\n566\nstomach\n567\nneck\n568\nstomach\n569\nneck\n570\nstomach\n"
],
[
"ast.literal_eval(orig_ann.iloc[486,:])",
"_____no_output_____"
],
[
"orig_ann.iloc[489,:]",
"_____no_output_____"
],
[
"new_ann = pd.DataFrame(new_ann_list)\nnew_ann.to_csv('csv_annotations3.csv', index = False)",
"_____no_output_____"
],
[
"len(orig_ann)",
"_____no_output_____"
],
[
"len(new_ann_list)",
"_____no_output_____"
],
[
"new_ann.to_csv('csv_annotations3.csv', index = False)",
"_____no_output_____"
],
[
"orig_ann.iloc[1,5]",
"_____no_output_____"
],
[
"from sklearn.utils import shuffle",
"_____no_output_____"
],
[
"shuffle(new_ann_list, random_state = 1)",
"_____no_output_____"
],
[
"new_ann_list",
"_____no_output_____"
],
[
"from dataloader import CocoDataset, CSVDataset, collater, Resizer, AspectRatioBasedSampler, Augmenter, UnNormalizer, Normalizer\nfrom torch.utils.data import Dataset, DataLoader\ndataset_train = CSVDataset(train_file='./csv_annotations.csv', class_list='./classes.csv', transform=transforms.Compose([Normalizer(), Augmenter(), Resizer()]))\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bd750c8988f4641fc160b1c658db564a362dfb | 3,943 | ipynb | Jupyter Notebook | notebook/DECam draw2d.ipynb | jmeyers314/jtrace | 9149a5af766fb9a9cd7ebfe6f3f18de0eb8b2e89 | [
"BSD-2-Clause"
] | 13 | 2018-12-24T03:55:04.000Z | 2021-11-09T11:40:40.000Z | notebook/DECam draw2d.ipynb | bregeon/batoid | 7b03d9b59ff43db6746eadab7dd58a463a0415c3 | [
"BSD-2-Clause"
] | 65 | 2017-08-15T07:19:05.000Z | 2021-09-08T17:44:57.000Z | notebook/DECam draw2d.ipynb | bregeon/batoid | 7b03d9b59ff43db6746eadab7dd58a463a0415c3 | [
"BSD-2-Clause"
] | 10 | 2019-02-19T07:02:31.000Z | 2021-12-10T22:19:40.000Z | 30.099237 | 109 | 0.549835 | [
[
[
"import batoid\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"telescope = batoid.Optic.fromYaml(\"DECam.yaml\")\n\nfig, ax = plt.subplots(figsize=(8, 16))\n\ntelescope.draw2d(\n ax, only=batoid.Lens, fc='c', alpha=0.2, \n# labelpos=1.62, fontdict=dict(fontsize=18, weight='bold', color='c')\n)\ntelescope.draw2d(ax, only=batoid.Detector, c='b', lw=2)\ntelescope.draw2d(ax, only=batoid.Baffle, c='r', lw=1, ls=':',\n)\ntelescope.draw2d(ax, only=batoid.Mirror, c='b', lw=2)\n\n# Fill the (x, z) plane with rays entering the pupil.\nz_pupil = telescope.backDist\nr_pupil = 0.5 * telescope.pupilSize\nx_pupil = np.linspace(-r_pupil, r_pupil, 22)\n\n# Trace and draw 500nm rays from 5 angles covering the field of view.\nwlen = 500e-9\nfov = np.deg2rad(2.2)\nthetas = np.linspace(-0.5 * fov, +0.5 * fov, 5)\nfor theta in thetas:\n rays = batoid.RayVector(x_pupil, 0, z_pupil, np.sin(theta), 0., -np.cos(theta), wavelength=wlen)\n traceFull = telescope.traceFull(rays)\n batoid.drawTrace2d(ax, traceFull, c='k', lw=1, alpha=0.3)\n\n# ax.set_xlim(-0.6, 0.7)\nax.set_xlim(-2.1, 2.1)\nax.set_ylim(0.0, 12.0)\nax.set_aspect(1.0)\nax.axis('off')\nplt.show()",
"_____no_output_____"
],
[
"telescope = batoid.Optic.fromYaml(\"DECam.yaml\")\n\nfig, ax = plt.subplots(figsize=(8, 16))\n\ntelescope.draw2d(\n ax, only=batoid.Lens, fc='c', alpha=0.2, \n labelpos=0.62, fontdict=dict(fontsize=18, weight='bold', color='c')\n)\ntelescope.draw2d(ax, only=batoid.Detector, c='b', lw=2)\ntelescope.draw2d(ax, only=batoid.Baffle, c='r', lw=1, ls=':',\n# labelpos=0.62, fontdict=dict(fontsize=18, weight='bold', color='r')\n)\ntelescope.draw2d(ax, only=batoid.Mirror, c='b', lw=2)\n\n# Fill the (x, z) plane with rays entering the pupil.\nz_pupil = telescope.backDist\nr_pupil = 0.5 * telescope.pupilSize\nx_pupil = np.linspace(-r_pupil, r_pupil, 12)\n\n# Trace and draw 500nm rays from 5 angles covering the field of view.\nwlen = 500e-9\nfov = np.deg2rad(2.2)\nthetas = np.linspace(-0.5 * fov, +0.5 * fov, 5)\nfor theta in thetas:\n rays = batoid.RayVector(x_pupil, 0, z_pupil, np.sin(theta), 0., -np.cos(theta), wavelength=wlen)\n traceFull = telescope.traceFull(rays)\n batoid.drawTrace2d(ax, traceFull, c='k', lw=1, alpha=0.3)\n\nax.set_xlim(-0.6, 0.7)\n# ax.set_xlim(-2.2, 2.2)\nax.set_ylim(8.5, 10.9)\nax.set_aspect(1.0)\nax.axis('off')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0bd76fc1aa6887f69c262bed37dfb5c9d25f51a | 161,836 | ipynb | Jupyter Notebook | 3.Horses_vs_humans_using_Transfer_Learning.ipynb | Abhishek-Gargha-Maheshwarappa/Deeplearning-Basic- | fef76d0ec8e49bad0cb6f590b834b5d371252df5 | [
"MIT"
] | null | null | null | 3.Horses_vs_humans_using_Transfer_Learning.ipynb | Abhishek-Gargha-Maheshwarappa/Deeplearning-Basic- | fef76d0ec8e49bad0cb6f590b834b5d371252df5 | [
"MIT"
] | null | null | null | 3.Horses_vs_humans_using_Transfer_Learning.ipynb | Abhishek-Gargha-Maheshwarappa/Deeplearning-Basic- | fef76d0ec8e49bad0cb6f590b834b5d371252df5 | [
"MIT"
] | null | null | null | 100.958203 | 21,064 | 0.684007 | [
[
[
"# **Tansfer Learning for Classification of Horses and Humans**\n\n## **Abstract**\n\nAim of the notebook is to demonstrate the use of the transfer learning for improving the model accuracy for real-world images.",
"_____no_output_____"
]
],
[
[
"import os\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import Model\nfrom os import getcwd",
"_____no_output_____"
],
[
"path_inception = f\"{getcwd()}/../tmp2/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5\"\n\n\nfrom tensorflow.keras.applications.inception_v3 import InceptionV3\n\nlocal_weights_file = path_inception\n\npre_trained_model = InceptionV3(input_shape = (150, 150, 3), \n include_top = False, \n weights = None)\n\npre_trained_model.load_weights(local_weights_file)\n\nfor layer in pre_trained_model.layers:\n layer.trainable = False\n\n \n# Print the model summary\npre_trained_model.summary()\n\n",
"Model: \"inception_v3\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 150, 150, 3) 0 \n__________________________________________________________________________________________________\nconv2d (Conv2D) (None, 74, 74, 32) 864 input_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 74, 74, 32) 96 conv2d[0][0] \n__________________________________________________________________________________________________\nactivation (Activation) (None, 74, 74, 32) 0 batch_normalization[0][0] \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 72, 72, 32) 9216 activation[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 72, 72, 32) 96 conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 72, 72, 32) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 72, 72, 64) 18432 activation_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 72, 72, 64) 192 conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 72, 72, 64) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 35, 35, 64) 0 activation_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 35, 35, 80) 5120 max_pooling2d[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 35, 35, 80) 240 conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 35, 35, 80) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 33, 33, 192) 138240 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 33, 33, 192) 576 conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 33, 33, 192) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 16, 16, 192) 0 activation_4[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 16, 16, 64) 12288 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 16, 16, 64) 192 conv2d_8[0][0] 
\n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 16, 16, 64) 0 batch_normalization_8[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 16, 16, 48) 9216 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 16, 16, 96) 55296 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 16, 16, 48) 144 conv2d_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 16, 16, 96) 288 conv2d_9[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 16, 16, 48) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 16, 16, 96) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d (AveragePooli (None, 16, 16, 192) 0 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 16, 16, 64) 12288 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 16, 16, 64) 76800 activation_6[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 16, 16, 96) 82944 activation_9[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 16, 16, 32) 6144 average_pooling2d[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 16, 16, 64) 192 conv2d_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 16, 16, 64) 192 conv2d_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 16, 16, 96) 288 conv2d_10[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_11 (BatchNo (None, 16, 16, 32) 96 conv2d_11[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 16, 16, 64) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 16, 16, 64) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 16, 16, 96) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 16, 16, 32) 0 batch_normalization_11[0][0] 
\n__________________________________________________________________________________________________\nmixed0 (Concatenate) (None, 16, 16, 256) 0 activation_5[0][0] \n activation_7[0][0] \n activation_10[0][0] \n activation_11[0][0] \n__________________________________________________________________________________________________\nconv2d_15 (Conv2D) (None, 16, 16, 64) 16384 mixed0[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_15 (BatchNo (None, 16, 16, 64) 192 conv2d_15[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 16, 16, 64) 0 batch_normalization_15[0][0] \n__________________________________________________________________________________________________\nconv2d_13 (Conv2D) (None, 16, 16, 48) 12288 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_16 (Conv2D) (None, 16, 16, 96) 55296 activation_15[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_13 (BatchNo (None, 16, 16, 48) 144 conv2d_13[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_16 (BatchNo (None, 16, 16, 96) 288 conv2d_16[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 16, 16, 48) 0 batch_normalization_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 16, 16, 96) 0 batch_normalization_16[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_1 (AveragePoo (None, 16, 16, 256) 0 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 16, 16, 64) 16384 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_14 (Conv2D) (None, 16, 16, 64) 76800 activation_13[0][0] \n__________________________________________________________________________________________________\nconv2d_17 (Conv2D) (None, 16, 16, 96) 82944 activation_16[0][0] \n__________________________________________________________________________________________________\nconv2d_18 (Conv2D) (None, 16, 16, 64) 16384 average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_12 (BatchNo (None, 16, 16, 64) 192 conv2d_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_14 (BatchNo (None, 16, 16, 64) 192 conv2d_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_17 (BatchNo (None, 16, 16, 96) 288 conv2d_17[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_18 (BatchNo (None, 16, 16, 64) 192 conv2d_18[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 16, 16, 64) 0 batch_normalization_12[0][0] 
\n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 16, 16, 64) 0 batch_normalization_14[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 16, 16, 96) 0 batch_normalization_17[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 16, 16, 64) 0 batch_normalization_18[0][0] \n__________________________________________________________________________________________________\nmixed1 (Concatenate) (None, 16, 16, 288) 0 activation_12[0][0] \n activation_14[0][0] \n activation_17[0][0] \n activation_18[0][0] \n__________________________________________________________________________________________________\nconv2d_22 (Conv2D) (None, 16, 16, 64) 18432 mixed1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_22 (BatchNo (None, 16, 16, 64) 192 conv2d_22[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 16, 16, 64) 0 batch_normalization_22[0][0] \n__________________________________________________________________________________________________\nconv2d_20 (Conv2D) (None, 16, 16, 48) 13824 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_23 (Conv2D) (None, 16, 16, 96) 55296 activation_22[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_20 (BatchNo (None, 16, 16, 48) 144 conv2d_20[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_23 (BatchNo (None, 16, 16, 96) 288 conv2d_23[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 16, 16, 48) 0 batch_normalization_20[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 16, 16, 96) 0 batch_normalization_23[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_2 (AveragePoo (None, 16, 16, 288) 0 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_19 (Conv2D) (None, 16, 16, 64) 18432 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_21 (Conv2D) (None, 16, 16, 64) 76800 activation_20[0][0] \n__________________________________________________________________________________________________\nconv2d_24 (Conv2D) (None, 16, 16, 96) 82944 activation_23[0][0] \n__________________________________________________________________________________________________\nconv2d_25 (Conv2D) (None, 16, 16, 64) 18432 average_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_19 (BatchNo (None, 16, 16, 64) 192 conv2d_19[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_21 (BatchNo (None, 16, 16, 64) 192 conv2d_21[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_24 (BatchNo (None, 16, 16, 96) 288 conv2d_24[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_25 (BatchNo (None, 16, 16, 64) 192 conv2d_25[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 16, 16, 64) 0 batch_normalization_19[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 16, 16, 64) 0 batch_normalization_21[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 16, 16, 96) 0 batch_normalization_24[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 16, 16, 64) 0 batch_normalization_25[0][0] \n__________________________________________________________________________________________________\nmixed2 (Concatenate) (None, 16, 16, 288) 0 activation_19[0][0] \n activation_21[0][0] \n activation_24[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nconv2d_27 (Conv2D) (None, 16, 16, 64) 18432 mixed2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_27 (BatchNo (None, 16, 16, 64) 192 conv2d_27[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 16, 16, 64) 0 batch_normalization_27[0][0] \n__________________________________________________________________________________________________\nconv2d_28 (Conv2D) (None, 16, 16, 96) 55296 activation_27[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_28 (BatchNo (None, 16, 16, 96) 288 conv2d_28[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 16, 16, 96) 0 batch_normalization_28[0][0] \n__________________________________________________________________________________________________\nconv2d_26 (Conv2D) (None, 7, 7, 384) 995328 mixed2[0][0] \n__________________________________________________________________________________________________\nconv2d_29 (Conv2D) (None, 7, 7, 96) 82944 activation_28[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_26 (BatchNo (None, 7, 7, 384) 1152 conv2d_26[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_29 (BatchNo (None, 7, 7, 96) 288 conv2d_29[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 7, 7, 384) 0 batch_normalization_26[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 7, 7, 96) 0 batch_normalization_29[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 7, 7, 288) 0 mixed2[0][0] 
\n__________________________________________________________________________________________________\nmixed3 (Concatenate) (None, 7, 7, 768) 0 activation_26[0][0] \n activation_29[0][0] \n max_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_34 (Conv2D) (None, 7, 7, 128) 98304 mixed3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_34 (BatchNo (None, 7, 7, 128) 384 conv2d_34[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 7, 7, 128) 0 batch_normalization_34[0][0] \n__________________________________________________________________________________________________\nconv2d_35 (Conv2D) (None, 7, 7, 128) 114688 activation_34[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_35 (BatchNo (None, 7, 7, 128) 384 conv2d_35[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 7, 7, 128) 0 batch_normalization_35[0][0] \n__________________________________________________________________________________________________\nconv2d_31 (Conv2D) (None, 7, 7, 128) 98304 mixed3[0][0] \n__________________________________________________________________________________________________\nconv2d_36 (Conv2D) (None, 7, 7, 128) 114688 activation_35[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_31 (BatchNo (None, 7, 7, 128) 384 conv2d_31[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_36 (BatchNo (None, 7, 7, 128) 384 conv2d_36[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 7, 7, 128) 0 batch_normalization_31[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 7, 7, 128) 0 batch_normalization_36[0][0] \n__________________________________________________________________________________________________\nconv2d_32 (Conv2D) (None, 7, 7, 128) 114688 activation_31[0][0] \n__________________________________________________________________________________________________\nconv2d_37 (Conv2D) (None, 7, 7, 128) 114688 activation_36[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_32 (BatchNo (None, 7, 7, 128) 384 conv2d_32[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_37 (BatchNo (None, 7, 7, 128) 384 conv2d_37[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 7, 7, 128) 0 batch_normalization_32[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 7, 7, 128) 0 batch_normalization_37[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_3 (AveragePoo (None, 7, 7, 768) 0 mixed3[0][0] 
\n__________________________________________________________________________________________________\nconv2d_30 (Conv2D) (None, 7, 7, 192) 147456 mixed3[0][0] \n__________________________________________________________________________________________________\nconv2d_33 (Conv2D) (None, 7, 7, 192) 172032 activation_32[0][0] \n__________________________________________________________________________________________________\nconv2d_38 (Conv2D) (None, 7, 7, 192) 172032 activation_37[0][0] \n__________________________________________________________________________________________________\nconv2d_39 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_30 (BatchNo (None, 7, 7, 192) 576 conv2d_30[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_33 (BatchNo (None, 7, 7, 192) 576 conv2d_33[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_38 (BatchNo (None, 7, 7, 192) 576 conv2d_38[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_39 (BatchNo (None, 7, 7, 192) 576 conv2d_39[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 7, 7, 192) 0 batch_normalization_30[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 7, 7, 192) 0 batch_normalization_33[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 7, 7, 192) 0 batch_normalization_38[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 7, 7, 192) 0 batch_normalization_39[0][0] \n__________________________________________________________________________________________________\nmixed4 (Concatenate) (None, 7, 7, 768) 0 activation_30[0][0] \n activation_33[0][0] \n activation_38[0][0] \n activation_39[0][0] \n__________________________________________________________________________________________________\nconv2d_44 (Conv2D) (None, 7, 7, 160) 122880 mixed4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_44 (BatchNo (None, 7, 7, 160) 480 conv2d_44[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 7, 7, 160) 0 batch_normalization_44[0][0] \n__________________________________________________________________________________________________\nconv2d_45 (Conv2D) (None, 7, 7, 160) 179200 activation_44[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_45 (BatchNo (None, 7, 7, 160) 480 conv2d_45[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 7, 7, 160) 0 batch_normalization_45[0][0] \n__________________________________________________________________________________________________\nconv2d_41 (Conv2D) (None, 7, 7, 160) 122880 mixed4[0][0] 
\n__________________________________________________________________________________________________\nconv2d_46 (Conv2D) (None, 7, 7, 160) 179200 activation_45[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_41 (BatchNo (None, 7, 7, 160) 480 conv2d_41[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_46 (BatchNo (None, 7, 7, 160) 480 conv2d_46[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 7, 7, 160) 0 batch_normalization_41[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 7, 7, 160) 0 batch_normalization_46[0][0] \n__________________________________________________________________________________________________\nconv2d_42 (Conv2D) (None, 7, 7, 160) 179200 activation_41[0][0] \n__________________________________________________________________________________________________\nconv2d_47 (Conv2D) (None, 7, 7, 160) 179200 activation_46[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_42 (BatchNo (None, 7, 7, 160) 480 conv2d_42[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_47 (BatchNo (None, 7, 7, 160) 480 conv2d_47[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 7, 7, 160) 0 batch_normalization_42[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 7, 7, 160) 0 batch_normalization_47[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_4 (AveragePoo (None, 7, 7, 768) 0 mixed4[0][0] \n__________________________________________________________________________________________________\nconv2d_40 (Conv2D) (None, 7, 7, 192) 147456 mixed4[0][0] \n__________________________________________________________________________________________________\nconv2d_43 (Conv2D) (None, 7, 7, 192) 215040 activation_42[0][0] \n__________________________________________________________________________________________________\nconv2d_48 (Conv2D) (None, 7, 7, 192) 215040 activation_47[0][0] \n__________________________________________________________________________________________________\nconv2d_49 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_40 (BatchNo (None, 7, 7, 192) 576 conv2d_40[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_43 (BatchNo (None, 7, 7, 192) 576 conv2d_43[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_48 (BatchNo (None, 7, 7, 192) 576 conv2d_48[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_49 (BatchNo (None, 7, 7, 192) 576 conv2d_49[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 7, 
7, 192) 0 batch_normalization_40[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 7, 7, 192) 0 batch_normalization_43[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 7, 7, 192) 0 batch_normalization_48[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 7, 7, 192) 0 batch_normalization_49[0][0] \n__________________________________________________________________________________________________\nmixed5 (Concatenate) (None, 7, 7, 768) 0 activation_40[0][0] \n activation_43[0][0] \n activation_48[0][0] \n activation_49[0][0] \n__________________________________________________________________________________________________\nconv2d_54 (Conv2D) (None, 7, 7, 160) 122880 mixed5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_54 (BatchNo (None, 7, 7, 160) 480 conv2d_54[0][0] \n__________________________________________________________________________________________________\nactivation_54 (Activation) (None, 7, 7, 160) 0 batch_normalization_54[0][0] \n__________________________________________________________________________________________________\nconv2d_55 (Conv2D) (None, 7, 7, 160) 179200 activation_54[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_55 (BatchNo (None, 7, 7, 160) 480 conv2d_55[0][0] \n__________________________________________________________________________________________________\nactivation_55 (Activation) (None, 7, 7, 160) 0 batch_normalization_55[0][0] \n__________________________________________________________________________________________________\nconv2d_51 (Conv2D) (None, 7, 7, 160) 122880 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_56 (Conv2D) (None, 7, 7, 160) 179200 activation_55[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_51 (BatchNo (None, 7, 7, 160) 480 conv2d_51[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_56 (BatchNo (None, 7, 7, 160) 480 conv2d_56[0][0] \n__________________________________________________________________________________________________\nactivation_51 (Activation) (None, 7, 7, 160) 0 batch_normalization_51[0][0] \n__________________________________________________________________________________________________\nactivation_56 (Activation) (None, 7, 7, 160) 0 batch_normalization_56[0][0] \n__________________________________________________________________________________________________\nconv2d_52 (Conv2D) (None, 7, 7, 160) 179200 activation_51[0][0] \n__________________________________________________________________________________________________\nconv2d_57 (Conv2D) (None, 7, 7, 160) 179200 activation_56[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_52 (BatchNo (None, 7, 7, 160) 480 conv2d_52[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_57 (BatchNo (None, 7, 7, 160) 480 conv2d_57[0][0] 
\n__________________________________________________________________________________________________\nactivation_52 (Activation) (None, 7, 7, 160) 0 batch_normalization_52[0][0] \n__________________________________________________________________________________________________\nactivation_57 (Activation) (None, 7, 7, 160) 0 batch_normalization_57[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_5 (AveragePoo (None, 7, 7, 768) 0 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_50 (Conv2D) (None, 7, 7, 192) 147456 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_53 (Conv2D) (None, 7, 7, 192) 215040 activation_52[0][0] \n__________________________________________________________________________________________________\nconv2d_58 (Conv2D) (None, 7, 7, 192) 215040 activation_57[0][0] \n__________________________________________________________________________________________________\nconv2d_59 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_50 (BatchNo (None, 7, 7, 192) 576 conv2d_50[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_53 (BatchNo (None, 7, 7, 192) 576 conv2d_53[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_58 (BatchNo (None, 7, 7, 192) 576 conv2d_58[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_59 (BatchNo (None, 7, 7, 192) 576 conv2d_59[0][0] \n__________________________________________________________________________________________________\nactivation_50 (Activation) (None, 7, 7, 192) 0 batch_normalization_50[0][0] \n__________________________________________________________________________________________________\nactivation_53 (Activation) (None, 7, 7, 192) 0 batch_normalization_53[0][0] \n__________________________________________________________________________________________________\nactivation_58 (Activation) (None, 7, 7, 192) 0 batch_normalization_58[0][0] \n__________________________________________________________________________________________________\nactivation_59 (Activation) (None, 7, 7, 192) 0 batch_normalization_59[0][0] \n__________________________________________________________________________________________________\nmixed6 (Concatenate) (None, 7, 7, 768) 0 activation_50[0][0] \n activation_53[0][0] \n activation_58[0][0] \n activation_59[0][0] \n__________________________________________________________________________________________________\nconv2d_64 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_64 (BatchNo (None, 7, 7, 192) 576 conv2d_64[0][0] \n__________________________________________________________________________________________________\nactivation_64 (Activation) (None, 7, 7, 192) 0 batch_normalization_64[0][0] \n__________________________________________________________________________________________________\nconv2d_65 (Conv2D) (None, 7, 7, 192) 258048 activation_64[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_65 (BatchNo (None, 7, 7, 192) 576 conv2d_65[0][0] \n__________________________________________________________________________________________________\nactivation_65 (Activation) (None, 7, 7, 192) 0 batch_normalization_65[0][0] \n__________________________________________________________________________________________________\nconv2d_61 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_66 (Conv2D) (None, 7, 7, 192) 258048 activation_65[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_61 (BatchNo (None, 7, 7, 192) 576 conv2d_61[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_66 (BatchNo (None, 7, 7, 192) 576 conv2d_66[0][0] \n__________________________________________________________________________________________________\nactivation_61 (Activation) (None, 7, 7, 192) 0 batch_normalization_61[0][0] \n__________________________________________________________________________________________________\nactivation_66 (Activation) (None, 7, 7, 192) 0 batch_normalization_66[0][0] \n__________________________________________________________________________________________________\nconv2d_62 (Conv2D) (None, 7, 7, 192) 258048 activation_61[0][0] \n__________________________________________________________________________________________________\nconv2d_67 (Conv2D) (None, 7, 7, 192) 258048 activation_66[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_62 (BatchNo (None, 7, 7, 192) 576 conv2d_62[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_67 (BatchNo (None, 7, 7, 192) 576 conv2d_67[0][0] \n__________________________________________________________________________________________________\nactivation_62 (Activation) (None, 7, 7, 192) 0 batch_normalization_62[0][0] \n__________________________________________________________________________________________________\nactivation_67 (Activation) (None, 7, 7, 192) 0 batch_normalization_67[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_6 (AveragePoo (None, 7, 7, 768) 0 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_60 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_63 (Conv2D) (None, 7, 7, 192) 258048 activation_62[0][0] \n__________________________________________________________________________________________________\nconv2d_68 (Conv2D) (None, 7, 7, 192) 258048 activation_67[0][0] \n__________________________________________________________________________________________________\nconv2d_69 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_60 (BatchNo (None, 7, 7, 192) 576 conv2d_60[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_63 (BatchNo (None, 7, 
7, 192) 576 conv2d_63[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_68 (BatchNo (None, 7, 7, 192) 576 conv2d_68[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_69 (BatchNo (None, 7, 7, 192) 576 conv2d_69[0][0] \n__________________________________________________________________________________________________\nactivation_60 (Activation) (None, 7, 7, 192) 0 batch_normalization_60[0][0] \n__________________________________________________________________________________________________\nactivation_63 (Activation) (None, 7, 7, 192) 0 batch_normalization_63[0][0] \n__________________________________________________________________________________________________\nactivation_68 (Activation) (None, 7, 7, 192) 0 batch_normalization_68[0][0] \n__________________________________________________________________________________________________\nactivation_69 (Activation) (None, 7, 7, 192) 0 batch_normalization_69[0][0] \n__________________________________________________________________________________________________\nmixed7 (Concatenate) (None, 7, 7, 768) 0 activation_60[0][0] \n activation_63[0][0] \n activation_68[0][0] \n activation_69[0][0] \n__________________________________________________________________________________________________\nconv2d_72 (Conv2D) (None, 7, 7, 192) 147456 mixed7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_72 (BatchNo (None, 7, 7, 192) 576 conv2d_72[0][0] \n__________________________________________________________________________________________________\nactivation_72 (Activation) (None, 7, 7, 192) 0 batch_normalization_72[0][0] \n__________________________________________________________________________________________________\nconv2d_73 (Conv2D) (None, 7, 7, 192) 258048 activation_72[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_73 (BatchNo (None, 7, 7, 192) 576 conv2d_73[0][0] \n__________________________________________________________________________________________________\nactivation_73 (Activation) (None, 7, 7, 192) 0 batch_normalization_73[0][0] \n__________________________________________________________________________________________________\nconv2d_70 (Conv2D) (None, 7, 7, 192) 147456 mixed7[0][0] \n__________________________________________________________________________________________________\nconv2d_74 (Conv2D) (None, 7, 7, 192) 258048 activation_73[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_70 (BatchNo (None, 7, 7, 192) 576 conv2d_70[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_74 (BatchNo (None, 7, 7, 192) 576 conv2d_74[0][0] \n__________________________________________________________________________________________________\nactivation_70 (Activation) (None, 7, 7, 192) 0 batch_normalization_70[0][0] \n__________________________________________________________________________________________________\nactivation_74 (Activation) (None, 7, 7, 192) 0 batch_normalization_74[0][0] \n__________________________________________________________________________________________________\nconv2d_71 (Conv2D) (None, 3, 3, 320) 552960 activation_70[0][0] 
\n__________________________________________________________________________________________________\nconv2d_75 (Conv2D) (None, 3, 3, 192) 331776 activation_74[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_71 (BatchNo (None, 3, 3, 320) 960 conv2d_71[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_75 (BatchNo (None, 3, 3, 192) 576 conv2d_75[0][0] \n__________________________________________________________________________________________________\nactivation_71 (Activation) (None, 3, 3, 320) 0 batch_normalization_71[0][0] \n__________________________________________________________________________________________________\nactivation_75 (Activation) (None, 3, 3, 192) 0 batch_normalization_75[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_3 (MaxPooling2D) (None, 3, 3, 768) 0 mixed7[0][0] \n__________________________________________________________________________________________________\nmixed8 (Concatenate) (None, 3, 3, 1280) 0 activation_71[0][0] \n activation_75[0][0] \n max_pooling2d_3[0][0] \n__________________________________________________________________________________________________\nconv2d_80 (Conv2D) (None, 3, 3, 448) 573440 mixed8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_80 (BatchNo (None, 3, 3, 448) 1344 conv2d_80[0][0] \n__________________________________________________________________________________________________\nactivation_80 (Activation) (None, 3, 3, 448) 0 batch_normalization_80[0][0] \n__________________________________________________________________________________________________\nconv2d_77 (Conv2D) (None, 3, 3, 384) 491520 mixed8[0][0] \n__________________________________________________________________________________________________\nconv2d_81 (Conv2D) (None, 3, 3, 384) 1548288 activation_80[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_77 (BatchNo (None, 3, 3, 384) 1152 conv2d_77[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_81 (BatchNo (None, 3, 3, 384) 1152 conv2d_81[0][0] \n__________________________________________________________________________________________________\nactivation_77 (Activation) (None, 3, 3, 384) 0 batch_normalization_77[0][0] \n__________________________________________________________________________________________________\nactivation_81 (Activation) (None, 3, 3, 384) 0 batch_normalization_81[0][0] \n__________________________________________________________________________________________________\nconv2d_78 (Conv2D) (None, 3, 3, 384) 442368 activation_77[0][0] \n__________________________________________________________________________________________________\nconv2d_79 (Conv2D) (None, 3, 3, 384) 442368 activation_77[0][0] \n__________________________________________________________________________________________________\nconv2d_82 (Conv2D) (None, 3, 3, 384) 442368 activation_81[0][0] \n__________________________________________________________________________________________________\nconv2d_83 (Conv2D) (None, 3, 3, 384) 442368 activation_81[0][0] 
\n__________________________________________________________________________________________________\naverage_pooling2d_7 (AveragePoo (None, 3, 3, 1280) 0 mixed8[0][0] \n__________________________________________________________________________________________________\nconv2d_76 (Conv2D) (None, 3, 3, 320) 409600 mixed8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_78 (BatchNo (None, 3, 3, 384) 1152 conv2d_78[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_79 (BatchNo (None, 3, 3, 384) 1152 conv2d_79[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_82 (BatchNo (None, 3, 3, 384) 1152 conv2d_82[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_83 (BatchNo (None, 3, 3, 384) 1152 conv2d_83[0][0] \n__________________________________________________________________________________________________\nconv2d_84 (Conv2D) (None, 3, 3, 192) 245760 average_pooling2d_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_76 (BatchNo (None, 3, 3, 320) 960 conv2d_76[0][0] \n__________________________________________________________________________________________________\nactivation_78 (Activation) (None, 3, 3, 384) 0 batch_normalization_78[0][0] \n__________________________________________________________________________________________________\nactivation_79 (Activation) (None, 3, 3, 384) 0 batch_normalization_79[0][0] \n__________________________________________________________________________________________________\nactivation_82 (Activation) (None, 3, 3, 384) 0 batch_normalization_82[0][0] \n__________________________________________________________________________________________________\nactivation_83 (Activation) (None, 3, 3, 384) 0 batch_normalization_83[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_84 (BatchNo (None, 3, 3, 192) 576 conv2d_84[0][0] \n__________________________________________________________________________________________________\nactivation_76 (Activation) (None, 3, 3, 320) 0 batch_normalization_76[0][0] \n__________________________________________________________________________________________________\nmixed9_0 (Concatenate) (None, 3, 3, 768) 0 activation_78[0][0] \n activation_79[0][0] \n__________________________________________________________________________________________________\nconcatenate (Concatenate) (None, 3, 3, 768) 0 activation_82[0][0] \n activation_83[0][0] \n__________________________________________________________________________________________________\nactivation_84 (Activation) (None, 3, 3, 192) 0 batch_normalization_84[0][0] \n__________________________________________________________________________________________________\nmixed9 (Concatenate) (None, 3, 3, 2048) 0 activation_76[0][0] \n mixed9_0[0][0] \n concatenate[0][0] \n activation_84[0][0] \n__________________________________________________________________________________________________\nconv2d_89 (Conv2D) (None, 3, 3, 448) 917504 mixed9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_89 (BatchNo (None, 3, 3, 448) 1344 conv2d_89[0][0] 
\n__________________________________________________________________________________________________\nactivation_89 (Activation) (None, 3, 3, 448) 0 batch_normalization_89[0][0] \n__________________________________________________________________________________________________\nconv2d_86 (Conv2D) (None, 3, 3, 384) 786432 mixed9[0][0] \n__________________________________________________________________________________________________\nconv2d_90 (Conv2D) (None, 3, 3, 384) 1548288 activation_89[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_86 (BatchNo (None, 3, 3, 384) 1152 conv2d_86[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_90 (BatchNo (None, 3, 3, 384) 1152 conv2d_90[0][0] \n__________________________________________________________________________________________________\nactivation_86 (Activation) (None, 3, 3, 384) 0 batch_normalization_86[0][0] \n__________________________________________________________________________________________________\nactivation_90 (Activation) (None, 3, 3, 384) 0 batch_normalization_90[0][0] \n__________________________________________________________________________________________________\nconv2d_87 (Conv2D) (None, 3, 3, 384) 442368 activation_86[0][0] \n__________________________________________________________________________________________________\nconv2d_88 (Conv2D) (None, 3, 3, 384) 442368 activation_86[0][0] \n__________________________________________________________________________________________________\nconv2d_91 (Conv2D) (None, 3, 3, 384) 442368 activation_90[0][0] \n__________________________________________________________________________________________________\nconv2d_92 (Conv2D) (None, 3, 3, 384) 442368 activation_90[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_8 (AveragePoo (None, 3, 3, 2048) 0 mixed9[0][0] \n__________________________________________________________________________________________________\nconv2d_85 (Conv2D) (None, 3, 3, 320) 655360 mixed9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_87 (BatchNo (None, 3, 3, 384) 1152 conv2d_87[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_88 (BatchNo (None, 3, 3, 384) 1152 conv2d_88[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_91 (BatchNo (None, 3, 3, 384) 1152 conv2d_91[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_92 (BatchNo (None, 3, 3, 384) 1152 conv2d_92[0][0] \n__________________________________________________________________________________________________\nconv2d_93 (Conv2D) (None, 3, 3, 192) 393216 average_pooling2d_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_85 (BatchNo (None, 3, 3, 320) 960 conv2d_85[0][0] \n__________________________________________________________________________________________________\nactivation_87 (Activation) (None, 3, 3, 384) 0 batch_normalization_87[0][0] \n__________________________________________________________________________________________________\nactivation_88 (Activation) (None, 3, 3, 
384) 0 batch_normalization_88[0][0] \n__________________________________________________________________________________________________\nactivation_91 (Activation) (None, 3, 3, 384) 0 batch_normalization_91[0][0] \n__________________________________________________________________________________________________\nactivation_92 (Activation) (None, 3, 3, 384) 0 batch_normalization_92[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_93 (BatchNo (None, 3, 3, 192) 576 conv2d_93[0][0] \n__________________________________________________________________________________________________\nactivation_85 (Activation) (None, 3, 3, 320) 0 batch_normalization_85[0][0] \n__________________________________________________________________________________________________\nmixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_87[0][0] \n activation_88[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 3, 3, 768) 0 activation_91[0][0] \n activation_92[0][0] \n__________________________________________________________________________________________________\nactivation_93 (Activation) (None, 3, 3, 192) 0 batch_normalization_93[0][0] \n__________________________________________________________________________________________________\nmixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_85[0][0] \n mixed9_1[0][0] \n concatenate_1[0][0] \n activation_93[0][0] \n==================================================================================================\nTotal params: 21,802,784\nTrainable params: 0\nNon-trainable params: 21,802,784\n__________________________________________________________________________________________________\n"
],
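[
"# A minimal sanity-check sketch (assumption: the summary above reports 'Trainable params: 0',\n# which suggests the InceptionV3 base was frozen earlier in the notebook).\n# This cell only confirms that assumption; it is not required for training.\nnum_trainable = sum(1 for layer in pre_trained_model.layers if layer.trainable)\nprint('trainable layers in the base model:', num_trainable)  # expected to be 0 if the base is frozen",
"_____no_output_____"
],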
[
"last_layer = pre_trained_model.get_layer('mixed7')\nprint('last layer output shape: ', last_layer.output_shape)\nlast_output = last_layer.output\n\n",
"last layer output shape: (None, 7, 7, 768)\n"
],
[
"# Defining a Callback class that stops training once accuracy reaches 97.0%\nclass myCallback(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs={}):\n if(logs.get('accuracy')>0.999):\n print(\"\\nReached 99.9% accuracy so cancelling training!\")\n self.model.stop_training = True",
"_____no_output_____"
],
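[
"# A hedged usage sketch for the callback defined above: instantiate it and pass it to\n# model.fit once the model and the data generators exist. The names 'train_generator'\n# and 'validation_generator' (and epochs=20) are assumptions, not defined in this cell.\ncallbacks = myCallback()\n# history = model.fit(train_generator,\n#                     validation_data=validation_generator,\n#                     epochs=20,\n#                     verbose=1,\n#                     callbacks=[callbacks])",
"_____no_output_____"
],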
[
"from tensorflow.keras.optimizers import RMSprop\n\n# Flatten the output layer to 1 dimension\nx = layers.Flatten()(last_output)\n\nx = layers.Dense(1024, activation='relu')(x)\nx = layers.Dropout(0.2)(x) \nx = layers.Dense (1, activation='sigmoid')(x) \n \n\nmodel = Model( pre_trained_model.input, x) \n\n\nmodel.compile(optimizer = RMSprop(lr=0.0001), \n loss = 'binary_crossentropy', \n metrics = ['accuracy'])\n\n\nmodel.summary()\n\n\n",
"Model: \"model_2\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 150, 150, 3) 0 \n__________________________________________________________________________________________________\nconv2d (Conv2D) (None, 74, 74, 32) 864 input_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 74, 74, 32) 96 conv2d[0][0] \n__________________________________________________________________________________________________\nactivation (Activation) (None, 74, 74, 32) 0 batch_normalization[0][0] \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 72, 72, 32) 9216 activation[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 72, 72, 32) 96 conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 72, 72, 32) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 72, 72, 64) 18432 activation_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 72, 72, 64) 192 conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 72, 72, 64) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 35, 35, 64) 0 activation_2[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 35, 35, 80) 5120 max_pooling2d[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 35, 35, 80) 240 conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 35, 35, 80) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 33, 33, 192) 138240 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 33, 33, 192) 576 conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 33, 33, 192) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 16, 16, 192) 0 activation_4[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 16, 16, 64) 12288 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 16, 16, 64) 192 conv2d_8[0][0] 
\n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 16, 16, 64) 0 batch_normalization_8[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 16, 16, 48) 9216 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_9 (Conv2D) (None, 16, 16, 96) 55296 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 16, 16, 48) 144 conv2d_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 16, 16, 96) 288 conv2d_9[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 16, 16, 48) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 16, 16, 96) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d (AveragePooli (None, 16, 16, 192) 0 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 16, 16, 64) 12288 max_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 16, 16, 64) 76800 activation_6[0][0] \n__________________________________________________________________________________________________\nconv2d_10 (Conv2D) (None, 16, 16, 96) 82944 activation_9[0][0] \n__________________________________________________________________________________________________\nconv2d_11 (Conv2D) (None, 16, 16, 32) 6144 average_pooling2d[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 16, 16, 64) 192 conv2d_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 16, 16, 64) 192 conv2d_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 16, 16, 96) 288 conv2d_10[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_11 (BatchNo (None, 16, 16, 32) 96 conv2d_11[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 16, 16, 64) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 16, 16, 64) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 16, 16, 96) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 16, 16, 32) 0 batch_normalization_11[0][0] 
\n__________________________________________________________________________________________________\nmixed0 (Concatenate) (None, 16, 16, 256) 0 activation_5[0][0] \n activation_7[0][0] \n activation_10[0][0] \n activation_11[0][0] \n__________________________________________________________________________________________________\nconv2d_15 (Conv2D) (None, 16, 16, 64) 16384 mixed0[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_15 (BatchNo (None, 16, 16, 64) 192 conv2d_15[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 16, 16, 64) 0 batch_normalization_15[0][0] \n__________________________________________________________________________________________________\nconv2d_13 (Conv2D) (None, 16, 16, 48) 12288 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_16 (Conv2D) (None, 16, 16, 96) 55296 activation_15[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_13 (BatchNo (None, 16, 16, 48) 144 conv2d_13[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_16 (BatchNo (None, 16, 16, 96) 288 conv2d_16[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 16, 16, 48) 0 batch_normalization_13[0][0] \n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 16, 16, 96) 0 batch_normalization_16[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_1 (AveragePoo (None, 16, 16, 256) 0 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_12 (Conv2D) (None, 16, 16, 64) 16384 mixed0[0][0] \n__________________________________________________________________________________________________\nconv2d_14 (Conv2D) (None, 16, 16, 64) 76800 activation_13[0][0] \n__________________________________________________________________________________________________\nconv2d_17 (Conv2D) (None, 16, 16, 96) 82944 activation_16[0][0] \n__________________________________________________________________________________________________\nconv2d_18 (Conv2D) (None, 16, 16, 64) 16384 average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_12 (BatchNo (None, 16, 16, 64) 192 conv2d_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_14 (BatchNo (None, 16, 16, 64) 192 conv2d_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_17 (BatchNo (None, 16, 16, 96) 288 conv2d_17[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_18 (BatchNo (None, 16, 16, 64) 192 conv2d_18[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 16, 16, 64) 0 batch_normalization_12[0][0] 
\n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 16, 16, 64) 0 batch_normalization_14[0][0] \n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 16, 16, 96) 0 batch_normalization_17[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 16, 16, 64) 0 batch_normalization_18[0][0] \n__________________________________________________________________________________________________\nmixed1 (Concatenate) (None, 16, 16, 288) 0 activation_12[0][0] \n activation_14[0][0] \n activation_17[0][0] \n activation_18[0][0] \n__________________________________________________________________________________________________\nconv2d_22 (Conv2D) (None, 16, 16, 64) 18432 mixed1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_22 (BatchNo (None, 16, 16, 64) 192 conv2d_22[0][0] \n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 16, 16, 64) 0 batch_normalization_22[0][0] \n__________________________________________________________________________________________________\nconv2d_20 (Conv2D) (None, 16, 16, 48) 13824 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_23 (Conv2D) (None, 16, 16, 96) 55296 activation_22[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_20 (BatchNo (None, 16, 16, 48) 144 conv2d_20[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_23 (BatchNo (None, 16, 16, 96) 288 conv2d_23[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 16, 16, 48) 0 batch_normalization_20[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 16, 16, 96) 0 batch_normalization_23[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_2 (AveragePoo (None, 16, 16, 288) 0 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_19 (Conv2D) (None, 16, 16, 64) 18432 mixed1[0][0] \n__________________________________________________________________________________________________\nconv2d_21 (Conv2D) (None, 16, 16, 64) 76800 activation_20[0][0] \n__________________________________________________________________________________________________\nconv2d_24 (Conv2D) (None, 16, 16, 96) 82944 activation_23[0][0] \n__________________________________________________________________________________________________\nconv2d_25 (Conv2D) (None, 16, 16, 64) 18432 average_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_19 (BatchNo (None, 16, 16, 64) 192 conv2d_19[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_21 (BatchNo (None, 16, 16, 64) 192 conv2d_21[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_24 (BatchNo (None, 16, 16, 96) 288 conv2d_24[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_25 (BatchNo (None, 16, 16, 64) 192 conv2d_25[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 16, 16, 64) 0 batch_normalization_19[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 16, 16, 64) 0 batch_normalization_21[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 16, 16, 96) 0 batch_normalization_24[0][0] \n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 16, 16, 64) 0 batch_normalization_25[0][0] \n__________________________________________________________________________________________________\nmixed2 (Concatenate) (None, 16, 16, 288) 0 activation_19[0][0] \n activation_21[0][0] \n activation_24[0][0] \n activation_25[0][0] \n__________________________________________________________________________________________________\nconv2d_27 (Conv2D) (None, 16, 16, 64) 18432 mixed2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_27 (BatchNo (None, 16, 16, 64) 192 conv2d_27[0][0] \n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 16, 16, 64) 0 batch_normalization_27[0][0] \n__________________________________________________________________________________________________\nconv2d_28 (Conv2D) (None, 16, 16, 96) 55296 activation_27[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_28 (BatchNo (None, 16, 16, 96) 288 conv2d_28[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 16, 16, 96) 0 batch_normalization_28[0][0] \n__________________________________________________________________________________________________\nconv2d_26 (Conv2D) (None, 7, 7, 384) 995328 mixed2[0][0] \n__________________________________________________________________________________________________\nconv2d_29 (Conv2D) (None, 7, 7, 96) 82944 activation_28[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_26 (BatchNo (None, 7, 7, 384) 1152 conv2d_26[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_29 (BatchNo (None, 7, 7, 96) 288 conv2d_29[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 7, 7, 384) 0 batch_normalization_26[0][0] \n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 7, 7, 96) 0 batch_normalization_29[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 7, 7, 288) 0 mixed2[0][0] 
\n__________________________________________________________________________________________________\nmixed3 (Concatenate) (None, 7, 7, 768) 0 activation_26[0][0] \n activation_29[0][0] \n max_pooling2d_2[0][0] \n__________________________________________________________________________________________________\nconv2d_34 (Conv2D) (None, 7, 7, 128) 98304 mixed3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_34 (BatchNo (None, 7, 7, 128) 384 conv2d_34[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 7, 7, 128) 0 batch_normalization_34[0][0] \n__________________________________________________________________________________________________\nconv2d_35 (Conv2D) (None, 7, 7, 128) 114688 activation_34[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_35 (BatchNo (None, 7, 7, 128) 384 conv2d_35[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 7, 7, 128) 0 batch_normalization_35[0][0] \n__________________________________________________________________________________________________\nconv2d_31 (Conv2D) (None, 7, 7, 128) 98304 mixed3[0][0] \n__________________________________________________________________________________________________\nconv2d_36 (Conv2D) (None, 7, 7, 128) 114688 activation_35[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_31 (BatchNo (None, 7, 7, 128) 384 conv2d_31[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_36 (BatchNo (None, 7, 7, 128) 384 conv2d_36[0][0] \n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 7, 7, 128) 0 batch_normalization_31[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 7, 7, 128) 0 batch_normalization_36[0][0] \n__________________________________________________________________________________________________\nconv2d_32 (Conv2D) (None, 7, 7, 128) 114688 activation_31[0][0] \n__________________________________________________________________________________________________\nconv2d_37 (Conv2D) (None, 7, 7, 128) 114688 activation_36[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_32 (BatchNo (None, 7, 7, 128) 384 conv2d_32[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_37 (BatchNo (None, 7, 7, 128) 384 conv2d_37[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 7, 7, 128) 0 batch_normalization_32[0][0] \n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 7, 7, 128) 0 batch_normalization_37[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_3 (AveragePoo (None, 7, 7, 768) 0 mixed3[0][0] 
\n__________________________________________________________________________________________________\nconv2d_30 (Conv2D) (None, 7, 7, 192) 147456 mixed3[0][0] \n__________________________________________________________________________________________________\nconv2d_33 (Conv2D) (None, 7, 7, 192) 172032 activation_32[0][0] \n__________________________________________________________________________________________________\nconv2d_38 (Conv2D) (None, 7, 7, 192) 172032 activation_37[0][0] \n__________________________________________________________________________________________________\nconv2d_39 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_30 (BatchNo (None, 7, 7, 192) 576 conv2d_30[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_33 (BatchNo (None, 7, 7, 192) 576 conv2d_33[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_38 (BatchNo (None, 7, 7, 192) 576 conv2d_38[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_39 (BatchNo (None, 7, 7, 192) 576 conv2d_39[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 7, 7, 192) 0 batch_normalization_30[0][0] \n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 7, 7, 192) 0 batch_normalization_33[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 7, 7, 192) 0 batch_normalization_38[0][0] \n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 7, 7, 192) 0 batch_normalization_39[0][0] \n__________________________________________________________________________________________________\nmixed4 (Concatenate) (None, 7, 7, 768) 0 activation_30[0][0] \n activation_33[0][0] \n activation_38[0][0] \n activation_39[0][0] \n__________________________________________________________________________________________________\nconv2d_44 (Conv2D) (None, 7, 7, 160) 122880 mixed4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_44 (BatchNo (None, 7, 7, 160) 480 conv2d_44[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 7, 7, 160) 0 batch_normalization_44[0][0] \n__________________________________________________________________________________________________\nconv2d_45 (Conv2D) (None, 7, 7, 160) 179200 activation_44[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_45 (BatchNo (None, 7, 7, 160) 480 conv2d_45[0][0] \n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 7, 7, 160) 0 batch_normalization_45[0][0] \n__________________________________________________________________________________________________\nconv2d_41 (Conv2D) (None, 7, 7, 160) 122880 mixed4[0][0] 
\n__________________________________________________________________________________________________\nconv2d_46 (Conv2D) (None, 7, 7, 160) 179200 activation_45[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_41 (BatchNo (None, 7, 7, 160) 480 conv2d_41[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_46 (BatchNo (None, 7, 7, 160) 480 conv2d_46[0][0] \n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 7, 7, 160) 0 batch_normalization_41[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 7, 7, 160) 0 batch_normalization_46[0][0] \n__________________________________________________________________________________________________\nconv2d_42 (Conv2D) (None, 7, 7, 160) 179200 activation_41[0][0] \n__________________________________________________________________________________________________\nconv2d_47 (Conv2D) (None, 7, 7, 160) 179200 activation_46[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_42 (BatchNo (None, 7, 7, 160) 480 conv2d_42[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_47 (BatchNo (None, 7, 7, 160) 480 conv2d_47[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 7, 7, 160) 0 batch_normalization_42[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 7, 7, 160) 0 batch_normalization_47[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_4 (AveragePoo (None, 7, 7, 768) 0 mixed4[0][0] \n__________________________________________________________________________________________________\nconv2d_40 (Conv2D) (None, 7, 7, 192) 147456 mixed4[0][0] \n__________________________________________________________________________________________________\nconv2d_43 (Conv2D) (None, 7, 7, 192) 215040 activation_42[0][0] \n__________________________________________________________________________________________________\nconv2d_48 (Conv2D) (None, 7, 7, 192) 215040 activation_47[0][0] \n__________________________________________________________________________________________________\nconv2d_49 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_40 (BatchNo (None, 7, 7, 192) 576 conv2d_40[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_43 (BatchNo (None, 7, 7, 192) 576 conv2d_43[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_48 (BatchNo (None, 7, 7, 192) 576 conv2d_48[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_49 (BatchNo (None, 7, 7, 192) 576 conv2d_49[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 7, 
7, 192) 0 batch_normalization_40[0][0] \n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 7, 7, 192) 0 batch_normalization_43[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 7, 7, 192) 0 batch_normalization_48[0][0] \n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 7, 7, 192) 0 batch_normalization_49[0][0] \n__________________________________________________________________________________________________\nmixed5 (Concatenate) (None, 7, 7, 768) 0 activation_40[0][0] \n activation_43[0][0] \n activation_48[0][0] \n activation_49[0][0] \n__________________________________________________________________________________________________\nconv2d_54 (Conv2D) (None, 7, 7, 160) 122880 mixed5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_54 (BatchNo (None, 7, 7, 160) 480 conv2d_54[0][0] \n__________________________________________________________________________________________________\nactivation_54 (Activation) (None, 7, 7, 160) 0 batch_normalization_54[0][0] \n__________________________________________________________________________________________________\nconv2d_55 (Conv2D) (None, 7, 7, 160) 179200 activation_54[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_55 (BatchNo (None, 7, 7, 160) 480 conv2d_55[0][0] \n__________________________________________________________________________________________________\nactivation_55 (Activation) (None, 7, 7, 160) 0 batch_normalization_55[0][0] \n__________________________________________________________________________________________________\nconv2d_51 (Conv2D) (None, 7, 7, 160) 122880 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_56 (Conv2D) (None, 7, 7, 160) 179200 activation_55[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_51 (BatchNo (None, 7, 7, 160) 480 conv2d_51[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_56 (BatchNo (None, 7, 7, 160) 480 conv2d_56[0][0] \n__________________________________________________________________________________________________\nactivation_51 (Activation) (None, 7, 7, 160) 0 batch_normalization_51[0][0] \n__________________________________________________________________________________________________\nactivation_56 (Activation) (None, 7, 7, 160) 0 batch_normalization_56[0][0] \n__________________________________________________________________________________________________\nconv2d_52 (Conv2D) (None, 7, 7, 160) 179200 activation_51[0][0] \n__________________________________________________________________________________________________\nconv2d_57 (Conv2D) (None, 7, 7, 160) 179200 activation_56[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_52 (BatchNo (None, 7, 7, 160) 480 conv2d_52[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_57 (BatchNo (None, 7, 7, 160) 480 conv2d_57[0][0] 
\n__________________________________________________________________________________________________\nactivation_52 (Activation) (None, 7, 7, 160) 0 batch_normalization_52[0][0] \n__________________________________________________________________________________________________\nactivation_57 (Activation) (None, 7, 7, 160) 0 batch_normalization_57[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_5 (AveragePoo (None, 7, 7, 768) 0 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_50 (Conv2D) (None, 7, 7, 192) 147456 mixed5[0][0] \n__________________________________________________________________________________________________\nconv2d_53 (Conv2D) (None, 7, 7, 192) 215040 activation_52[0][0] \n__________________________________________________________________________________________________\nconv2d_58 (Conv2D) (None, 7, 7, 192) 215040 activation_57[0][0] \n__________________________________________________________________________________________________\nconv2d_59 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_50 (BatchNo (None, 7, 7, 192) 576 conv2d_50[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_53 (BatchNo (None, 7, 7, 192) 576 conv2d_53[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_58 (BatchNo (None, 7, 7, 192) 576 conv2d_58[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_59 (BatchNo (None, 7, 7, 192) 576 conv2d_59[0][0] \n__________________________________________________________________________________________________\nactivation_50 (Activation) (None, 7, 7, 192) 0 batch_normalization_50[0][0] \n__________________________________________________________________________________________________\nactivation_53 (Activation) (None, 7, 7, 192) 0 batch_normalization_53[0][0] \n__________________________________________________________________________________________________\nactivation_58 (Activation) (None, 7, 7, 192) 0 batch_normalization_58[0][0] \n__________________________________________________________________________________________________\nactivation_59 (Activation) (None, 7, 7, 192) 0 batch_normalization_59[0][0] \n__________________________________________________________________________________________________\nmixed6 (Concatenate) (None, 7, 7, 768) 0 activation_50[0][0] \n activation_53[0][0] \n activation_58[0][0] \n activation_59[0][0] \n__________________________________________________________________________________________________\nconv2d_64 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_64 (BatchNo (None, 7, 7, 192) 576 conv2d_64[0][0] \n__________________________________________________________________________________________________\nactivation_64 (Activation) (None, 7, 7, 192) 0 batch_normalization_64[0][0] \n__________________________________________________________________________________________________\nconv2d_65 (Conv2D) (None, 7, 7, 192) 258048 activation_64[0][0] 
\n__________________________________________________________________________________________________\nbatch_normalization_65 (BatchNo (None, 7, 7, 192) 576 conv2d_65[0][0] \n__________________________________________________________________________________________________\nactivation_65 (Activation) (None, 7, 7, 192) 0 batch_normalization_65[0][0] \n__________________________________________________________________________________________________\nconv2d_61 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_66 (Conv2D) (None, 7, 7, 192) 258048 activation_65[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_61 (BatchNo (None, 7, 7, 192) 576 conv2d_61[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_66 (BatchNo (None, 7, 7, 192) 576 conv2d_66[0][0] \n__________________________________________________________________________________________________\nactivation_61 (Activation) (None, 7, 7, 192) 0 batch_normalization_61[0][0] \n__________________________________________________________________________________________________\nactivation_66 (Activation) (None, 7, 7, 192) 0 batch_normalization_66[0][0] \n__________________________________________________________________________________________________\nconv2d_62 (Conv2D) (None, 7, 7, 192) 258048 activation_61[0][0] \n__________________________________________________________________________________________________\nconv2d_67 (Conv2D) (None, 7, 7, 192) 258048 activation_66[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_62 (BatchNo (None, 7, 7, 192) 576 conv2d_62[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_67 (BatchNo (None, 7, 7, 192) 576 conv2d_67[0][0] \n__________________________________________________________________________________________________\nactivation_62 (Activation) (None, 7, 7, 192) 0 batch_normalization_62[0][0] \n__________________________________________________________________________________________________\nactivation_67 (Activation) (None, 7, 7, 192) 0 batch_normalization_67[0][0] \n__________________________________________________________________________________________________\naverage_pooling2d_6 (AveragePoo (None, 7, 7, 768) 0 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_60 (Conv2D) (None, 7, 7, 192) 147456 mixed6[0][0] \n__________________________________________________________________________________________________\nconv2d_63 (Conv2D) (None, 7, 7, 192) 258048 activation_62[0][0] \n__________________________________________________________________________________________________\nconv2d_68 (Conv2D) (None, 7, 7, 192) 258048 activation_67[0][0] \n__________________________________________________________________________________________________\nconv2d_69 (Conv2D) (None, 7, 7, 192) 147456 average_pooling2d_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_60 (BatchNo (None, 7, 7, 192) 576 conv2d_60[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_63 (BatchNo (None, 7, 
7, 192) 576 conv2d_63[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_68 (BatchNo (None, 7, 7, 192) 576 conv2d_68[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_69 (BatchNo (None, 7, 7, 192) 576 conv2d_69[0][0] \n__________________________________________________________________________________________________\nactivation_60 (Activation) (None, 7, 7, 192) 0 batch_normalization_60[0][0] \n__________________________________________________________________________________________________\nactivation_63 (Activation) (None, 7, 7, 192) 0 batch_normalization_63[0][0] \n__________________________________________________________________________________________________\nactivation_68 (Activation) (None, 7, 7, 192) 0 batch_normalization_68[0][0] \n__________________________________________________________________________________________________\nactivation_69 (Activation) (None, 7, 7, 192) 0 batch_normalization_69[0][0] \n__________________________________________________________________________________________________\nmixed7 (Concatenate) (None, 7, 7, 768) 0 activation_60[0][0] \n activation_63[0][0] \n activation_68[0][0] \n activation_69[0][0] \n__________________________________________________________________________________________________\nflatten_2 (Flatten) (None, 37632) 0 mixed7[0][0] \n__________________________________________________________________________________________________\ndense_4 (Dense) (None, 1024) 38536192 flatten_2[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 1024) 0 dense_4[0][0] \n__________________________________________________________________________________________________\ndense_5 (Dense) (None, 1) 1025 dropout_2[0][0] \n==================================================================================================\nTotal params: 47,512,481\nTrainable params: 38,537,217\nNon-trainable params: 8,975,264\n__________________________________________________________________________________________________\n"
],
[
"# Get the Horse or Human dataset\npath_horse_or_human = f\"{getcwd()}/../tmp2/horse-or-human.zip\"\n# Get the Horse or Human Validation dataset\npath_validation_horse_or_human = f\"{getcwd()}/../tmp2/validation-horse-or-human.zip\"\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nimport os\nimport zipfile\nimport shutil\n\nshutil.rmtree('/tmp')\nlocal_zip = path_horse_or_human\nzip_ref = zipfile.ZipFile(local_zip, 'r')\nzip_ref.extractall('/tmp/training')\nzip_ref.close()\n\nlocal_zip = path_validation_horse_or_human\nzip_ref = zipfile.ZipFile(local_zip, 'r')\nzip_ref.extractall('/tmp/validation')\nzip_ref.close()",
"_____no_output_____"
],
[
"\ntrain_dir = '/tmp/training'\nvalidation_dir = '/tmp/validation'\n\ntrain_horses_dir = os.path.join(train_dir, 'horses') \ntrain_humans_dir = os.path.join(train_dir, 'humans') \nvalidation_horses_dir = os.path.join(validation_dir, 'horses')\nvalidation_humans_dir = os.path.join(validation_dir, 'humans')\n\ntrain_horses_fnames = os.listdir(train_horses_dir)\ntrain_humans_fnames = os.listdir(train_humans_dir)\nvalidation_horses_fnames = os.listdir(validation_horses_dir)\nvalidation_humans_fnames = os.listdir(validation_humans_dir)\n\nprint(len(train_horses_fnames))\nprint(len(train_humans_fnames))\nprint(len(validation_horses_fnames))\nprint(len(validation_humans_fnames))\n\n",
"500\n527\n128\n128\n"
],
[
"train_datagen = ImageDataGenerator(rescale = 1./255.,\n rotation_range = 40,\n width_shift_range = 0.2,\n height_shift_range = 0.2,\n shear_range = 0.2,\n zoom_range = 0.2,\n horizontal_flip = True\n)\n\ntest_datagen = ImageDataGenerator(\n rescale = 1./255.\n)\n\ntrain_generator = train_datagen.flow_from_directory(\n train_dir,\n batch_size=64,\n class_mode='binary',\n target_size=(150,150)\n) \n \n\nvalidation_generator = test_datagen.flow_from_directory(\n validation_dir,\n batch_size=64,\n class_mode='binary',\n target_size=(150,150)\n)\n\n",
"Found 1027 images belonging to 2 classes.\nFound 256 images belonging to 2 classes.\n"
],
[
"\n\ncallbacks = myCallback()\nhistory = model.fit_generator(\n train_generator,\n epochs=50,\n validation_data=validation_generator,\n callbacks=[callbacks]\n)\n",
"Epoch 1/50\n17/17 [==============================] - 31s 2s/step - loss: 0.4578 - accuracy: 0.8491 - val_loss: 0.2821 - val_accuracy: 0.8594\nEpoch 2/50\n17/17 [==============================] - 29s 2s/step - loss: 0.1359 - accuracy: 0.9484 - val_loss: 0.0341 - val_accuracy: 0.9883\nEpoch 3/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0734 - accuracy: 0.9747 - val_loss: 0.0103 - val_accuracy: 1.0000\nEpoch 4/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0935 - accuracy: 0.9698 - val_loss: 0.0015 - val_accuracy: 1.0000\nEpoch 5/50\n17/17 [==============================] - 30s 2s/step - loss: 0.1006 - accuracy: 0.9873 - val_loss: 0.0118 - val_accuracy: 0.9961\nEpoch 6/50\n17/17 [==============================] - 31s 2s/step - loss: 0.0530 - accuracy: 0.9854 - val_loss: 0.0018 - val_accuracy: 1.0000\nEpoch 7/50\n17/17 [==============================] - 31s 2s/step - loss: 0.0256 - accuracy: 0.9883 - val_loss: 2.8526e-04 - val_accuracy: 1.0000\nEpoch 8/50\n17/17 [==============================] - 29s 2s/step - loss: 0.0735 - accuracy: 0.9903 - val_loss: 3.9211e-04 - val_accuracy: 1.0000\nEpoch 9/50\n17/17 [==============================] - 31s 2s/step - loss: 0.0352 - accuracy: 0.9922 - val_loss: 0.0212 - val_accuracy: 0.9961\nEpoch 11/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0138 - accuracy: 0.9951 - val_loss: 0.0301 - val_accuracy: 0.9922\nEpoch 12/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0469 - accuracy: 0.9796 - val_loss: 0.0323 - val_accuracy: 0.9883\nEpoch 13/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0168 - accuracy: 0.9932 - val_loss: 0.0567 - val_accuracy: 0.9805\nEpoch 14/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0217 - accuracy: 0.9912 - val_loss: 0.0346 - val_accuracy: 0.9883\nEpoch 15/50\n17/17 [==============================] - 29s 2s/step - loss: 0.0328 - accuracy: 0.9873 - val_loss: 0.0164 - val_accuracy: 0.9961\nEpoch 16/50\n17/17 [==============================] - 30s 2s/step - loss: 0.0806 - accuracy: 0.9912 - val_loss: 0.0037 - val_accuracy: 0.9961\nEpoch 17/50\n16/17 [===========================>..] - ETA: 1s - loss: 0.0066 - accuracy: 0.9990\nReached 99.9% accuracy so cancelling training!\n17/17 [==============================] - 30s 2s/step - loss: 0.0064 - accuracy: 0.9990 - val_loss: 0.0102 - val_accuracy: 0.9961\n"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nacc = history.history['accuracy']\nval_acc = history.history['val_accuracy']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(len(acc))\n\nplt.plot(epochs, acc, 'r', label='Training accuracy')\nplt.plot(epochs, val_acc, 'b', label='Validation accuracy')\nplt.title('Training and validation accuracy')\nplt.legend(loc=0)\nplt.figure()\n\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## **Refrence**\n\nhttps://www.coursera.org\n\nhttps://www.tensorflow.org/\n\nCopyright 2020 Abhishek Gargha Maheshwarappa\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0bd775dc3c992bbed99095261ab5a2b6036be20 | 9,595 | ipynb | Jupyter Notebook | doc/source/tutorials/LNA Example.ipynb | boxyrobot/scikit-rf | cd09cde1f0e4391fc855baf5539c637f600ea5b7 | [
"BSD-3-Clause"
] | 3 | 2022-01-13T03:49:17.000Z | 2022-01-23T06:00:50.000Z | doc/source/tutorials/LNA Example.ipynb | buguen/scikit-rf | c2bc0dd1050df1cb8264010fef32f6e78cdf5851 | [
"BSD-3-Clause"
] | 6 | 2020-08-24T09:50:30.000Z | 2020-08-27T22:20:36.000Z | doc/source/tutorials/LNA Example.ipynb | buguen/scikit-rf | c2bc0dd1050df1cb8264010fef32f6e78cdf5851 | [
"BSD-3-Clause"
] | null | null | null | 28.900602 | 323 | 0.554977 | [
[
[
"Let's design a LNA using Infineon's BFU520 transistor. First we need to import scikit-rf and a bunch of other utilities:",
"_____no_output_____"
]
],
[
[
"\nimport numpy as np\n\nimport skrf\nfrom skrf.media import DistributedCircuit\nimport skrf.frequency as freq\nimport skrf.network as net\nimport skrf.util\n\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = [10, 10]\n\nf = freq.Frequency(0.4, 2, 101)\ntem = DistributedCircuit(f, z0=50)",
"_____no_output_____"
],
[
"# import the scattering parameters/noise data for the transistor\n\nbjt = net.Network('BFU520_05V0_010mA_NF_SP.s2p').interpolate(f)\n\nbjt",
"_____no_output_____"
]
],
[
[
"Let's plot the smith chart for it:",
"_____no_output_____"
]
],
[
[
"bjt.plot_s_smith()",
"_____no_output_____"
]
],
[
[
"Now let's calculate the source and load stablity curves.\nI'm slightly misusing the `Network` type to plot the curves; normally the curves you pass in to `Network` should be a function of frequency, but it also works to draw these circles as long as you don't try to use any other functions on them",
"_____no_output_____"
]
],
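  [
    [
      "For reference, the next cell evaluates the standard stability-circle relations (as given in most microwave texts), with $\\Delta = S_{11}S_{22} - S_{12}S_{21}$: the load (output-plane) circle has center $C_L = \\frac{(S_{22} - \\Delta S_{11}^*)^*}{|S_{22}|^2 - |\\Delta|^2}$ and radius $r_L = \\left| \\frac{S_{12} S_{21}}{|S_{22}|^2 - |\\Delta|^2} \\right|$, and the source (input-plane) circle is the same expression with $S_{11}$ and $S_{22}$ swapped.",
      "_____no_output_____"
    ]
  ],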
[
[
"sqabs = lambda x: np.square(np.absolute(x))\n\ndelta = bjt.s11.s*bjt.s22.s - bjt.s12.s*bjt.s21.s\nrl = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s22.s) - sqabs(delta)))\ncl = np.conj(bjt.s22.s - delta*np.conj(bjt.s11.s))/(sqabs(bjt.s22.s) - sqabs(delta))\n\nrs = np.absolute((bjt.s12.s * bjt.s21.s)/(sqabs(bjt.s11.s) - sqabs(delta)))\ncs = np.conj(bjt.s11.s - delta*np.conj(bjt.s22.s))/(sqabs(bjt.s11.s) - sqabs(delta))\n\ndef calc_circle(c, r):\n theta = np.linspace(0, 2*np.pi, 1000)\n return c + r*np.exp(1.0j*theta)",
"_____no_output_____"
],
[
"for i, f in enumerate(bjt.f):\n # decimate it a little\n if i % 100 != 0:\n continue\n n = net.Network(name=str(f/1.e+9), s=calc_circle(cs[i][0, 0], rs[i][0, 0]))\n n.plot_s_smith()",
"_____no_output_____"
],
[
"for i, f in enumerate(bjt.f):\n # decimate it a little\n if i % 100 != 0:\n continue\n n = net.Network(name=str(f/1.e+9), s=calc_circle(cl[i][0, 0], rl[i][0, 0]))\n n.plot_s_smith()",
"_____no_output_____"
]
],
[
[
"So we can see that we need to avoid inductive loads near short circuit in the input matching network and high impedance inductive loads on the output.\n\nLet's draw some constant noise circles. First we grab the noise parameters for our target frequency from the network model:",
"_____no_output_____"
]
],
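  [
    [
      "For reference, the constant-noise-figure circles drawn below follow the usual relations: with $N = \\frac{F - F_{min}}{4 r_n} |1 + \\Gamma_{opt}|^2$ (where $r_n$ is the equivalent noise resistance normalized to 50 ohms), the circle for noise figure $F$ has center $c_F = \\frac{\\Gamma_{opt}}{1 + N}$ and radius $r_F = \\frac{1}{1 + N} \\sqrt{N^2 + N(1 - |\\Gamma_{opt}|^2)}$. The next cell pulls $r_n$, $\\Gamma_{opt}$ and $F_{min}$ at 915 MHz and draws these circles:",
      "_____no_output_____"
    ]
  ],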
[
[
"idx_915mhz = skrf.util.find_nearest_index(bjt.f, 915.e+6)\n\n# we need the normalized equivalent noise and optimum source coefficient to calculate the constant noise circles\nrn = bjt.rn[idx_915mhz]/50\ngamma_opt = bjt.g_opt[idx_915mhz]\nfmin = bjt.nfmin[idx_915mhz]\n\nfor nf_added in [0, 0.1, 0.2, 0.5]:\n nf = 10**(nf_added/10) * fmin\n \n N = (nf - fmin)*abs(1+gamma_opt)**2/(4*rn)\n c_n = gamma_opt/(1+N)\n r_n = 1/(1-N)*np.sqrt(N**2 + N*(1-abs(gamma_opt)**2))\n \n n = net.Network(name=str(nf_added), s=calc_circle(c_n, r_n))\n n.plot_s_smith()\n\nprint(\"the optimum source reflection coefficient is \", gamma_opt)",
"_____no_output_____"
]
],
[
[
"So we can see from the chart that just leaving the input at 50 ohms gets us under 0.1 dB of extra noise, which seems pretty good. I'm actually not sure that these actually correspond to the noise figure level increments I have listed up there, but the circles should at least correspond to increasing noise figures\n\nSo let's leave the input at 50 ohms and figure out how to match the output network to maximize gain and stability. Let's see what matching the load impedance with an unmatched input gives us:",
"_____no_output_____"
]
],
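  [
    [
      "For a given source reflection coefficient $\\Gamma_S$, the transistor's output presents $\\Gamma_{out} = S_{22} + \\frac{S_{12} S_{21} \\Gamma_S}{1 - S_{11} \\Gamma_S}$, and a conjugate match sets $\\Gamma_L = \\Gamma_{out}^*$; with $\\Gamma_S = 0$ this reduces to $S_{22}^*$. The next cell computes this load point and checks that it falls outside the load stability circle at 915 MHz:",
      "_____no_output_____"
    ]
  ],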
[
[
"gamma_s = 0.0\n\ngamma_l = np.conj(bjt.s22.s - bjt.s21.s*gamma_s*bjt.s12.s/(1-bjt.s11.s*gamma_s))\ngamma_l = gamma_l[idx_915mhz, 0, 0]\nis_gamma_l_stable = np.absolute(gamma_l - cl[idx_915mhz]) > rl[idx_915mhz]\n\ngamma_l, is_gamma_l_stable",
"_____no_output_____"
]
],
[
[
"This looks like it may be kind of close to the load instability circles, so it might make sense to pick a load point with less gain for more stability, or to pick a different source impedance with more noise.\n\nBut for now let's just build a matching network for this and see how it performs:",
"_____no_output_____"
]
],
[
[
"def calc_matching_network_vals(z1, z2):\n flipped = np.real(z1) < np.real(z2)\n if flipped:\n z2, z1 = z1, z2\n \n # cancel out the imaginary parts of both input and output impedances \n z1_par = 0.0\n if abs(np.imag(z1)) > 1e-6:\n # parallel something to cancel out the imaginary part of\n # z1's impedance\n z1_par = 1/(-1j*np.imag(1/z1))\n z1 = 1/(1./z1 + 1/z1_par)\n z2_ser = 0.0\n if abs(np.imag(z2)) > 1e-6:\n z2_ser = -1j*np.imag(z2)\n z2 = z2 + z2_ser\n \n Q = np.sqrt((np.real(z1) - np.real(z2))/np.real(z2))\n x1 = -1.j * np.real(z1)/Q\n x2 = 1.j * np.real(z2)*Q\n \n x1_tot = 1/(1/z1_par + 1/x1)\n x2_tot = z2_ser + x2\n if flipped:\n return x2_tot, x1_tot\n else:\n return x1_tot, x2_tot\n\nz_l = net.s2z(np.array([[[gamma_l]]]))[0,0,0]\n# note that we're matching against the conjugate;\n# this is because we want to see z_l from the BJT side\n# if we plugged in z the matching network would make\n# the 50 ohms look like np.conj(z) to match against it, so\n# we use np.conj(z_l) so that it'll look like z_l from the BJT's side\nz_par, z_ser = calc_matching_network_vals(np.conj(z_l), 50)\nz_l, z_par, z_ser",
"_____no_output_____"
]
],
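  [
    [
      "The function above is a standard L-section match: after any reactance has been resonated out, with $R_1 > R_2$ it uses $Q = \\sqrt{R_1/R_2 - 1}$, a shunt reactance $X_p = -R_1/Q$ across the higher-resistance side and a series reactance $X_s = Q R_2$ on the lower-resistance side.",
      "_____no_output_____"
    ]
  ],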
[
[
"Let's calculate what the component values are:",
"_____no_output_____"
]
],
[
[
"c_par = np.real(1/(2j*np.pi*915e+6*z_par))\nl_ser = np.real(z_ser/(2j*np.pi*915e+6))\n\nc_par, l_ser",
"_____no_output_____"
]
],
[
[
"The capacitance is kind of low but the inductance seems reasonable. Let's test it out:",
"_____no_output_____"
]
],
[
[
"output_network = tem.shunt_capacitor(c_par) ** tem.inductor(l_ser)\n\namplifier = bjt ** output_network\n\namplifier.plot_s_smith()",
"_____no_output_____"
]
],
[
[
"That looks pretty reasonable; let's take a look at the S21 to see what we got:",
"_____no_output_____"
]
],
[
[
"amplifier.s21.plot_s_db()",
"_____no_output_____"
]
],
[
[
"So about 18 dB gain; let's see what our noise figure is:",
"_____no_output_____"
]
],
[
[
"10*np.log10(amplifier.nf(50.)[idx_915mhz])",
"_____no_output_____"
]
],
[
[
"So 0.96 dB NF, which is reasonably close to the BJT tombstone optimal NF of 0.95 dB",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0bd853a33e64ffcbcd59327601f4bbebf221f8a | 420,343 | ipynb | Jupyter Notebook | Global_Warmth_NASA/Carbon_Dioxide_fbprophet.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | 12 | 2020-05-03T10:24:24.000Z | 2021-09-28T21:05:22.000Z | Global_Warmth_NASA/Carbon_Dioxide_fbprophet.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | null | null | null | Global_Warmth_NASA/Carbon_Dioxide_fbprophet.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | 14 | 2019-08-20T14:50:43.000Z | 2022-02-28T13:39:26.000Z | 144.398145 | 53,154 | 0.7579 | [
[
[
"# Carbon Dioxide Analysis",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# Read Website\nfrom urllib.request import urlopen \ndef read_url(url): \n print(urlopen(url).read().decode())\nread_url(\"ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt\")",
"# --------------------------------------------------------------------\n# USE OF NOAA ESRL DATA\n# \n# These data are made freely available to the public and the\n# scientific community in the belief that their wide dissemination\n# will lead to greater understanding and new scientific insights.\n# The availability of these data does not constitute publication\n# of the data. NOAA relies on the ethics and integrity of the user to\n# ensure that ESRL receives fair credit for their work. If the data \n# are obtained for potential use in a publication or presentation, \n# ESRL should be informed at the outset of the nature of this work. \n# If the ESRL data are essential to the work, or if an important \n# result or conclusion depends on the ESRL data, co-authorship\n# may be appropriate. This should be discussed at an early stage in\n# the work. Manuscripts using the ESRL data should be sent to ESRL\n# for review before they are submitted for publication so we can\n# ensure that the quality and limitations of the data are accurately\n# represented.\n# \n# Contact: Pieter Tans (303 497 6678; [email protected])\n# \n# File Creation: Mon May 6 09:49:46 2019\n# \n# RECIPROCITY\n# \n# Use of these data implies an agreement to reciprocate.\n# Laboratories making similar measurements agree to make their\n# own data available to the general public and to the scientific\n# community in an equally complete and easily accessible form.\n# Modelers are encouraged to make available to the community,\n# upon request, their own tools used in the interpretation\n# of the ESRL data, namely well documented model code, transport\n# fields, and additional information necessary for other\n# scientists to repeat the work and to run modified versions.\n# Model availability includes collaborative support for new\n# users of the models.\n# --------------------------------------------------------------------\n# \n# \n# See www.esrl.noaa.gov/gmd/ccgg/trends/ for additional details.\n# \n# Data from March 1958 through April 1974 have been obtained by C. David Keeling\n# of the Scripps Institution of Oceanography (SIO) and were obtained from the\n# Scripps website (scrippsco2.ucsd.edu).\n#\n# The \"average\" column contains the monthly mean CO2 mole fraction determined\n# from daily averages. The mole fraction of CO2, expressed as parts per million\n# (ppm) is the number of molecules of CO2 in every one million molecules of dried\n# air (water vapor removed). If there are missing days concentrated either early\n# or late in the month, the monthly mean is corrected to the middle of the month\n# using the average seasonal cycle. Missing months are denoted by -99.99.\n# The \"interpolated\" column includes average values from the preceding column\n# and interpolated values where data are missing. Interpolated values are\n# computed in two steps. First, we compute for each month the average seasonal\n# cycle in a 7-year window around each monthly value. In this way the seasonal\n# cycle is allowed to change slowly over time. We then determine the \"trend\"\n# value for each month by removing the seasonal cycle; this result is shown in\n# the \"trend\" column. 
Trend values are linearly interpolated for missing months.\n# The interpolated monthly mean is then the sum of the average seasonal cycle\n# value and the trend value for the missing month.\n#\n# NOTE: In general, the data presented for the last year are subject to change, \n# depending on recalibration of the reference gas mixtures used, and other quality\n# control procedures. Occasionally, earlier years may also be changed for the same\n# reasons. Usually these changes are minor.\n#\n# CO2 expressed as a mole fraction in dry air, micromol/mol, abbreviated as ppm\n#\n# (-99.99 missing data; -1 no data for #daily means in month)\n#\n# decimal average interpolated trend #days\n# date (season corr)\n1958 3 1958.208 315.71 315.71 314.62 -1\n1958 4 1958.292 317.45 317.45 315.29 -1\n1958 5 1958.375 317.50 317.50 314.71 -1\n1958 6 1958.458 -99.99 317.10 314.85 -1\n1958 7 1958.542 315.86 315.86 314.98 -1\n1958 8 1958.625 314.93 314.93 315.94 -1\n1958 9 1958.708 313.20 313.20 315.91 -1\n1958 10 1958.792 -99.99 312.66 315.61 -1\n1958 11 1958.875 313.33 313.33 315.31 -1\n1958 12 1958.958 314.67 314.67 315.61 -1\n1959 1 1959.042 315.62 315.62 315.70 -1\n1959 2 1959.125 316.38 316.38 315.88 -1\n1959 3 1959.208 316.71 316.71 315.62 -1\n1959 4 1959.292 317.72 317.72 315.56 -1\n1959 5 1959.375 318.29 318.29 315.50 -1\n1959 6 1959.458 318.15 318.15 315.92 -1\n1959 7 1959.542 316.54 316.54 315.66 -1\n1959 8 1959.625 314.80 314.80 315.81 -1\n1959 9 1959.708 313.84 313.84 316.55 -1\n1959 10 1959.792 313.26 313.26 316.19 -1\n1959 11 1959.875 314.80 314.80 316.78 -1\n1959 12 1959.958 315.58 315.58 316.52 -1\n1960 1 1960.042 316.43 316.43 316.51 -1\n1960 2 1960.125 316.97 316.97 316.47 -1\n1960 3 1960.208 317.58 317.58 316.49 -1\n1960 4 1960.292 319.02 319.02 316.86 -1\n1960 5 1960.375 320.03 320.03 317.24 -1\n1960 6 1960.458 319.59 319.59 317.36 -1\n1960 7 1960.542 318.18 318.18 317.30 -1\n1960 8 1960.625 315.91 315.91 316.92 -1\n1960 9 1960.708 314.16 314.16 316.87 -1\n1960 10 1960.792 313.83 313.83 316.76 -1\n1960 11 1960.875 315.00 315.00 316.98 -1\n1960 12 1960.958 316.19 316.19 317.13 -1\n1961 1 1961.042 316.93 316.93 317.03 -1\n1961 2 1961.125 317.70 317.70 317.28 -1\n1961 3 1961.208 318.54 318.54 317.47 -1\n1961 4 1961.292 319.48 319.48 317.27 -1\n1961 5 1961.375 320.58 320.58 317.70 -1\n1961 6 1961.458 319.77 319.77 317.48 -1\n1961 7 1961.542 318.57 318.57 317.70 -1\n1961 8 1961.625 316.79 316.79 317.80 -1\n1961 9 1961.708 314.80 314.80 317.49 -1\n1961 10 1961.792 315.38 315.38 318.35 -1\n1961 11 1961.875 316.10 316.10 318.13 -1\n1961 12 1961.958 317.01 317.01 317.94 -1\n1962 1 1962.042 317.94 317.94 318.06 -1\n1962 2 1962.125 318.56 318.56 318.11 -1\n1962 3 1962.208 319.68 319.68 318.57 -1\n1962 4 1962.292 320.63 320.63 318.45 -1\n1962 5 1962.375 321.01 321.01 318.20 -1\n1962 6 1962.458 320.55 320.55 318.27 -1\n1962 7 1962.542 319.58 319.58 318.67 -1\n1962 8 1962.625 317.40 317.40 318.48 -1\n1962 9 1962.708 316.26 316.26 319.03 -1\n1962 10 1962.792 315.42 315.42 318.33 -1\n1962 11 1962.875 316.69 316.69 318.62 -1\n1962 12 1962.958 317.69 317.69 318.61 -1\n1963 1 1963.042 318.74 318.74 318.91 -1\n1963 2 1963.125 319.08 319.08 318.68 -1\n1963 3 1963.208 319.86 319.86 318.69 -1\n1963 4 1963.292 321.39 321.39 319.09 -1\n1963 5 1963.375 322.25 322.25 319.39 -1\n1963 6 1963.458 321.47 321.47 319.16 -1\n1963 7 1963.542 319.74 319.74 318.77 -1\n1963 8 1963.625 317.77 317.77 318.83 -1\n1963 9 1963.708 316.21 316.21 319.06 -1\n1963 10 1963.792 315.99 315.99 319.00 -1\n1963 11 1963.875 317.12 317.12 319.10 
-1\n1963 12 1963.958 318.31 318.31 319.25 -1\n1964 1 1964.042 319.57 319.57 319.67 -1\n1964 2 1964.125 -99.99 320.07 319.61 -1\n1964 3 1964.208 -99.99 320.73 319.55 -1\n1964 4 1964.292 -99.99 321.77 319.48 -1\n1964 5 1964.375 322.25 322.25 319.42 -1\n1964 6 1964.458 321.89 321.89 319.69 -1\n1964 7 1964.542 320.44 320.44 319.58 -1\n1964 8 1964.625 318.70 318.70 319.81 -1\n1964 9 1964.708 316.70 316.70 319.56 -1\n1964 10 1964.792 316.79 316.79 319.78 -1\n1964 11 1964.875 317.79 317.79 319.72 -1\n1964 12 1964.958 318.71 318.71 319.59 -1\n1965 1 1965.042 319.44 319.44 319.48 -1\n1965 2 1965.125 320.44 320.44 319.97 -1\n1965 3 1965.208 320.89 320.89 319.65 -1\n1965 4 1965.292 322.13 322.13 319.80 -1\n1965 5 1965.375 322.16 322.16 319.36 -1\n1965 6 1965.458 321.87 321.87 319.65 -1\n1965 7 1965.542 321.39 321.39 320.51 -1\n1965 8 1965.625 318.81 318.81 319.93 -1\n1965 9 1965.708 317.81 317.81 320.68 -1\n1965 10 1965.792 317.30 317.30 320.36 -1\n1965 11 1965.875 318.87 318.87 320.87 -1\n1965 12 1965.958 319.42 319.42 320.26 -1\n1966 1 1966.042 320.62 320.62 320.63 -1\n1966 2 1966.125 321.59 321.59 321.10 -1\n1966 3 1966.208 322.39 322.39 321.16 -1\n1966 4 1966.292 323.87 323.87 321.51 -1\n1966 5 1966.375 324.01 324.01 321.18 -1\n1966 6 1966.458 323.75 323.75 321.52 -1\n1966 7 1966.542 322.39 322.39 321.49 -1\n1966 8 1966.625 320.37 320.37 321.50 -1\n1966 9 1966.708 318.64 318.64 321.54 -1\n1966 10 1966.792 318.10 318.10 321.18 -1\n1966 11 1966.875 319.79 319.79 321.84 -1\n1966 12 1966.958 321.08 321.08 321.95 -1\n1967 1 1967.042 322.07 322.07 322.07 -1\n1967 2 1967.125 322.50 322.50 321.94 -1\n1967 3 1967.208 323.04 323.04 321.72 -1\n1967 4 1967.292 324.42 324.42 322.05 -1\n1967 5 1967.375 325.00 325.00 322.27 -1\n1967 6 1967.458 324.09 324.09 321.94 -1\n1967 7 1967.542 322.55 322.55 321.66 -1\n1967 8 1967.625 320.92 320.92 322.04 -1\n1967 9 1967.708 319.31 319.31 322.19 -1\n1967 10 1967.792 319.31 319.31 322.36 -1\n1967 11 1967.875 320.72 320.72 322.78 -1\n1967 12 1967.958 321.96 321.96 322.86 -1\n1968 1 1968.042 322.57 322.57 322.55 -1\n1968 2 1968.125 323.15 323.15 322.56 -1\n1968 3 1968.208 323.89 323.89 322.59 -1\n1968 4 1968.292 325.02 325.02 322.73 -1\n1968 5 1968.375 325.57 325.57 322.87 -1\n1968 6 1968.458 325.36 325.36 323.20 -1\n1968 7 1968.542 324.14 324.14 323.25 -1\n1968 8 1968.625 322.03 322.03 323.15 -1\n1968 9 1968.708 320.41 320.41 323.31 -1\n1968 10 1968.792 320.25 320.25 323.32 -1\n1968 11 1968.875 321.31 321.31 323.32 -1\n1968 12 1968.958 322.84 322.84 323.69 -1\n1969 1 1969.042 324.00 324.00 323.98 -1\n1969 2 1969.125 324.42 324.42 323.89 -1\n1969 3 1969.208 325.64 325.64 324.41 -1\n1969 4 1969.292 326.66 326.66 324.35 -1\n1969 5 1969.375 327.34 327.34 324.57 -1\n1969 6 1969.458 326.76 326.76 324.63 -1\n1969 7 1969.542 325.88 325.88 325.08 -1\n1969 8 1969.625 323.67 323.67 324.80 -1\n1969 9 1969.708 322.38 322.38 325.28 -1\n1969 10 1969.792 321.78 321.78 324.84 -1\n1969 11 1969.875 322.85 322.85 324.78 -1\n1969 12 1969.958 324.11 324.11 324.88 -1\n1970 1 1970.042 325.03 325.03 325.04 -1\n1970 2 1970.125 325.99 325.99 325.42 -1\n1970 3 1970.208 326.87 326.87 325.69 -1\n1970 4 1970.292 328.13 328.13 325.86 -1\n1970 5 1970.375 328.07 328.07 325.27 -1\n1970 6 1970.458 327.66 327.66 325.52 -1\n1970 7 1970.542 326.35 326.35 325.51 -1\n1970 8 1970.625 324.69 324.69 325.76 -1\n1970 9 1970.708 323.10 323.10 325.93 -1\n1970 10 1970.792 323.16 323.16 326.15 -1\n1970 11 1970.875 323.98 323.98 325.96 -1\n1970 12 1970.958 325.13 325.13 326.06 -1\n1971 1 1971.042 326.17 326.17 326.25 
-1\n1971 2 1971.125 326.68 326.68 326.10 -1\n1971 3 1971.208 327.18 327.18 325.94 -1\n1971 4 1971.292 327.78 327.78 325.47 -1\n1971 5 1971.375 328.92 328.92 326.11 -1\n1971 6 1971.458 328.57 328.57 326.40 -1\n1971 7 1971.542 327.34 327.34 326.45 -1\n1971 8 1971.625 325.46 325.46 326.49 -1\n1971 9 1971.708 323.36 323.36 326.19 -1\n1971 10 1971.792 323.57 323.57 326.58 -1\n1971 11 1971.875 324.80 324.80 326.82 -1\n1971 12 1971.958 326.01 326.01 327.02 -1\n1972 1 1972.042 326.77 326.77 326.85 -1\n1972 2 1972.125 327.63 327.63 327.04 -1\n1972 3 1972.208 327.75 327.75 326.53 -1\n1972 4 1972.292 329.72 329.72 327.42 -1\n1972 5 1972.375 330.07 330.07 327.23 -1\n1972 6 1972.458 329.09 329.09 326.92 -1\n1972 7 1972.542 328.05 328.05 327.20 -1\n1972 8 1972.625 326.32 326.32 327.37 -1\n1972 9 1972.708 324.93 324.93 327.76 -1\n1972 10 1972.792 325.06 325.06 328.06 -1\n1972 11 1972.875 326.50 326.50 328.50 -1\n1972 12 1972.958 327.55 327.55 328.55 -1\n1973 1 1973.042 328.54 328.54 328.58 -1\n1973 2 1973.125 329.56 329.56 328.86 -1\n1973 3 1973.208 330.30 330.30 328.99 -1\n1973 4 1973.292 331.50 331.50 329.14 -1\n1973 5 1973.375 332.48 332.48 329.62 -1\n1973 6 1973.458 332.07 332.07 329.94 -1\n1973 7 1973.542 330.87 330.87 330.05 -1\n1973 8 1973.625 329.31 329.31 330.42 -1\n1973 9 1973.708 327.51 327.51 330.45 -1\n1973 10 1973.792 327.18 327.18 330.24 -1\n1973 11 1973.875 328.16 328.16 330.16 -1\n1973 12 1973.958 328.64 328.64 329.66 -1\n1974 1 1974.042 329.35 329.35 329.45 -1\n1974 2 1974.125 330.71 330.71 330.12 -1\n1974 3 1974.208 331.48 331.48 330.20 -1\n1974 4 1974.292 332.65 332.65 330.26 -1\n1974 5 1974.375 333.20 333.20 330.27 14\n1974 6 1974.458 332.16 332.16 329.94 26\n1974 7 1974.542 331.07 331.07 330.23 24\n1974 8 1974.625 329.12 329.12 330.26 27\n1974 9 1974.708 327.32 327.32 330.28 24\n1974 10 1974.792 327.28 327.28 330.36 24\n1974 11 1974.875 328.30 328.30 330.28 27\n1974 12 1974.958 329.58 329.58 330.55 28\n1975 1 1975.042 330.73 330.73 330.89 29\n1975 2 1975.125 331.46 331.46 330.93 26\n1975 3 1975.208 331.90 331.90 330.54 18\n1975 4 1975.292 333.17 333.17 330.67 25\n1975 5 1975.375 333.94 333.94 330.98 28\n1975 6 1975.458 333.45 333.45 331.20 26\n1975 7 1975.542 331.98 331.98 331.12 24\n1975 8 1975.625 329.95 329.95 331.11 24\n1975 9 1975.708 328.50 328.50 331.48 23\n1975 10 1975.792 328.34 328.34 331.46 12\n1975 11 1975.875 329.37 329.37 331.41 19\n1975 12 1975.958 -99.99 330.58 331.60 0\n1976 1 1976.042 331.59 331.59 331.79 20\n1976 2 1976.125 332.75 332.75 332.20 22\n1976 3 1976.208 333.52 333.52 332.05 20\n1976 4 1976.292 334.64 334.64 332.13 19\n1976 5 1976.375 334.77 334.77 331.84 22\n1976 6 1976.458 334.00 334.00 331.65 17\n1976 7 1976.542 333.06 333.06 332.14 16\n1976 8 1976.625 330.68 330.68 331.88 23\n1976 9 1976.708 328.95 328.95 331.94 13\n1976 10 1976.792 328.75 328.75 331.93 20\n1976 11 1976.875 330.15 330.15 332.29 25\n1976 12 1976.958 331.62 331.62 332.66 20\n1977 1 1977.042 332.66 332.66 332.76 24\n1977 2 1977.125 333.13 333.13 332.51 19\n1977 3 1977.208 334.95 334.95 333.35 23\n1977 4 1977.292 336.13 336.13 333.51 21\n1977 5 1977.375 336.93 336.93 333.98 20\n1977 6 1977.458 336.17 336.17 333.80 22\n1977 7 1977.542 334.88 334.88 334.02 21\n1977 8 1977.625 332.56 332.56 333.91 18\n1977 9 1977.708 331.29 331.29 334.36 19\n1977 10 1977.792 331.27 331.27 334.52 23\n1977 11 1977.875 332.41 332.41 334.64 21\n1977 12 1977.958 333.60 333.60 334.61 26\n1978 1 1978.042 334.95 334.95 335.01 22\n1978 2 1978.125 335.25 335.25 334.58 25\n1978 3 1978.208 336.66 336.66 335.00 28\n1978 
4 1978.292 337.69 337.69 335.06 18\n1978 5 1978.375 338.03 338.03 335.06 26\n1978 6 1978.458 338.01 338.01 335.59 17\n1978 7 1978.542 336.41 336.41 335.57 22\n1978 8 1978.625 334.41 334.41 335.87 19\n1978 9 1978.708 332.37 332.37 335.51 17\n1978 10 1978.792 332.41 332.41 335.68 23\n1978 11 1978.875 333.75 333.75 335.99 24\n1978 12 1978.958 334.90 334.90 335.88 27\n1979 1 1979.042 336.14 336.14 336.22 27\n1979 2 1979.125 336.69 336.69 336.01 26\n1979 3 1979.208 338.27 338.27 336.54 21\n1979 4 1979.292 338.95 338.95 336.24 21\n1979 5 1979.375 339.21 339.21 336.21 12\n1979 6 1979.458 339.26 339.26 336.84 19\n1979 7 1979.542 337.54 337.54 336.72 26\n1979 8 1979.625 335.75 335.75 337.24 23\n1979 9 1979.708 333.98 333.98 337.20 19\n1979 10 1979.792 334.19 334.19 337.53 24\n1979 11 1979.875 335.31 335.31 337.57 27\n1979 12 1979.958 336.81 336.81 337.79 22\n1980 1 1980.042 337.90 337.90 338.09 29\n1980 2 1980.125 338.34 338.34 337.82 26\n1980 3 1980.208 340.01 340.01 338.43 25\n1980 4 1980.292 340.93 340.93 338.30 24\n1980 5 1980.375 341.48 341.48 338.43 25\n1980 6 1980.458 341.33 341.33 338.84 22\n1980 7 1980.542 339.40 339.40 338.54 21\n1980 8 1980.625 337.70 337.70 339.12 17\n1980 9 1980.708 336.19 336.19 339.33 17\n1980 10 1980.792 336.15 336.15 339.42 25\n1980 11 1980.875 337.27 337.27 339.42 24\n1980 12 1980.958 338.32 338.32 339.26 19\n1981 1 1981.042 339.29 339.29 339.38 28\n1981 2 1981.125 340.55 340.55 339.93 25\n1981 3 1981.208 341.61 341.61 340.06 25\n1981 4 1981.292 342.53 342.53 339.94 24\n1981 5 1981.375 343.04 343.04 339.98 30\n1981 6 1981.458 342.54 342.54 340.07 25\n1981 7 1981.542 340.78 340.78 339.92 24\n1981 8 1981.625 338.44 338.44 339.86 26\n1981 9 1981.708 336.95 336.95 340.17 27\n1981 10 1981.792 337.08 337.08 340.43 28\n1981 11 1981.875 338.58 338.58 340.74 25\n1981 12 1981.958 339.88 339.88 340.79 19\n1982 1 1982.042 340.96 340.96 341.10 27\n1982 2 1982.125 341.73 341.73 341.10 23\n1982 3 1982.208 342.81 342.81 341.21 18\n1982 4 1982.292 343.97 343.97 341.37 8\n1982 5 1982.375 344.63 344.63 341.56 26\n1982 6 1982.458 343.79 343.79 341.35 26\n1982 7 1982.542 342.32 342.32 341.55 28\n1982 8 1982.625 340.09 340.09 341.51 24\n1982 9 1982.708 338.28 338.28 341.47 21\n1982 10 1982.792 338.29 338.29 341.64 26\n1982 11 1982.875 339.60 339.60 341.73 25\n1982 12 1982.958 340.90 340.90 341.79 26\n1983 1 1983.042 341.68 341.68 341.84 28\n1983 2 1983.125 342.90 342.90 342.32 24\n1983 3 1983.208 343.33 343.33 341.82 26\n1983 4 1983.292 345.25 345.25 342.66 24\n1983 5 1983.375 346.03 346.03 342.87 28\n1983 6 1983.458 345.63 345.63 343.15 20\n1983 7 1983.542 344.19 344.19 343.44 20\n1983 8 1983.625 342.27 342.27 343.66 16\n1983 9 1983.708 340.35 340.35 343.49 15\n1983 10 1983.792 340.38 340.38 343.72 20\n1983 11 1983.875 341.59 341.59 343.71 26\n1983 12 1983.958 343.05 343.05 343.96 19\n1984 1 1984.042 344.10 344.10 344.20 23\n1984 2 1984.125 344.79 344.79 344.22 23\n1984 3 1984.208 345.52 345.52 344.09 19\n1984 4 1984.292 -99.99 346.84 344.27 2\n1984 5 1984.375 347.63 347.63 344.45 21\n1984 6 1984.458 346.98 346.98 344.52 21\n1984 7 1984.542 345.53 345.53 344.76 21\n1984 8 1984.625 343.55 343.55 344.94 12\n1984 9 1984.708 341.40 341.40 344.58 14\n1984 10 1984.792 341.67 341.67 345.01 12\n1984 11 1984.875 343.10 343.10 345.20 18\n1984 12 1984.958 344.70 344.70 345.57 12\n1985 1 1985.042 345.21 345.21 345.31 23\n1985 2 1985.125 346.16 346.16 345.61 17\n1985 3 1985.208 347.74 347.74 346.37 16\n1985 4 1985.292 348.34 348.34 345.79 19\n1985 5 1985.375 349.06 349.06 345.91 24\n1985 6 
1985.458 348.38 348.38 345.94 23\n1985 7 1985.542 346.72 346.72 345.89 18\n1985 8 1985.625 345.02 345.02 346.34 18\n1985 9 1985.708 343.27 343.27 346.40 25\n1985 10 1985.792 343.13 343.13 346.42 20\n1985 11 1985.875 344.49 344.49 346.61 22\n1985 12 1985.958 345.88 345.88 346.81 25\n1986 1 1986.042 346.56 346.56 346.59 23\n1986 2 1986.125 347.28 347.28 346.74 25\n1986 3 1986.208 348.01 348.01 346.68 17\n1986 4 1986.292 349.77 349.77 347.22 22\n1986 5 1986.375 350.38 350.38 347.26 18\n1986 6 1986.458 349.93 349.93 347.52 17\n1986 7 1986.542 348.16 348.16 347.33 20\n1986 8 1986.625 346.08 346.08 347.40 18\n1986 9 1986.708 345.22 345.22 348.35 17\n1986 10 1986.792 344.51 344.51 347.77 26\n1986 11 1986.875 345.93 345.93 348.04 23\n1986 12 1986.958 347.22 347.22 348.13 24\n1987 1 1987.042 348.52 348.52 348.47 26\n1987 2 1987.125 348.73 348.73 348.02 25\n1987 3 1987.208 349.73 349.73 348.30 22\n1987 4 1987.292 351.31 351.31 348.77 26\n1987 5 1987.375 352.09 352.09 349.01 27\n1987 6 1987.458 351.53 351.53 349.20 21\n1987 7 1987.542 350.11 350.11 349.39 16\n1987 8 1987.625 348.08 348.08 349.50 14\n1987 9 1987.708 346.52 346.52 349.70 23\n1987 10 1987.792 346.59 346.59 349.86 22\n1987 11 1987.875 347.96 347.96 350.07 22\n1987 12 1987.958 349.16 349.16 350.05 27\n1988 1 1988.042 350.39 350.39 350.38 24\n1988 2 1988.125 351.64 351.64 350.94 24\n1988 3 1988.208 352.40 352.40 350.87 25\n1988 4 1988.292 353.69 353.69 351.01 27\n1988 5 1988.375 354.21 354.21 351.06 28\n1988 6 1988.458 353.72 353.72 351.37 26\n1988 7 1988.542 352.69 352.69 352.02 27\n1988 8 1988.625 350.40 350.40 351.90 26\n1988 9 1988.708 348.92 348.92 352.13 27\n1988 10 1988.792 349.12 349.12 352.41 26\n1988 11 1988.875 350.20 350.20 352.34 25\n1988 12 1988.958 351.41 351.41 352.35 28\n1989 1 1989.042 352.91 352.91 352.85 27\n1989 2 1989.125 353.27 353.27 352.54 25\n1989 3 1989.208 353.96 353.96 352.47 29\n1989 4 1989.292 355.64 355.64 352.97 28\n1989 5 1989.375 355.86 355.86 352.67 28\n1989 6 1989.458 355.37 355.37 352.97 26\n1989 7 1989.542 353.99 353.99 353.30 25\n1989 8 1989.625 351.81 351.81 353.37 24\n1989 9 1989.708 350.05 350.05 353.32 23\n1989 10 1989.792 350.25 350.25 353.52 25\n1989 11 1989.875 351.49 351.49 353.65 27\n1989 12 1989.958 352.85 352.85 353.80 27\n1990 1 1990.042 353.80 353.80 353.74 25\n1990 2 1990.125 355.04 355.04 354.33 28\n1990 3 1990.208 355.73 355.73 354.24 28\n1990 4 1990.292 356.32 356.32 353.68 28\n1990 5 1990.375 357.32 357.32 354.16 29\n1990 6 1990.458 356.34 356.34 353.97 29\n1990 7 1990.542 354.84 354.84 354.19 30\n1990 8 1990.625 353.01 353.01 354.61 22\n1990 9 1990.708 351.31 351.31 354.61 27\n1990 10 1990.792 351.62 351.62 354.89 28\n1990 11 1990.875 353.07 353.07 355.13 24\n1990 12 1990.958 354.33 354.33 355.19 28\n1991 1 1991.042 354.84 354.84 354.82 28\n1991 2 1991.125 355.73 355.73 355.02 27\n1991 3 1991.208 357.23 357.23 355.68 30\n1991 4 1991.292 358.66 358.66 356.02 30\n1991 5 1991.375 359.13 359.13 356.00 29\n1991 6 1991.458 358.13 358.13 355.80 29\n1991 7 1991.542 356.19 356.19 355.59 24\n1991 8 1991.625 353.85 353.85 355.46 25\n1991 9 1991.708 352.25 352.25 355.56 27\n1991 10 1991.792 352.35 352.35 355.62 27\n1991 11 1991.875 353.81 353.81 355.80 28\n1991 12 1991.958 355.12 355.12 355.93 30\n1992 1 1992.042 356.25 356.25 356.20 31\n1992 2 1992.125 357.11 357.11 356.38 27\n1992 3 1992.208 357.86 357.86 356.27 24\n1992 4 1992.292 359.09 359.09 356.39 27\n1992 5 1992.375 359.59 359.59 356.41 26\n1992 6 1992.458 359.33 359.33 356.97 30\n1992 7 1992.542 357.01 357.01 356.44 26\n1992 8 
1992.625 354.94 354.94 356.62 23\n1992 9 1992.708 352.96 352.96 356.29 26\n1992 10 1992.792 353.32 353.32 356.63 29\n1992 11 1992.875 354.32 354.32 356.38 29\n1992 12 1992.958 355.57 355.57 356.39 31\n1993 1 1993.042 357.00 357.00 356.96 28\n1993 2 1993.125 357.31 357.31 356.44 28\n1993 3 1993.208 358.47 358.47 356.76 30\n1993 4 1993.292 359.27 359.27 356.59 25\n1993 5 1993.375 360.19 360.19 357.03 30\n1993 6 1993.458 359.52 359.52 357.12 28\n1993 7 1993.542 357.33 357.33 356.76 25\n1993 8 1993.625 355.64 355.64 357.32 27\n1993 9 1993.708 354.03 354.03 357.39 23\n1993 10 1993.792 354.12 354.12 357.49 28\n1993 11 1993.875 355.41 355.41 357.54 29\n1993 12 1993.958 356.91 356.91 357.80 30\n1994 1 1994.042 358.24 358.24 358.13 27\n1994 2 1994.125 358.92 358.92 358.09 25\n1994 3 1994.208 359.99 359.99 358.29 29\n1994 4 1994.292 361.23 361.23 358.46 28\n1994 5 1994.375 361.65 361.65 358.46 30\n1994 6 1994.458 360.81 360.81 358.44 27\n1994 7 1994.542 359.38 359.38 358.79 31\n1994 8 1994.625 357.46 357.46 359.16 24\n1994 9 1994.708 355.73 355.73 359.17 24\n1994 10 1994.792 356.08 356.08 359.49 28\n1994 11 1994.875 357.53 357.53 359.68 28\n1994 12 1994.958 358.98 358.98 359.83 28\n1995 1 1995.042 359.92 359.92 359.79 30\n1995 2 1995.125 360.86 360.86 360.05 28\n1995 3 1995.208 361.83 361.83 360.22 29\n1995 4 1995.292 363.30 363.30 360.62 29\n1995 5 1995.375 363.69 363.69 360.58 29\n1995 6 1995.458 363.19 363.19 360.84 27\n1995 7 1995.542 361.64 361.64 360.97 28\n1995 8 1995.625 359.12 359.12 360.73 25\n1995 9 1995.708 358.17 358.17 361.55 24\n1995 10 1995.792 357.99 357.99 361.37 29\n1995 11 1995.875 359.45 359.45 361.59 27\n1995 12 1995.958 360.68 360.68 361.53 30\n1996 1 1996.042 362.07 362.07 361.85 29\n1996 2 1996.125 363.24 363.24 362.35 27\n1996 3 1996.208 364.17 364.17 362.53 27\n1996 4 1996.292 364.57 364.57 361.86 29\n1996 5 1996.375 365.13 365.13 362.10 30\n1996 6 1996.458 364.92 364.92 362.69 30\n1996 7 1996.542 363.55 363.55 362.85 31\n1996 8 1996.625 361.38 361.38 362.98 28\n1996 9 1996.708 359.54 359.54 362.99 25\n1996 10 1996.792 359.58 359.58 362.97 29\n1996 11 1996.875 360.89 360.89 363.03 29\n1996 12 1996.958 362.24 362.24 363.08 29\n1997 1 1997.042 363.09 363.09 362.88 31\n1997 2 1997.125 364.03 364.03 363.22 27\n1997 3 1997.208 364.51 364.51 362.88 31\n1997 4 1997.292 366.35 366.35 363.68 21\n1997 5 1997.375 366.64 366.64 363.74 29\n1997 6 1997.458 365.59 365.59 363.41 27\n1997 7 1997.542 364.31 364.31 363.60 24\n1997 8 1997.625 362.25 362.25 363.84 25\n1997 9 1997.708 360.29 360.29 363.68 26\n1997 10 1997.792 360.82 360.82 364.12 27\n1997 11 1997.875 362.49 362.49 364.56 30\n1997 12 1997.958 364.38 364.38 365.15 30\n1998 1 1998.042 365.27 365.27 365.07 30\n1998 2 1998.125 365.98 365.98 365.17 28\n1998 3 1998.208 367.24 367.24 365.60 31\n1998 4 1998.292 368.66 368.66 366.03 29\n1998 5 1998.375 369.42 369.42 366.55 30\n1998 6 1998.458 368.99 368.99 366.80 28\n1998 7 1998.542 367.82 367.82 367.14 23\n1998 8 1998.625 365.95 365.95 367.55 30\n1998 9 1998.708 364.02 364.02 367.37 28\n1998 10 1998.792 364.40 364.40 367.67 30\n1998 11 1998.875 365.52 365.52 367.56 23\n1998 12 1998.958 367.13 367.13 367.88 26\n1999 1 1999.042 368.18 368.18 367.96 27\n1999 2 1999.125 369.07 369.07 368.26 22\n1999 3 1999.208 369.68 369.68 368.08 25\n1999 4 1999.292 370.99 370.99 368.45 29\n1999 5 1999.375 370.96 370.96 368.15 26\n1999 6 1999.458 370.30 370.30 368.13 26\n1999 7 1999.542 369.45 369.45 368.77 27\n1999 8 1999.625 366.90 366.90 368.48 25\n1999 9 1999.708 364.81 364.81 368.13 28\n1999 10 
1999.792 365.37 365.37 368.64 31\n1999 11 1999.875 366.72 366.72 368.71 28\n1999 12 1999.958 368.10 368.10 368.77 26\n2000 1 2000.042 369.29 369.29 369.08 26\n2000 2 2000.125 369.54 369.54 368.83 19\n2000 3 2000.208 370.60 370.60 369.09 30\n2000 4 2000.292 371.81 371.81 369.28 27\n2000 5 2000.375 371.58 371.58 368.71 28\n2000 6 2000.458 371.70 371.70 369.50 28\n2000 7 2000.542 369.86 369.86 369.20 25\n2000 8 2000.625 368.13 368.13 369.72 27\n2000 9 2000.708 367.00 367.00 370.30 26\n2000 10 2000.792 367.03 367.03 370.26 30\n2000 11 2000.875 368.37 368.37 370.32 25\n2000 12 2000.958 369.67 369.67 370.30 30\n2001 1 2001.042 370.59 370.59 370.43 30\n2001 2 2001.125 371.51 371.51 370.78 26\n2001 3 2001.208 372.43 372.43 370.87 26\n2001 4 2001.292 373.37 373.37 370.81 29\n2001 5 2001.375 373.85 373.85 370.94 24\n2001 6 2001.458 373.21 373.21 370.99 26\n2001 7 2001.542 371.51 371.51 370.90 25\n2001 8 2001.625 369.61 369.61 371.22 27\n2001 9 2001.708 368.18 368.18 371.44 28\n2001 10 2001.792 368.45 368.45 371.69 31\n2001 11 2001.875 369.76 369.76 371.74 24\n2001 12 2001.958 371.24 371.24 371.92 29\n2002 1 2002.042 372.53 372.53 372.30 28\n2002 2 2002.125 373.20 373.20 372.33 28\n2002 3 2002.208 374.12 374.12 372.44 24\n2002 4 2002.292 375.02 375.02 372.37 29\n2002 5 2002.375 375.76 375.76 372.81 29\n2002 6 2002.458 375.52 375.52 373.30 28\n2002 7 2002.542 374.01 374.01 373.42 26\n2002 8 2002.625 371.85 371.85 373.52 28\n2002 9 2002.708 370.75 370.75 374.11 23\n2002 10 2002.792 370.55 370.55 373.88 31\n2002 11 2002.875 372.25 372.25 374.34 29\n2002 12 2002.958 373.79 373.79 374.54 31\n2003 1 2003.042 374.88 374.88 374.63 30\n2003 2 2003.125 375.64 375.64 374.77 27\n2003 3 2003.208 376.45 376.45 374.80 28\n2003 4 2003.292 377.73 377.73 375.06 27\n2003 5 2003.375 378.60 378.60 375.55 30\n2003 6 2003.458 378.28 378.28 376.04 25\n2003 7 2003.542 376.70 376.70 376.19 29\n2003 8 2003.625 374.38 374.38 376.08 23\n2003 9 2003.708 373.17 373.17 376.48 25\n2003 10 2003.792 373.15 373.15 376.47 30\n2003 11 2003.875 374.66 374.66 376.81 26\n2003 12 2003.958 375.99 375.99 376.75 27\n2004 1 2004.042 377.00 377.00 376.78 30\n2004 2 2004.125 377.87 377.87 377.02 29\n2004 3 2004.208 378.88 378.88 377.23 28\n2004 4 2004.292 380.35 380.35 377.62 26\n2004 5 2004.375 380.62 380.62 377.48 28\n2004 6 2004.458 379.69 379.69 377.39 21\n2004 7 2004.542 377.47 377.47 376.94 25\n2004 8 2004.625 376.01 376.01 377.74 16\n2004 9 2004.708 374.25 374.25 377.62 15\n2004 10 2004.792 374.46 374.46 377.82 29\n2004 11 2004.875 376.16 376.16 378.31 29\n2004 12 2004.958 377.51 377.51 378.32 30\n2005 1 2005.042 378.46 378.46 378.21 31\n2005 2 2005.125 379.73 379.73 378.93 24\n2005 3 2005.208 380.77 380.77 379.27 26\n2005 4 2005.292 382.29 382.29 379.65 26\n2005 5 2005.375 382.45 382.45 379.31 31\n2005 6 2005.458 382.21 382.21 379.88 28\n2005 7 2005.542 380.74 380.74 380.18 29\n2005 8 2005.625 378.74 378.74 380.42 26\n2005 9 2005.708 376.70 376.70 380.01 26\n2005 10 2005.792 377.00 377.00 380.31 14\n2005 11 2005.875 378.35 378.35 380.50 23\n2005 12 2005.958 380.11 380.11 380.90 26\n2006 1 2006.042 381.38 381.38 381.14 24\n2006 2 2006.125 382.19 382.19 381.39 25\n2006 3 2006.208 382.67 382.67 381.14 30\n2006 4 2006.292 384.61 384.61 381.91 25\n2006 5 2006.375 385.03 385.03 381.87 24\n2006 6 2006.458 384.05 384.05 381.75 28\n2006 7 2006.542 382.46 382.46 381.91 24\n2006 8 2006.625 380.41 380.41 382.08 27\n2006 9 2006.708 378.85 378.85 382.16 27\n2006 10 2006.792 379.13 379.13 382.46 23\n2006 11 2006.875 380.15 380.15 382.33 29\n2006 12 
2006.958 381.82 381.82 382.64 27\n2007 1 2007.042 382.89 382.89 382.67 24\n2007 2 2007.125 383.90 383.90 383.01 21\n2007 3 2007.208 384.58 384.58 382.94 26\n2007 4 2007.292 386.50 386.50 383.71 26\n2007 5 2007.375 386.56 386.56 383.34 29\n2007 6 2007.458 386.10 386.10 383.84 26\n2007 7 2007.542 384.50 384.50 384.02 27\n2007 8 2007.625 381.99 381.99 383.70 22\n2007 9 2007.708 380.96 380.96 384.32 21\n2007 10 2007.792 381.12 381.12 384.47 29\n2007 11 2007.875 382.45 382.45 384.65 30\n2007 12 2007.958 383.95 383.95 384.83 21\n2008 1 2008.042 385.52 385.52 385.28 31\n2008 2 2008.125 385.82 385.82 384.96 26\n2008 3 2008.208 386.03 386.03 384.48 30\n2008 4 2008.292 387.21 387.21 384.58 24\n2008 5 2008.375 388.54 388.54 385.45 25\n2008 6 2008.458 387.76 387.76 385.46 23\n2008 7 2008.542 386.36 386.36 385.79 10\n2008 8 2008.625 384.09 384.09 385.75 25\n2008 9 2008.708 383.18 383.18 386.46 27\n2008 10 2008.792 382.99 382.99 386.27 23\n2008 11 2008.875 384.19 384.19 386.36 28\n2008 12 2008.958 385.56 385.56 386.41 29\n2009 1 2009.042 386.94 386.94 386.63 30\n2009 2 2009.125 387.48 387.48 386.59 26\n2009 3 2009.208 388.82 388.82 387.32 28\n2009 4 2009.292 389.55 389.55 386.92 29\n2009 5 2009.375 390.14 390.14 387.02 30\n2009 6 2009.458 389.48 389.48 387.24 29\n2009 7 2009.542 388.03 388.03 387.55 22\n2009 8 2009.625 386.11 386.11 387.80 27\n2009 9 2009.708 384.74 384.74 388.01 28\n2009 10 2009.792 384.43 384.43 387.68 30\n2009 11 2009.875 386.02 386.02 388.16 30\n2009 12 2009.958 387.42 387.42 388.23 20\n2010 1 2010.042 388.71 388.71 388.41 30\n2010 2 2010.125 390.20 390.20 389.26 20\n2010 3 2010.208 391.17 391.17 389.65 25\n2010 4 2010.292 392.46 392.46 389.89 26\n2010 5 2010.375 393.00 393.00 389.88 28\n2010 6 2010.458 392.15 392.15 389.89 28\n2010 7 2010.542 390.20 390.20 389.72 29\n2010 8 2010.625 388.35 388.35 390.01 26\n2010 9 2010.708 386.85 386.85 390.14 29\n2010 10 2010.792 387.24 387.24 390.53 31\n2010 11 2010.875 388.67 388.67 390.79 28\n2010 12 2010.958 389.79 389.79 390.60 29\n2011 1 2011.042 391.33 391.33 391.03 29\n2011 2 2011.125 391.86 391.86 390.94 28\n2011 3 2011.208 392.60 392.60 391.07 29\n2011 4 2011.292 393.25 393.25 390.63 28\n2011 5 2011.375 394.19 394.19 391.02 30\n2011 6 2011.458 393.74 393.74 391.44 28\n2011 7 2011.542 392.51 392.51 392.04 26\n2011 8 2011.625 390.13 390.13 391.83 27\n2011 9 2011.708 389.08 389.08 392.40 26\n2011 10 2011.792 389.00 389.00 392.33 31\n2011 11 2011.875 390.28 390.28 392.44 28\n2011 12 2011.958 391.86 391.86 392.66 28\n2012 1 2012.042 393.12 393.12 392.89 30\n2012 2 2012.125 393.86 393.86 393.04 26\n2012 3 2012.208 394.40 394.40 392.80 30\n2012 4 2012.292 396.18 396.18 393.43 29\n2012 5 2012.375 396.74 396.74 393.54 30\n2012 6 2012.458 395.71 395.71 393.45 28\n2012 7 2012.542 394.36 394.36 393.92 26\n2012 8 2012.625 392.39 392.39 394.17 30\n2012 9 2012.708 391.11 391.11 394.54 27\n2012 10 2012.792 391.05 391.05 394.41 28\n2012 11 2012.875 392.98 392.98 395.02 29\n2012 12 2012.958 394.34 394.34 395.04 29\n2013 1 2013.042 395.55 395.55 395.40 28\n2013 2 2013.125 396.80 396.80 396.01 25\n2013 3 2013.208 397.43 397.43 395.84 30\n2013 4 2013.292 398.41 398.41 395.53 22\n2013 5 2013.375 399.78 399.78 396.40 28\n2013 6 2013.458 398.61 398.61 396.28 26\n2013 7 2013.542 397.32 397.32 396.92 21\n2013 8 2013.625 395.20 395.20 397.08 27\n2013 9 2013.708 393.45 393.45 396.99 27\n2013 10 2013.792 393.70 393.70 397.04 28\n2013 11 2013.875 395.16 395.16 397.15 30\n2013 12 2013.958 396.84 396.84 397.59 30\n2014 1 2014.042 397.85 397.85 397.55 31\n2014 2 
2014.125 398.01 398.01 397.21 26\n2014 3 2014.208 399.77 399.77 398.24 24\n2014 4 2014.292 401.38 401.38 398.49 28\n2014 5 2014.375 401.78 401.78 398.38 22\n2014 6 2014.458 401.25 401.25 398.93 28\n2014 7 2014.542 399.10 399.10 398.67 25\n2014 8 2014.625 397.03 397.03 398.92 21\n2014 9 2014.708 395.38 395.38 398.97 21\n2014 10 2014.792 396.03 396.03 399.44 24\n2014 11 2014.875 397.28 397.28 399.36 27\n2014 12 2014.958 398.91 398.91 399.64 29\n2015 1 2015.042 399.98 399.98 399.73 30\n2015 2 2015.125 400.28 400.28 399.52 27\n2015 3 2015.208 401.54 401.54 400.03 24\n2015 4 2015.292 403.28 403.28 400.38 27\n2015 5 2015.375 403.96 403.96 400.51 30\n2015 6 2015.458 402.80 402.80 400.48 28\n2015 7 2015.542 401.31 401.31 400.93 23\n2015 8 2015.625 398.93 398.93 400.85 28\n2015 9 2015.708 397.63 397.63 401.26 25\n2015 10 2015.792 398.29 398.29 401.69 28\n2015 11 2015.875 400.16 400.16 402.11 27\n2015 12 2015.958 401.85 401.85 402.51 30\n2016 1 2016.042 402.56 402.56 402.26 27\n2016 2 2016.125 404.12 404.12 403.31 25\n2016 3 2016.208 404.87 404.87 403.39 28\n2016 4 2016.292 407.45 407.45 404.67 25\n2016 5 2016.375 407.72 407.72 404.35 29\n2016 6 2016.458 406.83 406.83 404.50 26\n2016 7 2016.542 404.41 404.41 404.00 28\n2016 8 2016.625 402.27 402.27 404.15 23\n2016 9 2016.708 401.05 401.05 404.61 24\n2016 10 2016.792 401.59 401.59 404.96 29\n2016 11 2016.875 403.55 403.55 405.54 27\n2016 12 2016.958 404.45 404.45 405.13 30\n2017 1 2017.042 406.17 406.17 405.87 26\n2017 2 2017.125 406.46 406.46 405.64 26\n2017 3 2017.208 407.22 407.22 405.74 23\n2017 4 2017.292 409.04 409.04 406.26 25\n2017 5 2017.375 409.69 409.69 406.32 27\n2017 6 2017.458 408.88 408.88 406.55 26\n2017 7 2017.542 407.12 407.12 406.72 28\n2017 8 2017.625 405.13 405.13 407.00 29\n2017 9 2017.708 403.37 403.37 406.93 26\n2017 10 2017.792 403.63 403.63 407.00 27\n2017 11 2017.875 405.12 405.12 407.11 26\n2017 12 2017.958 406.81 406.81 407.49 31\n2018 1 2018.042 407.96 407.96 407.66 29\n2018 2 2018.125 408.32 408.32 407.51 28\n2018 3 2018.208 409.41 409.41 407.93 30\n2018 4 2018.292 410.24 410.24 407.46 21\n2018 5 2018.375 411.24 411.24 407.87 24\n2018 6 2018.458 410.79 410.79 408.46 29\n2018 7 2018.542 408.71 408.71 408.30 27\n2018 8 2018.625 406.99 406.99 408.87 30\n2018 9 2018.708 405.51 405.51 409.07 29\n2018 10 2018.792 406.00 406.00 409.37 30\n2018 11 2018.875 408.02 408.02 410.01 24\n2018 12 2018.958 409.07 409.07 409.75 30\n2019 1 2019.042 410.83 410.83 410.53 27\n2019 2 2019.125 411.75 411.75 410.93 27\n2019 3 2019.208 411.97 411.97 410.49 28\n2019 4 2019.292 413.32 413.32 410.53 26\n\n"
],
[
"## Import the data in table format:\ndf = pd.read_table(\"ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt\", \n comment = \"#\", delim_whitespace = True,\n names = [\"year\", \"month\", \"decimal_date\", \"average\", \"interpolated\", \"trend\", \"days\"],\n na_values = [-99.99, -1])\n## show a preview of what our data looks like\ndf",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"plt.plot(df[\"year\"], df[[\"average\",\"trend\"]], lw=1)\nplt.title('Trend Value')\nplt.xlabel('Date')\nplt.ylabel('Value')\nplt.legend(['average','trend'])",
"_____no_output_____"
],
[
"plt.plot(df[\"year\"], df[\"average\"], lw=1)\nplt.title('Average CO2')\nplt.xlabel('Date')\nplt.ylabel('CO2')",
"_____no_output_____"
]
],
[
[
"## Forecast CO2",
"_____no_output_____"
]
],
[
[
"from fbprophet import Prophet",
"_____no_output_____"
],
[
"df1 = df.copy()",
"_____no_output_____"
],
[
"df1['Date'] = pd.to_datetime(df.year.astype(str) + '-' + df.month.astype(str))",
"_____no_output_____"
],
[
"df1 = df1.rename(index=str, columns={\"Date\": \"ds\", \"trend\": \"y\"})",
"_____no_output_____"
],
[
"df1.head()",
"_____no_output_____"
],
[
"df1 = df1.drop(['year','month','decimal_date','average','interpolated','days'], axis=1)",
"_____no_output_____"
],
[
"df1 = df1[['ds', 'y']]",
"_____no_output_____"
],
[
"# Normalize Data [0,1]\ndf1['y'] = (df1['y']-df1['y'].min()) / (df1['y'].max()-df1['y'].min())",
"_____no_output_____"
],
[
"df1.head()",
"_____no_output_____"
],
[
"df2 = df1.copy()",
"_____no_output_____"
],
[
"model = Prophet(yearly_seasonality=True) #instantiate Prophet\nmodel.fit(df1) #fit the model with your dataframe",
"INFO:fbprophet.forecaster:Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this.\nINFO:fbprophet.forecaster:Disabling daily seasonality. Run prophet with daily_seasonality=True to override this.\n"
],
[
"future_data = model.make_future_dataframe(periods=365)",
"_____no_output_____"
],
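    [
      "# Added sketch (not part of the original notebook): the observations are monthly, so a\n# month-start future frame can be a closer fit than 365 daily steps. 'MS' is the pandas\n# month-start frequency alias and the variable name future_monthly is our own choice.\nfuture_monthly = model.make_future_dataframe(periods=24, freq='MS')\nfuture_monthly.tail()",
      "_____no_output_____"
    ],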
[
"forecast_data = model.predict(future_data)",
"_____no_output_____"
],
[
"forecast_data[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()",
"_____no_output_____"
],
[
"plt.plot(forecast_data[['yhat','yhat_upper','yhat_lower']])\nplt.legend(labels=['yhat','yhat_upper','yhat_lower'])",
"_____no_output_____"
],
[
"model.plot(forecast_data)",
"_____no_output_____"
],
[
"model.plot_components(forecast_data)",
"_____no_output_____"
],
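    [
      "# Added sketch (not part of the original notebook): fbprophet ships a diagnostics module\n# for rolling-origin cross-validation of the fitted model. The initial/period/horizon\n# strings below are illustrative choices, not values taken from the original analysis.\nfrom fbprophet.diagnostics import cross_validation, performance_metrics\ndf_cv = cross_validation(model, initial='7300 days', period='365 days', horizon='365 days')\nperformance_metrics(df_cv).head()",
      "_____no_output_____"
    ],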
[
"forecast_data_orig = forecast_data # make sure we save the original forecast data\nforecast_data_orig['yhat'] = np.exp(forecast_data_orig['yhat'])\nforecast_data_orig['yhat_lower'] = np.exp(forecast_data_orig['yhat_lower'])\nforecast_data_orig['yhat_upper'] = np.exp(forecast_data_orig['yhat_upper'])",
"_____no_output_____"
],
[
"model.plot(forecast_data_orig)",
"_____no_output_____"
],
[
"df2",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bd912358931fc74ab2b7976e39c10a47507279 | 35,808 | ipynb | Jupyter Notebook | Data Science With Python/13 - Lesson - Model Selection.ipynb | shreejitverma/Data-Scientist | 03c06936e957f93182bb18362b01383e5775ffb1 | [
"MIT"
] | 2 | 2022-03-12T04:53:03.000Z | 2022-03-27T12:39:21.000Z | Data Science With Python/13 - Lesson - Model Selection.ipynb | shreejitverma/Data-Scientist | 03c06936e957f93182bb18362b01383e5775ffb1 | [
"MIT"
] | null | null | null | Data Science With Python/13 - Lesson - Model Selection.ipynb | shreejitverma/Data-Scientist | 03c06936e957f93182bb18362b01383e5775ffb1 | [
"MIT"
] | 2 | 2022-03-12T04:52:21.000Z | 2022-03-27T12:45:32.000Z | 38.921739 | 7,476 | 0.507484 | [
[
[
"# Model Selection",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## Model Selection\n- The process of selecting the model among a collection of candidates machine learning models\n\n### Problem type\n- What kind of problem are you looking into?\n - **Classification**: *Predict labels on data with predefined classes*\n - Supervised Machine Learning\n - **Clustering**: *Identify similarieties between objects and group them in clusters*\n - Unsupervised Machine Learning\n - **Regression**: *Predict continuous values*\n - Supervised Machine Learning\n- Resource: [Sklearn cheat sheet](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)",
"_____no_output_____"
],
[
"### What is the \"best\" model?\n- All models have some **predictive error**\n- We should seek a model that is *good enough*",
"_____no_output_____"
],
[
"### Model Selection Techniques\n- **Probabilistic Measures**: Scoring by performance and complexity of model.\n- **Resampling Methods**: Splitting in sub-train and sub-test datasets and scoring by mean values of repeated runs.",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"data = pd.read_parquet('files/house_sales.parquet')\ndata.head()",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
],
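    [
      "# Added check: SalePrice is strongly right-skewed, which matters for how we bin it into\n# categories further below.\ndata['SalePrice'].skew()",
      "_____no_output_____"
    ],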
[
"data['SalePrice'].plot.hist(bins=20)",
"_____no_output_____"
]
],
[
[
"### Converting to Categories\n- [`cut()`](https://pandas.pydata.org/docs/reference/api/pandas.cut.html) Bin values into discrete intervals.\n - Data in bins based on data distribution.\n- [`qcut()`](https://pandas.pydata.org/docs/reference/api/pandas.qcut.html) Quantile-based discretization function.\n - Data in equal size bins",
"_____no_output_____"
],
[
"#### Invstigation\n- Figure out why `cut` is not suitable for 3 bins here.",
"_____no_output_____"
]
],
[
[
"data['Target'] = pd.cut(data['SalePrice'], bins=3, labels=[1, 2, 3])\ndata['Target'].value_counts()/len(data)",
"_____no_output_____"
],
[
"data['Target'] = pd.qcut(data['SalePrice'], q=3, labels=[1, 2, 3])\ndata['Target'].value_counts()/len(data)",
"_____no_output_____"
],
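    [
      "# Added illustration: compare the bin edges chosen by cut (equal-width intervals) and\n# qcut (equal-frequency intervals). With a right-skewed target, equal-width bins leave\n# very few houses in the upper bins, which is why qcut is used above.\n_, cut_edges = pd.cut(data['SalePrice'], bins=3, retbins=True)\n_, qcut_edges = pd.qcut(data['SalePrice'], q=3, retbins=True)\nprint('cut edges: ', cut_edges)\nprint('qcut edges:', qcut_edges)",
      "_____no_output_____"
    ],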
[
"from sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC, LinearSVC\nfrom sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"X = data.drop(['SalePrice', 'Target'], axis=1).fillna(-1)\ny = data['Target']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=42)",
"_____no_output_____"
],
[
"svc = LinearSVC()\nsvc.fit(X_train, y_train)\ny_pred = svc.predict(X_test)\n\naccuracy_score(y_test, y_pred)",
"/Users/rune/opt/anaconda3/lib/python3.8/site-packages/sklearn/svm/_base.py:1206: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.\n warnings.warn(\n"
],
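    [
      "# Added sketch of a resampling method: instead of scoring on a single hold-out split,\n# k-fold cross-validation averages the score over several splits (see the Model Selection\n# Techniques notes above). Shown here for the same LinearSVC; cv=5 is an arbitrary choice.\nfrom sklearn.model_selection import cross_val_score\nscores = cross_val_score(LinearSVC(), X, y, cv=5)\nscores.mean(), scores.std()",
      "_____no_output_____"
    ],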
[
"from sklearn.neighbors import KNeighborsClassifier",
"_____no_output_____"
],
[
"neigh = KNeighborsClassifier()\nneigh.fit(X_train, y_train)\ny_pred = neigh.predict(X_test)\n\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"svc = SVC(kernel='rbf')\nsvc.fit(X_train, y_train)\ny_pred = svc.predict(X_test)\n\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"svc = SVC(kernel='sigmoid')\nsvc.fit(X_train, y_train)\ny_pred = svc.predict(X_test)\n\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
],
[
"svc = SVC(kernel='poly', degree=5)\nsvc.fit(X_train, y_train)\ny_pred = svc.predict(X_test)\n\naccuracy_score(y_test, y_pred)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bd95dd5d496d6786a59e7e5ecc302b922cc9b0 | 32,818 | ipynb | Jupyter Notebook | MODEL - 1 (RNN)/Raw/Post_Processing_RNN.ipynb | rrustagi20/Deep-learning---Automatic-Music-Transcription | 546b37ff7c4e2463933ac010bd976bac1575c27a | [
"MIT"
] | 4 | 2021-07-13T15:20:36.000Z | 2022-03-21T12:13:16.000Z | MODEL - 1 (RNN)/Raw/Post_Processing_RNN.ipynb | rrustagi20/Deep-learning---Automatic-Music-Transcription | 546b37ff7c4e2463933ac010bd976bac1575c27a | [
"MIT"
] | 13 | 2021-07-14T11:14:01.000Z | 2021-07-30T11:52:01.000Z | MODEL - 1 (RNN)/Raw/Post_Processing_RNN.ipynb | rrustagi20/Deep-learning---Automatic-Music-Transcription | 546b37ff7c4e2463933ac010bd976bac1575c27a | [
"MIT"
] | 14 | 2021-07-13T15:02:09.000Z | 2021-07-26T11:35:19.000Z | 75.617512 | 9,702 | 0.74063 | [
[
[
"import numpy as np\narr = np.load('MAPS.npy')\nprint(arr)\nprint(np.shape(arr))",
"[[False False False ... False False False]\n [False False False ... False False False]\n [False False False ... False False False]\n ...\n [False False False ... False False False]\n [False False False ... False False False]\n [False False False ... False False False]]\n(20426, 88)\n"
],
[
"arr2 = np.empty((20426, 88), dtype = int) \nfor i in range(arr.shape[0]):\n for j in range(arr.shape[1]):\n if arr[i,j]==False:\n arr2[i,j]=int(0)\n int(arr2[i,j])\n elif arr[i,j]==True:\n arr2[i,j]=int(1)\n\n \nprint(arr2)",
"[[0 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 0]\n ...\n [0 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 0]\n [0 0 0 ... 0 0 0]]\n"
],
[
"!pip install midiutil",
"Requirement already satisfied: midiutil in /usr/local/lib/python3.7/dist-packages (1.2.1)\n"
],
[
"from midiutil.MidiFile import MIDIFile\n\nmf = MIDIFile(1)\ntrack = 0 \ntime = 0\ndelta = 0.000005\nmf.addTrackName(track, time, \"Output\")\nmf.addTempo(track, time, 120)\n\nchannel = 0\nvolume = 100\nduration = 0.01 \n\nfor i in range(arr2.shape[0]):\n time=time + i*delta\n for j in range(arr2.shape[1]):\n if arr[i,j] == 1:\n pitch = j\n mf.addNote(track, channel, pitch, time, duration, volume)\n\nwith open(\"output.mid\", 'wb') as outf:\n mf.writeFile(outf)",
"_____no_output_____"
],
[
"!pip install pretty_midi",
"Requirement already satisfied: pretty_midi in /usr/local/lib/python3.7/dist-packages (0.2.9)\nRequirement already satisfied: numpy>=1.7.0 in /usr/local/lib/python3.7/dist-packages (from pretty_midi) (1.19.5)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from pretty_midi) (1.15.0)\nRequirement already satisfied: mido>=1.1.16 in /usr/local/lib/python3.7/dist-packages (from pretty_midi) (1.2.10)\n"
],
[
"import pretty_midi\nimport pandas as pd\npath = \"output.mid\"\nmidi_data = pretty_midi.PrettyMIDI(path)\nmidi_list = []\n\npretty_midi.pretty_midi.MAX_TICK = 1e10\nmidi_data.tick_to_time(14325216)\n\nfor instrument in midi_data.instruments:\n for note in instrument.notes:\n start = note.start\n end = note.end\n pitch = note.pitch\n velocity = note.velocity\n midi_list.append([start, end, pitch, velocity, instrument.name])\n \nmidi_list = sorted(midi_list, key=lambda x: (x[0], x[2]))\n\ndf = pd.DataFrame(midi_list, columns=['Start', 'End', 'Pitch', 'Velocity', 'Instrument'])\n\nprint(df)",
" Start End Pitch Velocity Instrument\n0 0.002083 0.002604 41 100 Output\n1 0.002604 0.003125 41 100 Output\n2 0.003125 0.003646 41 100 Output\n3 0.003646 0.004167 41 100 Output\n4 0.004167 0.004687 41 100 Output\n... ... ... ... ... ...\n108929 520.888542 520.893229 34 100 Output\n108930 520.888542 520.893229 41 100 Output\n108931 520.888542 520.893229 49 100 Output\n108932 520.888542 520.893229 58 100 Output\n108933 520.888542 520.893229 65 100 Output\n\n[108934 rows x 5 columns]\n"
],
[
"import matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Rectangle\nimport numpy as np\n",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n\ni = 0\nwhile(i<108934) :\n start = float(midi_list[i][0])\n pitch = float(midi_list[i][2])\n duration = float(midi_list[i][1]-midi_list[i][0])\n # if my_reader[i][4]=='Right Hand' :\n # color1 = 'royalblue'\n # else :\n # color1 = 'darkorange'\n rect = matplotlib.patches.Rectangle((start, pitch),duration, 1, ec='black', linewidth=10)\n ax.add_patch(rect)\n i+=1\n \n \n# plt.xlabel(\"Time (seconds)\", fontsize=150)\n# plt.ylabel(\"Pitch\", fontsize=150)\n\n\nplt.xlim([0, 550])\nplt.ylim([0, 88])\n\nplt.grid(color='grey',linewidth=1)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"ACTUAL",
"_____no_output_____"
]
],
[
[
"import pretty_midi\nimport pandas as pd\npath = \"MAPS.mid\"\nmidi_data = pretty_midi.PrettyMIDI(path)\nmidi_list = []\n\npretty_midi.pretty_midi.MAX_TICK = 1e10\nmidi_data.tick_to_time(14325216)\n\nfor instrument in midi_data.instruments:\n for note in instrument.notes:\n start = note.start\n end = note.end\n pitch = note.pitch\n velocity = note.velocity\n midi_list.append([start, end, pitch, velocity, instrument.name])\n \nmidi_list = sorted(midi_list, key=lambda x: (x[0], x[2]))\n\ndf = pd.DataFrame(midi_list, columns=['Start', 'End', 'Pitch', 'Velocity', 'Instrument'])\n\nprint(df)",
" Start End Pitch Velocity Instrument\n0 0.521985 6.701973 62 74 \n1 1.379984 2.239995 61 69 \n2 2.240987 6.702979 60 64 \n3 3.114985 6.702979 57 61 \n4 4.021975 6.705983 54 57 \n... ... ... ... ... ...\n2196 235.487309 236.997300 86 102 \n2197 235.488301 236.998306 55 79 \n2198 235.488301 236.997300 70 93 \n2199 235.488301 236.997300 79 90 \n2200 235.491305 237.000304 62 79 \n\n[2201 rows x 5 columns]\n"
],
[
"import matplotlib\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Rectangle\nimport numpy as np",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\n\ni = 0\nwhile(i<2200) :\n start = float(midi_list[i][0])\n pitch = float(midi_list[i][2])\n duration = float(midi_list[i][1]-midi_list[i][0])\n # if my_reader[i][4]=='Right Hand' :\n # color1 = 'royalblue'\n # else :\n # color1 = 'darkorange'\n rect = matplotlib.patches.Rectangle((start, pitch),duration, 1, ec='black', linewidth=10)\n ax.add_patch(rect)\n i+=1\n \n \n# plt.xlabel(\"Time (seconds)\", fontsize=150)\n# plt.ylabel(\"Pitch\", fontsize=150)\n\n\nplt.xlim([0, 240])\nplt.ylim([0, 88])\n\nplt.grid(color='grey',linewidth=1)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0bd989da8d38d590a349bca3a591eb2503f581c | 72,559 | ipynb | Jupyter Notebook | notebooks/Text Data.ipynb | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 2 | 2020-01-24T20:20:02.000Z | 2021-09-25T03:32:17.000Z | notebooks/Text Data.ipynb | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 1 | 2020-11-16T17:08:08.000Z | 2020-11-16T17:08:08.000Z | notebooks/Text Data.ipynb | rlbellaire/ActT | b6e936e5037c5f92ad1c281e2bf3700bf91aea42 | [
"BSD-3-Clause"
] | 1 | 2020-11-16T16:58:39.000Z | 2020-11-16T16:58:39.000Z | 123.609881 | 50,704 | 0.849185 | [
[
[
"# Necessary imports\nimport warnings\nwarnings.filterwarnings('ignore')\nimport re\nimport os\nimport numpy as np\nimport scipy as sp\nfrom scipy.sparse import csr_matrix\nfrom sklearn import datasets\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\nfrom sklearn.naive_bayes import MultinomialNB\nfrom active_tester import ActiveTester\nfrom active_tester.estimators.learned import Learned\nfrom active_tester.estimators.naive import Naive\nfrom active_tester.query_strategy.noisy_label_uncertainty import LabelUncertainty\nfrom active_tester.query_strategy.classifier_uncertainty import ClassifierUncertainty\nfrom active_tester.query_strategy.MCM import MCM\nfrom active_tester.query_strategy.random import Random\nfrom sklearn.metrics import accuracy_score\nfrom active_tester.label_estimation.methods import oracle_one_label, no_oracle, oracle_multiple_labels",
"_____no_output_____"
]
],
[
[
"# Active Testing Using Text Data ",
"_____no_output_____"
],
[
"This is an example of using the ActT library with a text dataset. To walk through this example, download __[a sentiment analysis dataset](https://archive.ics.uci.edu/ml/machine-learning-databases/00331/)__ from the UCI machine learning repository and place the contents in the text_data directory. Additionally, this tutorial follows Scikit Learn's steps on __[Working with Text Data](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html)__. Before we employ ActT, using the example dataset, we must preprocess the data to create textfiles for each sentence and to divide the dataset into a train and test set.",
"_____no_output_____"
],
[
"## Data Processing\n\nUsing the preprocessing scripts below, we will combine all of the files into one file containing all 3000 sentence. Then, we will separate the sentences into a test and training set containing the individual sentences as files, then place them in their respective class folders. \n\nAfter the dataset is created, set `create_datasets` to `False` to avoid creating duplicate files.",
"_____no_output_____"
]
],
[
[
"create_datasets = False\n# get rid of temporary files inserted to preserve directory structure\nif create_datasets:\n myfile = 'text_data/temp.txt'\n if os.path.isfile(myfile):\n os.remove(myfile)\n myfile = 'train/positive/temp.txt'\n if os.path.isfile(myfile):\n os.remove(myfile)\n myfile = 'train/negative/temp.txt'\n if os.path.isfile(myfile):\n os.remove(myfile)\n myfile = 'test/positive/temp.txt'\n if os.path.isfile(myfile):\n os.remove(myfile)\n myfile = 'test/negative/temp.txt'\n if os.path.isfile(myfile):\n os.remove(myfile)",
"_____no_output_____"
],
[
"if create_datasets:\n #Combine all sentence files into one file\n try:\n sentences = open('sentences.txt', 'a')\n #Renamed files with dashes\n filenames = ['text_data/imdb_labelled.txt', \n 'text_data/amazon_cells_labelled.txt', \n 'text_data/yelp_labelled.txt']\n for filename in filenames:\n print(filename)\n with open(filename) as file:\n for line in file:\n line = line.rstrip()\n sentences.write(line + '\\n')\n except Exception:\n print('File not found')",
"_____no_output_____"
],
[
"if create_datasets:\n #Separate sentences into a test and training set\n #Write each sentence to a file and place that file in its respective class folder\n filename = 'sentences.txt'\n with open(filename) as file:\n count = 1\n for line in file:\n if count <= 2000:\n line = line.rstrip()\n if line[-1:] == '0':\n input_file = open('train/negative/inputfile-' + str(count) + '.txt', 'a')\n line = line[:-1]\n line = line.rstrip()\n input_file.write(line)\n if line[-1:] == '1':\n input_file = open('train/positive/inputfile-' + str(count) + '.txt', 'a')\n line = line[:-1]\n line = line.rstrip()\n input_file.write(line)\n if count > 2000:\n line = line.rstrip()\n if line[-1:] == '0':\n input_file = open('test/negative/inputfile-' + str(count) + '.txt', 'a')\n line = line[:-1]\n line = line.rstrip()\n input_file.write(line)\n if line[-1:] == '1':\n input_file = open('test/positive/inputfile-' + str(count) + '.txt', 'a')\n line = line[:-1]\n line = line.rstrip()\n input_file.write(line)\n count = count + 1",
"_____no_output_____"
]
],
[
[
"## Loading Data and Training a Model",
"_____no_output_____"
],
[
"Below, we load the training data, create term frequency features, and then fit a classifier to the data.",
"_____no_output_____"
]
],
[
[
"#Load training data from files\ncategories = ['positive', 'negative']\nsent_data = datasets.load_files(container_path='train', categories=categories, shuffle=True)\nX_train, y_train = sent_data.data, sent_data.target\n\n#Extract features\ncount_vect = CountVectorizer()\nX_train_counts = count_vect.fit_transform(X_train)\n\n#Transform occurance matrix to a frequency matrix\ntf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)\nX_train_tf = tf_transformer.transform(X_train_counts)\n\n#Build a classifier\nclf = MultinomialNB().fit(X_train_tf, sent_data.target)",
"_____no_output_____"
]
],
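  [
    [
      "# Added check: the shape of the count matrix shows how many training documents and how\n# many distinct vocabulary terms the CountVectorizer produced.\nprint(X_train_counts.shape)\nprint(len(count_vect.vocabulary_))",
      "_____no_output_____"
    ]
  ],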
[
[
"Now, we transform the test dataset to use the same features, apply the classifier to the test dataset and compute the classifier's true accuracy. ",
"_____no_output_____"
]
],
[
[
"#Load the test data from files\nsent_data_test = datasets.load_files(container_path='test', categories=categories, shuffle=False)\nX_test, y_test = sent_data_test.data, sent_data_test.target\n\n#Extract features\nX_test_counts = count_vect.transform(sent_data_test.data)\n\n#Transform occurance matrix to a frequency matrix\nX_test_tf = tf_transformer.transform(X_test_counts)\n\n#Compute the true accuracy of the classifier\nlabel_predictions = clf.predict(X_test_tf)\ntrue_accuracy = accuracy_score(y_test, label_predictions)",
"_____no_output_____"
]
],
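  [
    [
      "# Added sketch (not part of the original tutorial): the vectorizer, transformer and\n# classifier above can also be chained in a single sklearn Pipeline, which keeps the\n# train/test preprocessing consistent automatically.\nfrom sklearn.pipeline import Pipeline\ntext_clf = Pipeline([('counts', CountVectorizer()),\n                     ('tf', TfidfTransformer(use_idf=False)),\n                     ('clf', MultinomialNB())])\ntext_clf.fit(X_train, y_train)\naccuracy_score(y_test, text_clf.predict(X_test))",
      "_____no_output_____"
    ]
  ],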
[
[
"## Using Active Tester\n\nThe following code creates a set of noisy labels, reshapes the true labels, and converts the test features to a dense array.",
"_____no_output_____"
]
],
[
[
"#Initialize key variables: X, Y_noisy, and vetted\nY_noisy = []\nnoisy_label_accuracy = 0.75\nfor i in range(len(y_test)):\n if np.random.rand() < noisy_label_accuracy:\n # noisy label is correct\n Y_noisy.append(y_test[i])\n else:\n # noisy label is incorrect\n Y_noisy.append(np.random.choice(np.delete(np.arange(2),y_test[i])))\nY_noisy = np.asarray(Y_noisy, dtype=int)\n#Note that if your y_noisy array is shape (L,), you will need to reshape it to be (L,1)\nY_noisy = np.reshape(Y_noisy,(len(Y_noisy),1))\n\nY_ground_truth = np.reshape(y_test, (len(y_test), 1))\n\n#Note that if using sklearn's transformer, you may recieve an error about a sparse\n#matrix. Using scipy's sparse csr_matrix.toarray() method can resolve this issue\nX = csr_matrix.toarray(X_test_tf)",
"_____no_output_____"
]
],
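  [
    [
      "# Added sanity check: the empirical agreement between the noisy labels and the ground\n# truth should be close to the 0.75 used to generate them.\naccuracy_score(y_test, Y_noisy.ravel())",
      "_____no_output_____"
    ]
  ],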
[
[
"Now to display the sentences to the vetter in an interactive session, we need to create a list of all the test data files. This will serve as raw input to the `query_vetted` method of `active_tester`. ",
"_____no_output_____"
]
],
[
[
"#Create a list with all of the test data files to serve as the raw input to query vetted\nfile_list = []\nsentence_dirs = os.path.join(os.getcwd(),'test')\nfor root, dirs, files in os.walk(sentence_dirs):\n for name in files:\n if name.endswith('.txt'):\n local_path = os.path.join(root, name)\n file_list.append(os.path.join(sentence_dirs, local_path))",
"_____no_output_____"
]
],
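  [
    [
      "# Added sanity check: there should be one raw text file per test example, so the two\n# counts below are expected to match.\nprint(len(file_list), len(y_test))",
      "_____no_output_____"
    ]
  ],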
[
[
"Now, we are ready to estimate the performance of the classifier by querying the oracle.",
"_____no_output_____"
]
],
[
[
"#Active Tester with a Naive Estimator, Classifier Uncertainty Query Method, and Interactive Query Vetting\nbudget = 5\n\nactive_test = ActiveTester(Naive(metric=accuracy_score), \n ClassifierUncertainty())\nactive_test.standardize_data(X=X, \n classes=sent_data.target_names,\n Y_noisy=Y_noisy)\n\nactive_test.gen_model_predictions(clf)\nactive_test.query_vetted(True, budget, raw=file_list)\nactive_test.test()\nresults = active_test.get_test_results()",
"There may be a mismatch between the ordering of the vetted labels and the items. Please set rearragne to False\nBeginning preprocessing to find vetted labels of each class...\n\"\nGreat place to relax and have an awesome burger and beer.\n\"\n\n\nThe available labels are: ['negative', 'positive']\nLabel the provided item: positive\n\n\n\"\nThe grilled chicken was so tender and yellow from the saffron seasoning.\n\"\n\n\nThe available labels are: ['negative', 'positive']\nLabel the provided item: positive\n\n\n\"\nSome highlights : Great quality nigiri here!\n\"\n\n\nThe available labels are: ['negative', 'positive']\nLabel the provided item: positive\n\n\n\"\nStopped by this place while in Madison for the Ironman, very friendly, kind staff.\n\"\n\n\nThe available labels are: ['negative', 'positive']\nLabel the provided item: positive\n\n\n\"\nVery convenient, since we were staying at the MGM!\n\"\n\n\nThe available labels are: ['negative', 'positive']\nLabel the provided item: positive\n\n\nCompleted preprocessing\nBudget reduced from \"5\" to \"0\"\n"
],
[
"# View the result and compare to the true accuracy\nprint('Test metric with budget of', budget,': ', results['tester_metric'])\nprint('True accuracy of classifier: ', true_accuracy)",
"Test metric with budget of 5 : 0.6368715083798883\nTrue accuracy of classifier: 0.7620111731843575\n"
]
],
[
[
"## A Comparison of Query Strategies and Estimators\n\nBelow, we compare a couple of query strategies and estimators.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nabs_error_array = []\n\n# Initialize the estimators\nlearned = Learned(metric=accuracy_score, estimation_method=oracle_multiple_labels)\nnaive = Naive(metric=accuracy_score)\nestimator_list = {'Naive': naive, 'Learned': learned}\n\n# Initialize a few query strategies\nrand = Random()\nclassifier_uncertainty = ClassifierUncertainty()\nmcm = MCM(estimation_method=oracle_multiple_labels)\n\nquery_strategy_list = {'Random': rand, 'Classifier Uncertainty': classifier_uncertainty, \n 'Most Common Mistake': mcm}\n\n# Run active testing for each estimator-query pair, for a range of sample sizes\nsample_sizes = [100, 200, 300, 400, 500]\nfor est_k, est_v in estimator_list.items():\n for query_k, query_v in query_strategy_list.items():\n abs_error_array = []\n for i in sample_sizes:\n\n at = ActiveTester(est_v, query_v)\n\n #Set dataset and model values in the active tester object\n at.standardize_data(X=X, \n classes=sent_data.target_names, \n Y_ground_truth=Y_ground_truth, \n Y_noisy=Y_noisy)\n at.gen_model_predictions(clf)\n at.query_vetted(False, i)\n at.test()\n\n results = at.get_test_results()\n abs_error_array.append(np.abs(results['tester_metric'] - true_accuracy))\n\n plt.ylabel(\"Absolute Error\")\n plt.xlabel(\"Number Vetted\")\n\n plt.plot(sample_sizes, abs_error_array, label=est_k + '+' + query_k)\n plt.legend(loc='best')\n plt.title('Absolute Error vs Number Vetted')\n plt.grid(True)\n\nplt.show()",
"Beginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"100\" to \"98\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"200\" to \"198\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted 
preprocessing\nBudget reduced from \"300\" to \"298\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"400\" to \"398\"\nBeginning preprocessing to find vetted labels of each class...\nCompleted preprocessing\nBudget reduced from \"500\" to \"498\"\n"
]
],
[
[
"As you can see from the graph, the absolute error for the learned estimation method is smaller than for the naive method. There is not a large difference between the different query strategies.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0bd9bedccdf611e0efe3f37c9c8b99a3740a6ad | 9,627 | ipynb | Jupyter Notebook | examples/plato_colab.ipynb | iQua/plato | 76fdac06af8b4d85922cd12749b4a687e3161745 | [
"Apache-2.0"
] | null | null | null | examples/plato_colab.ipynb | iQua/plato | 76fdac06af8b4d85922cd12749b4a687e3161745 | [
"Apache-2.0"
] | null | null | null | examples/plato_colab.ipynb | iQua/plato | 76fdac06af8b4d85922cd12749b4a687e3161745 | [
"Apache-2.0"
] | 1 | 2021-05-18T15:03:32.000Z | 2021-05-18T15:03:32.000Z | 31.877483 | 351 | 0.546692 | [
[
[
"# Running Plato in Google's Colab Notebooks\n\n## 1. Preparation\n\n### Use Chrome broswer\n\nSince Colab is a product from Google, to take the most advantage of it, Chrome is the most recommended broswer here.\n\n### Activating GPU support\n\nIf you need GPU support in your project, you may activate it in Google Colab by clicking on `Runtime > Change runtime type` in the notebook menu and choosing `GPU` as the hardware accelerator. To check whether the GPU is available for computation, we import the deep learning framework [PyTorch](https://pytorch.org/):",
"_____no_output_____"
]
],
[
[
"import torch\ntorch.cuda.is_available()",
"_____no_output_____"
]
],
[
[
"If successful, the output of the cell above should print `True`.\n\n### Use Google Drive\n\nSince Google Colab removes all the files that you have downloaded or created when you end a session, the best option is to use GitHub to store your code, and Google Drive to store your datasets, logs, and anything else that would normally reside on your filesystem but wouldn’t be tracked by a git repo.\n\nWhen you run the code below, you will need to click a link and follow a process that takes a few seconds. When the process is complete, all of your drive files will be available via ‘/root/drive’ on your Colab instance, and this will allow you to structure your projects in the same way you would if you were using a cloud server.",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')\nroot_path = '/content/drive/My\\ Drive'\n%cd $root_path",
"_____no_output_____"
]
],
[
[
"## 2. Installing Plato with PyTorch\n\nClone Plato's public git repository on GitHub to your Google drive. ",
"_____no_output_____"
]
],
[
[
"!git clone https://github.com/TL-System/plato",
"_____no_output_____"
]
],
[
[
"Then install the required Python packages:",
"_____no_output_____"
]
],
[
[
"!pip install -r $root_path/plato/requirements.txt -U",
"_____no_output_____"
]
],
[
[
"Get into the `plato` directory:",
"_____no_output_____"
]
],
[
[
"!chmod -R ugo+rx $root_path/plato/run\n%cd $root_path/plato/",
"_____no_output_____"
]
],
[
[
"## 3. Running Plato\n\n### Make sure you don’t get disconnected\n\nRun the following cell when you plan to do a long training to avoid getting disconnected in the middle of it.",
"_____no_output_____"
]
],
[
[
"%%javascript\nfunction ClickConnect(){\nconsole.log(\"Working\");\ndocument.querySelector(\"colab-toolbar-button#connect\").click()\n}setInterval(ClickConnect,60000)",
"_____no_output_____"
]
],
[
[
"**Note:** Please use this responsibly. Getting booted from Colab is very annoying, but it is done to make resources available for others when you’re not actively using them. \n",
"_____no_output_____"
],
[
"### Setting up Weights and Biases\n\nSupport for logging using Weights and Biases (https://wandb.com) is built-in. It will prompt you to enter your key when starting your first run. If you don't wish to use Weights and Biases, set it to `offline`:",
"_____no_output_____"
]
],
[
[
"!wandb offline",
"_____no_output_____"
]
],
[
[
"### Running Plato in the Colab notebook\n\nTo start a federated learning training workload, run `run` from Plato's home directory. For example:",
"_____no_output_____"
]
],
[
[
"!./run -s 127.0.0.1:8000 -c ./configs/MNIST/fedavg_lenet5.yml",
"_____no_output_____"
]
],
[
[
"Here, `fedavg_lenet5.yml` is a sample configuration file that uses Federated Averaging as the federated learning algorithm, and LeNet5 as the model. Other configuration files under `plato/configs/` could also be used here.\n\n\n",
"_____no_output_____"
],
[
"### Running Plato in a terminal\n\nIt is strongly recommended and more convenient to run Plato in a terminal, preferably in Visual Studio Code. To do this, first sign up for a free account in [ngrok](https://ngrok.com), and then use your authentication token and your account password in the following code:",
"_____no_output_____"
]
],
[
[
"!pip install colab_ssh --upgrade",
"_____no_output_____"
],
[
"from getpass import getpass\nngrok_token = getpass('Your authentication token: ')\npassword = getpass('Your ngrok account password: ')\n\nfrom colab_ssh import launch_ssh, init_git\nlaunch_ssh(ngrok_token, password)",
"_____no_output_____"
]
],
[
[
"This will produce an SSH configuration for you to add to your Visual Studio Code setup, so that you can use **Remote-SSH: Connect to Host...** in Visual Studio Code to connect to this Colab instance. After your SSH connection is setup, you can use your instance just like any other remote virtual machine in the cloud. Detailed steps are:\n\n1. Install the `Remote-SSH: Editing Configuration Files` extension in Visual Studio Code.\n\n2. In Visual Studio Code, click on `View > Command Palette` in the menu (or use `Shift+Command+P`), and type `Remote-SSH: Add New SSH Host...`. It will ask you to enter SSH Connection Command. Enter `root@google_colab_ssh`.\n\n3. Select the SSH configuration file to update, copy the conguration information you get after running the above cell into the selected SSH configuration file. The conguration information should be similar to\n```\nHost google_colab_ssh\n\t\tHostName 0.tcp.ngrok.io\n\t\tUser root\n\t\tPort <your port number>\n```\nThen save this configuration file.\n\n4. Click on `View > Command Palette` again and type `Remote-SSH: Connect to Host...`. You should see the host `google_colab_ssh` you just added. Click it and Visual Studio will automatically open a new window for you, and prompt for your ngrok account password.\n\n5. Enter your ngrok account password and you will be connected to the remote.\n\n6. Open folder `/content/drive/MyDrive/plato/` and you are all set.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0bdafbfc4c3e8c5f794c08a21d6e8267b8e782f | 26,327 | ipynb | Jupyter Notebook | pipeline/pipeline_testing.ipynb | bscsjunaid/fake-news-detections | 4c30149c0e9c8e275196f1653b6c9c5c670e5085 | [
"MIT"
] | 119 | 2017-03-15T21:42:25.000Z | 2022-02-16T07:55:36.000Z | pipeline/pipeline_testing.ipynb | bscsjunaid/fake-news-detections | 4c30149c0e9c8e275196f1653b6c9c5c670e5085 | [
"MIT"
] | 2 | 2017-04-26T09:53:25.000Z | 2019-10-01T13:51:31.000Z | pipeline/pipeline_testing.ipynb | bscsjunaid/fake-news-detections | 4c30149c0e9c8e275196f1653b6c9c5c670e5085 | [
"MIT"
] | 53 | 2017-05-03T11:15:39.000Z | 2022-01-31T08:17:00.000Z | 84.38141 | 1,537 | 0.678847 | [
[
[
"import numpy as np\nimport pandas as pd\nimport nltk\nfrom gen_features import FeatureGenerator\nfrom model_loop import ModelLoop\nfrom sklearn.model_selection import train_test_split",
"/Users/aldengolab/miniconda3/envs/amlpp/lib/python3.6/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n/Users/aldengolab/miniconda3/envs/amlpp/lib/python3.6/site-packages/sklearn/grid_search.py:43: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.\n DeprecationWarning)\n"
],
[
"# test-train first; change to take a dataframe\n# save those to to file\n# call FeatureGenerator on training data; call FeatureGenerator.train() --> saves sparse matrices in attribute\n# call FeatureGenerator on test data; call Feature Generator.transform(new_datafile)\n# f.x --> training sparse matrix\n# f.new_x --> testing sparse matrix",
"_____no_output_____"
],
[
"data_fp = '../data_cleaning/articles1.csv'\ndata = pd.read_csv(data_fp)",
"_____no_output_____"
],
[
"data.drop('Unnamed: 0', axis=1, inplace=True)",
"_____no_output_____"
],
[
"data = data.sample(n=1000)",
"_____no_output_____"
],
[
"X, y = train_test_split(data)",
"_____no_output_____"
],
[
"X.to_csv('data/X_sample.csv')",
"_____no_output_____"
],
[
"y.to_csv('data/y_sample.csv')",
"_____no_output_____"
],
[
"config = {\n 'datafile': 'data/X_sample.csv',\n 'test_datafile': 'data/y_sample.csv',\n 'text_label': 'content',\n 'y_label': 'label',\n 'fts_to_try': ['CountVectorize', 'TfidfVectorize', 'CountPOS'],\n}",
"_____no_output_____"
],
[
"args = {k: v for k, v in config.items() if k != 'test_datafile'}",
"_____no_output_____"
],
[
"gen = FeatureGenerator(**args)",
"Creating feature: CountVectorize\nParameters: {'ngram_range': (1, 1)}\nxft size (750, 3000)\nCreating feature: TfidfVectorize\nParameters: {'ngram_range': (1, 2)}\nCreating feature: CountPOS\nParameters: {'language': 'english'}\n6044 features generated for 750 examples\n"
],
[
"gen.transform(config['test_datafile'])",
"6044 features generated for 250 examples\n"
],
[
"run = 'test_3-2'\nmodels = ['NB', 'RF', 'ET', 'LR', 'SVM', 'SGD', 'KNN']\niterations = 1\noutput_dir = 'output_{}/'.format(run)\nks = [0.05, 0.10]",
"_____no_output_____"
],
[
"loop = ModelLoop(gen.X_train, gen.X_test, gen.y_train, gen.y_test, models, iterations, output_dir,\n ks = ks, method='csc', roc=True)",
"_____no_output_____"
],
[
"loop.run()",
"_____no_output_____"
],
[
"pd.read_csv(output_dir + 'simple_report.csv', quotechar='\"', skipinitialspace = True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bdbecf4602cc66e52c2cc748d25435e1964427 | 6,390 | ipynb | Jupyter Notebook | tensorflow-handbook-swift-example.ipynb | huan/swift-concise-mnist | 89d556f9ca6b89371eee3c3bbe7d1609f983ca54 | [
"Apache-2.0"
] | 6 | 2019-11-21T09:40:53.000Z | 2021-05-01T15:36:28.000Z | tensorflow-handbook-swift-example.ipynb | huan/swift-concise-mnist | 89d556f9ca6b89371eee3c3bbe7d1609f983ca54 | [
"Apache-2.0"
] | 2 | 2019-08-28T19:02:47.000Z | 2019-08-29T01:15:42.000Z | tensorflow-handbook-swift-example.ipynb | huan/swift-concise-mnist | 89d556f9ca6b89371eee3c3bbe7d1609f983ca54 | [
"Apache-2.0"
] | 1 | 2021-01-03T12:18:28.000Z | 2021-01-03T12:18:28.000Z | 30.28436 | 262 | 0.490141 | [
[
[
"<a href=\"https://colab.research.google.com/github/huan/tensorflow-handbook-swift/blob/master/tensorflow-handbook-swift-example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Swift MNIST Example\n\nLearn more from Github: https://github.com/huan/tensorflow-handbook-swift\n",
"_____no_output_____"
],
[
"## MNIST Dataset\n\n<https://github.com/huan/swift-MNIST/>",
"_____no_output_____"
]
],
[
[
"%install-location $cwd/swift-install\n%install '.package(url: \"https://github.com/huan/swift-MNIST\", from: \"0.4.0\")' MNIST",
"Installing packages:\n\t.package(url: \"https://github.com/huan/swift-MNIST\", from: \"0.4.0\")\n\t\tMNIST\nWith SwiftPM flags: []\nWorking in: /tmp/tmpdu69j2gk/swift-install\nFetching https://github.com/huan/swift-MNIST\nCloning https://github.com/huan/swift-MNIST\nResolving https://github.com/huan/swift-MNIST at 0.4.0\n[1/2] Compiling MNIST MNIST.swift\n[2/3] Merging module MNIST\n[3/4] Compiling jupyterInstalledPackages jupyterInstalledPackages.swift\n[4/5] Merging module jupyterInstalledPackages\n[5/5] Linking libjupyterInstalledPackages.so\nInitializing Swift...\nInstallation complete!\n"
]
],
[
[
"## Define a Simple MLP Model",
"_____no_output_____"
]
],
[
[
"import TensorFlow\nimport Python\nimport Foundation\n\nstruct MLP: Layer {\n typealias Input = Tensor<Float>\n typealias Output = Tensor<Float>\n\n var flatten = Flatten<Float>()\n var dense = Dense<Float>(inputSize: 784, outputSize: 10)\n \n @differentiable\n public func callAsFunction(_ input: Input) -> Output {\n var x = input\n x = flatten(x)\n x = dense(x)\n return x\n } \n}\n\nvar model = MLP()\nlet optimizer = Adam(for: model)",
"_____no_output_____"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"import MNIST\n\nlet mnist = MNIST()\nlet ((trainImages, trainLabels), (testImages, testLabels)) = mnist.loadData()\n\nlet imageBatch = Dataset(elements: trainImages).batched(32)\nlet labelBatch = Dataset(elements: trainLabels).batched(32)\n\nfor (X, y) in zip(imageBatch, labelBatch) {\n // Caculate the gradient\n // let (_loss, grads) = valueWithGradient(at: model) { model -> Tensor<Float> in\n let grads = gradient(at: model) { model -> Tensor<Float> in\n let logits = model(X)\n return softmaxCrossEntropy(logits: logits, labels: y)\n }\n\n // Update parameters by optimizer\n optimizer.update(&model.allDifferentiableVariables, along: grads) \n}\n\nlet logits = model(testImages)\nlet acc = mnist.getAccuracy(y: testLabels, logits: logits)\n\nprint(\"Test Accuracy: \\(acc)\" )",
"Downloading train-images-idx3-ubyte ...\nDownloading train-labels-idx1-ubyte ...\nReading data.\nConstructing data tensors.\nTest Accuracy: 0.9125\n"
]
],
[
[
"- Credit: This example is inspired from [A set of notebooks explaining swift for tensorflow optimized to run in Google Collaboratory.](https://github.com/zaidalyafeai/Swift4TF)\n- License [Apache-2.0](https://github.com/tensorflow/swift-models/blob/stable/LICENSE)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0bdbed908145f035030f6e8280083c24208ee6b | 23,058 | ipynb | Jupyter Notebook | Tutorial 1 Scenario category.ipynb | quepas/ScenarioDomainModel | 005847a4956ac9a4680d230c02ef802ed7a6742a | [
"MIT"
] | null | null | null | Tutorial 1 Scenario category.ipynb | quepas/ScenarioDomainModel | 005847a4956ac9a4680d230c02ef802ed7a6742a | [
"MIT"
] | null | null | null | Tutorial 1 Scenario category.ipynb | quepas/ScenarioDomainModel | 005847a4956ac9a4680d230c02ef802ed7a6742a | [
"MIT"
] | null | null | null | 42.621072 | 687 | 0.626811 | [
[
[
"# Tutorial 1: Instatiating a *scenario category*\n\nIn this tutorial, we will cover the following items:\n\n1. Create *actor categories*, *activity categories*, and *physical thing categories*\n2. Instantiate a *scenario category*\n3. Show all tags of the *scenario category*\n4. Use the `includes` function of a *scenario category*\n5. Export the objects",
"_____no_output_____"
]
],
[
[
"# Before starting, let us do the necessary imports\nimport os\nimport json\nfrom domain_model import ActorCategory, ActivityCategory, Constant, ScenarioCategory, \\\n Sinusoidal, Spline3Knots, StateVariable, PhysicalElementCategory, Tag, VehicleType, \\\n actor_category_from_json, scenario_category_from_json",
"_____no_output_____"
]
],
[
[
"## 1. Create *actor categories*, *activity categories*, and the *static physical thing categories*\n\nIn this tutorial, we will create a *scenario category* in which another vehicle changes lane such that it becomes the ego vehicle's leading vehicle. This is often referred to as a \"cut-in scenario\". The *scenario category* is depicted in the figure below. Here, the blue car represents the ego vehicle and the red car represents the vehicle that performs the cut in.\n\n<img src=\"./examples/images/cut-in.png\" alt=\"Cut in\" width=\"400\"/>\n\nTo create the *scenario category*, we first need to create the *actor categories*, *activity categories*, and the *physical things*. Let us start with the *actor categories*. Just like most objects, an *actor category* has a `name`, a `uid` (a unique ID), and `tags`. Additionally, an *actor category* has a `vehicle_type`. \n\nIn this implementation of the domain model, it is checked whether the correct types are used. For example, `name` must be a string. Similarly, `uid` must be an integer. `tags` must be a (possibly empty) list of type `Tag`. This is to ensure that only tags are chosen out of a predefined list. This is done for consistency, such that, for example, users do not use the tag \"braking\" at one time and \"Braking\" at another time. Note, however, that the disadvantage is that it might be very well possible that the list of possible tags is not complete, so if there is a good reason to add a `Tag` this should be allowed. Lastly, the `vehicle_type` must be of type `VehicleType`.\n\nNow let us create the *actor categories*. For this example, we assume that both *actor categories* are \"vehicles\". Note that we can ignore the `uid` for now. When not `uid` is given, a unique ID is generated automatically. If no `tags` are provided, it will default to an empty list.",
"_____no_output_____"
]
],
[
[
"EGO_VEHICLE = ActorCategory(VehicleType.Vehicle, name=\"Ego vehicle\", \n tags=[Tag.EgoVehicle, Tag.RoadUserType_Vehicle])\nTARGET_VEHICLE = ActorCategory(VehicleType.Vehicle, name=\"Target vehicle\",\n tags=[Tag.RoadUserType_Vehicle])",
"_____no_output_____"
]
],
[
[
"It is as simple as that. If it does not throw an error, you can be assured that a correct *actor category* is created. For example, if we would forget to add the brackets around the `Tag.RoadUserType_Vehicle` - such that it would not be a *list* of `Tag` - an error will be thrown:",
"_____no_output_____"
]
],
[
[
"# The following code results in an error! \n# The error is captured as to show only the final error message.\ntry:\n ActorCategory(VehicleType.Vehicle, name=\"Target vehicle\", tags=Tag.RoadUserType_Vehicle)\nexcept TypeError as error:\n print(error)",
"Input 'tags' should be of type typing.List but is of type <enum 'Tag'>.\n"
]
],
[
[
"Now let us create the *activity categories*:",
"_____no_output_____"
]
],
[
[
"FOLLOWING_LANE = ActivityCategory(Constant(), StateVariable.LATERAL_POSITION,\n name=\"Following lane\",\n tags=[Tag.VehicleLateralActivity_GoingStraight])\nCHANGING_LANE = ActivityCategory(Sinusoidal(), StateVariable.LATERAL_POSITION,\n name=\"Changing lane\",\n tags=[Tag.VehicleLateralActivity_ChangingLane])\nDRIVING_FORWARD = ActivityCategory(Spline3Knots(), StateVariable.SPEED,\n name=\"Driving forward\",\n tags=[Tag.VehicleLongitudinalActivity_DrivingForward])",
"_____no_output_____"
]
],
[
[
"The last object we need to define before we can define the *scenario category* is the *static physical thing category*. A *scenario category* may contain multiple *physical things*, but for now we only define one that specifies the road layout. We assume that the scenario takes place at a straight motorway with multiple lanes:",
"_____no_output_____"
]
],
[
[
"MOTORWAY = PhysicalElementCategory(description=\"Motorway with multiple lanes\",\n name=\"Motorway\",\n tags=[Tag.RoadLayout_Straight,\n Tag.RoadType_PrincipleRoad_Motorway])",
"_____no_output_____"
]
],
[
[
"## 2. Instantiate a *scenario category*\n\nTo define a *scenario category*, we need a description and a location to an image. After this, the static content of the scenario can be specified using the `set_physical_things` function. Next, to describe the dynamic content of the scenarios, the *actor categories* can be passed using the `set_actors` function and the *activity categories* can be passed using the `set_activities` function. Finally, using `set_acts`, it is described which activity is connected to which actor. \n\nNote: It is possible that two actors perform the same activity. In this example, both the ego vehicle and the target vehicle are driving forward.",
"_____no_output_____"
]
],
[
[
"CUTIN = ScenarioCategory(\"./examples/images/cut-in.png\",\n description=\"Cut-in at the motorway\",\n name=\"Cut-in\")\nCUTIN.set_physical_elements([MOTORWAY])\nCUTIN.set_actors([EGO_VEHICLE, TARGET_VEHICLE])\nCUTIN.set_activities([FOLLOWING_LANE, CHANGING_LANE, DRIVING_FORWARD])\nCUTIN.set_acts([(EGO_VEHICLE, DRIVING_FORWARD), (EGO_VEHICLE, FOLLOWING_LANE),\n (TARGET_VEHICLE, DRIVING_FORWARD), (TARGET_VEHICLE, CHANGING_LANE)])",
"_____no_output_____"
]
],
[
[
"## 3. Show all tags of the *scenario category*\n\nThe tags should be used to define the *scenario category* in such a manner that also a computer can understand. However, we did not pass any tags to the *scenario category* itself. On the other hand, the attributes of the *scenario category* (in this case, the *physical things*, the *activity categories*, and the *actor categories*) have tags. Using the `derived_tags` function of the *scenario category*, these tags can be retrieved.\n\nRunning the `derived_tags` function returns a dictionary with (key,value) pairs. Each key is formatted as `<name>::<class>` and the corresponding value contains a list of tags that are associated to that particular object. For example, `Ego vehicle::ActorCategory` is a key and the corresponding tags are the tags that are passed when instantiating the ego vehicle (`EgoVehicle`) and the tags that are part of the *activity categories* that are connected with the ego vehicle (`GoingStraight` and `DrivingForward`).",
"_____no_output_____"
]
],
[
[
"CUTIN.derived_tags()",
"_____no_output_____"
]
],
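  [
    [
      "# Added illustration: derived_tags() returns a dictionary keyed by '<name>::<class>'.\n# The loop below lists the tag names per key; it assumes Tag is a Python enum, so each\n# tag exposes a .name attribute.\nfor key, tags in CUTIN.derived_tags().items():\n    print(key, [tag.name for tag in tags])",
      "_____no_output_____"
    ]
  ],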
[
[
"Another way - and possibly easier way - to show the tags, is to simply print the scenario category. Doing this will show the name, the description, and all tags of the scenario category.",
"_____no_output_____"
]
],
[
[
"print(CUTIN)",
"Name: Cut-in\nDescription:\n Cut-in at the motorway\nTags:\n├─ Ego vehicle::ActorCategory\n│ ├─ EgoVehicle\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_GoingStraight\n├─ Target vehicle::ActorCategory\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_ChangingLane\n└─ Motorway::PhysicalElementCategory\n ├─ RoadLayout_Straight\n └─ RoadType_PrincipleRoad_Motorway\n\n"
]
],
[
[
"## 4. Use the *includes* function of a *scenario category*\n\nA *scenario category* A includes another *scenario category* B if it comprises all scenarios that are comprised in B. Loosely said, this means that *scenario category* A is \"more general\" than *scenario category* B. To demonstrate this, let us first create another *scenario category*. The only different between the following *scenario category* is that the target vehicle comes from the left side of the ego vehicle. This means that the target vehicle performs a right lane change, whereas our previously defined *scenario category* did not define to which side the lane change was.",
"_____no_output_____"
]
],
[
[
"CHANGING_LANE_RIGHT = ActivityCategory(Sinusoidal(), StateVariable.LATERAL_POSITION,\n name=\"Changing lane right\",\n tags=[Tag.VehicleLateralActivity_ChangingLane_Right])\nCUTIN_LEFT = ScenarioCategory(\"./examples/images/cut-in.png\", \n description=\"Cut-in from the left at the motorway\",\n name=\"Cut-in from left\")\nCUTIN_LEFT.set_physical_elements([MOTORWAY])\nCUTIN_LEFT.set_actors([EGO_VEHICLE, TARGET_VEHICLE])\nCUTIN_LEFT.set_activities([FOLLOWING_LANE, CHANGING_LANE_RIGHT, DRIVING_FORWARD])\nCUTIN_LEFT.set_acts([(EGO_VEHICLE, DRIVING_FORWARD), (EGO_VEHICLE, FOLLOWING_LANE),\n (TARGET_VEHICLE, DRIVING_FORWARD), (TARGET_VEHICLE, CHANGING_LANE_RIGHT)])",
"_____no_output_____"
]
],
[
[
"To ensure ourselves that we correctly created a new *scenario category*, we can print the scenario category. Note the difference with the previously defined *scenario category*: now the target vehicle performs a right lane change (see the tag `VehicleLateralActivity_ChangingLane_Right`).",
"_____no_output_____"
]
],
[
[
"print(CUTIN_LEFT)",
"Name: Cut-in from left\nDescription:\n Cut-in from the left at the motorway\nTags:\n├─ Ego vehicle::ActorCategory\n│ ├─ EgoVehicle\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_GoingStraight\n├─ Target vehicle::ActorCategory\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_ChangingLane_Right\n└─ Motorway::PhysicalElementCategory\n ├─ RoadLayout_Straight\n └─ RoadType_PrincipleRoad_Motorway\n\n"
]
],
[
[
"Because our original *scenario category* (`CUTIN`) is more general than the *scenario category* we just created (`CUTIN_LEFT`), we expect that `CUTIN` *includes* `CUTIN_LEFT`. In other words: because all \"cut ins from the left\" are also \"cut ins\", `CUTIN` *includes* `CUTIN_LEFT`.\n\nThe converse is not true: not all \"cut ins\" are \"cut ins from the left\". \n\nLet's check it:",
"_____no_output_____"
]
],
[
[
"print(CUTIN.includes(CUTIN_LEFT)) # True\nprint(CUTIN_LEFT.includes(CUTIN)) # False",
"True\nFalse\n"
]
],
[
[
"## 5. Export the objects\n\nIt would be cumbersome if one would be required to define a scenario category each time again. Luckily, there is an easy way to export the objects we have created. \n\nEach object of this domain model comes with a `to_json` function and a `to_json_full` function. These functions return a dictionary that can be directly written to a .json file. The difference between `to_json` and `to_json_full` is that with `to_json`, rather than also returning the full dictionary of the attributes, only a reference (using the unique ID and the name) is returned. In case of the *physical thing*, *actor category*, and *activity category*, this does not make any difference. For the *scenario category*, however, this makes a difference. \n\nTo see this, let's see what the `to_json` function returns.",
"_____no_output_____"
]
],
[
[
"CUTIN.to_json()",
"_____no_output_____"
]
],
[
[
"As can be seen, the *physical thing category* (see `physical_thing_categories`) only returns the `name` and `uid`. This is not enough information for us if we would like to recreate the *physical thing category*. Therefore, for now we will use the `to_json_full` functionality. \n\nNote, however, that if we would like to store the objects in a database, it would be better to have separate tables for *scenario categories*, *physical thing categories*, *activity categories*, and *actor categories*. In that case, the `to_json` function becomes handy. We will demonstrate this in a later tutorial.\n\nAlso note that Python has more efficient ways to store objects than through some json code. However, the reason to opt for the current approach is that this would be easily implementable in a database, such that it is easily possible to perform queries on the data. Again, the actual application of this goes beyond the current tutorial.\n\nTo save the returned dictionary to a .json file, we will use the external library `json`.",
"_____no_output_____"
]
],
[
[
"FILENAME = os.path.join(\"examples\", \"cutin_qualitative.json\")\nwith open(FILENAME, \"w\") as FILE:\n json.dump(CUTIN.to_json_full(), FILE, indent=4)",
"_____no_output_____"
]
],
[
[
"Let us also save the other *scenario category*, such that we can use it for a later tutorial.",
"_____no_output_____"
]
],
[
[
"FILENAME_CUTIN_LEFT = os.path.join(\"examples\", \"cutin_left_qualitative.json\")\nwith open(FILENAME_CUTIN_LEFT, \"w\") as FILE:\n json.dump(CUTIN_LEFT.to_json_full(), FILE, indent=4)",
"_____no_output_____"
]
],
[
[
"So how can we use this .json code to create the *scenario category*? As each object has a `to_json_full` function, for each object there is a `<class_name>_from_json` function. For the objects discussed in this toturial, we have:\n\n- for a *physical thing category*: `physical_thing_category_from_json`\n- for an *actor category*: `actor_category_from_json`\n- for an *activity category*: `actvitiy_category_from_json`\n- for a *model*: `model_from_json`\n- for a *scenario category*: `scenario_category_from_json`\n\nEach of these functions takes as input a dictionary that could be a potential output of its corresponding `to_json_full` function. \n\nTo demonstrate this, let's load the just created .json file and see if we can create a new *scenario category* from this.",
"_____no_output_____"
]
],
[
[
"with open(FILENAME, \"r\") as FILE:\n CUTIN2 = scenario_category_from_json(json.load(FILE))",
"_____no_output_____"
]
],
[
[
"To see that this returns a similar *scenario category* as our previously created `CUTIN`, we can print the just created scenario category:",
"_____no_output_____"
]
],
[
[
"print(CUTIN2)",
"Name: Cut-in\nDescription:\n Cut-in at the motorway\nTags:\n├─ Ego vehicle::ActorCategory\n│ ├─ EgoVehicle\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_GoingStraight\n├─ Target vehicle::ActorCategory\n│ ├─ RoadUserType_Vehicle\n│ ├─ VehicleLongitudinalActivity_DrivingForward\n│ └─ VehicleLateralActivity_ChangingLane\n└─ Motorway::PhysicalElementCategory\n ├─ RoadLayout_Straight\n └─ RoadType_PrincipleRoad_Motorway\n\n"
]
],
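[
[
"*(Added sketch, not part of the original tutorial.)* The same round trip works for the other objects. The example below assumes that `actor_category_from_json` is imported from the same module as `scenario_category_from_json`; it serializes the ego vehicle with `to_json_full` and recreates it from the resulting dictionary.",
"_____no_output_____"
]
],
[
[
"# Hypothetical round trip for an actor category (assumes the import mentioned above).\nEGO_VEHICLE_COPY = actor_category_from_json(EGO_VEHICLE.to_json_full())\nprint(EGO_VEHICLE_COPY)",
"_____no_output_____"
]
],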
[
[
"Note that although the just created *scenario category* is now similar to `CUTIN`, it is a different object in Python. That is, if we would change `CUTIN2`, that change will not apply to `CUTIN`.\n\nYou reached the end of the first tutorial. In the [next tutorial](./Tutorial%202%20Scenario.ipynb), we will see how we can instantiate a *scenario*.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0bdc90b9c1f685ddd47da59157eeeda20512d29 | 200,760 | ipynb | Jupyter Notebook | _posts/python/fundamentals/data-api/grid-api.ipynb | yutianc/plotly_documents | 53065409a63d64c8a0fb64595762f710e8ba13cc | [
"CC-BY-3.0"
] | null | null | null | _posts/python/fundamentals/data-api/grid-api.ipynb | yutianc/plotly_documents | 53065409a63d64c8a0fb64595762f710e8ba13cc | [
"CC-BY-3.0"
] | null | null | null | _posts/python/fundamentals/data-api/grid-api.ipynb | yutianc/plotly_documents | 53065409a63d64c8a0fb64595762f710e8ba13cc | [
"CC-BY-3.0"
] | null | null | null | 307.91411 | 177,248 | 0.908443 | [
[
[
"#### New to Plotly?\nPlotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).\n<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).\n<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!\n",
"_____no_output_____"
],
[
"#### Creating a Plotly Grid\nYou can instantiate a grid with data by either uploading tabular data to Plotly or by creating a Plotly `grid` using the API. To upload the grid we will use `plotly.plotly.grid_ops.upload()`. It takes the following arguments:\n- `grid` (Grid Object): the actual grid object that you are uploading.\n- `filename` (str): name of the grid in your plotly account,\n- `world_readable` (bool): if `True`, the grid is `public` and can be viewed by anyone in your files. If `False`, it is private and can only be viewed by you. \n- `auto_open` (bool): if determines if the grid is opened in the browser or not.\n\nYou can run `help(py.grid_ops.upload)` for a more detailed description of these and all the arguments.",
"_____no_output_____"
]
],
[
[
"import plotly\nimport plotly.plotly as py\nimport plotly.tools as tls\nimport plotly.graph_objs as go\nfrom plotly.grid_objs import Column, Grid\n\nfrom datetime import datetime as dt\nimport numpy as np\nfrom IPython.display import Image\n\ncolumn_1 = Column(['可以', '不可以', '随便'], '第一列')\ncolumn_2 = Column([1, 2, 3], '第二列') # Tabular data can be numbers, strings, or dates\ngrid = Grid([column_1, column_2])\nurl = py.grid_ops.upload(grid, \n filename='grid_ex_'+str(dt.now()), \n world_readable=True, \n auto_open=False)\nprint(url)",
"https://plot.ly/~yutianc/29/\n"
]
],
[
[
"#### View and Share your Grid\nYou can view your newly created grid at the `url`:",
"_____no_output_____"
]
],
[
[
"Image('view_grid_url.png')",
"_____no_output_____"
]
],
[
[
"You are also able to view the grid in your list of files inside your [organize folder](https://plot.ly/organize).",
"_____no_output_____"
],
[
"#### Upload Dataframes to Plotly\nAlong with uploading a grid, you can upload a Dataframe as well as convert it to raw data as a grid:",
"_____no_output_____"
]
],
[
[
"import plotly.plotly as py\nimport plotly.figure_factory as ff\n\nimport pandas as pd\n\ndf = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/2014_apple_stock.csv')\ndf_head = df.head()\ntable = ff.create_table(df_head)\npy.iplot(table, filename='dataframe_ex_preview', oa)",
"_____no_output_____"
],
[
"grid = Grid([Column(df[column_name], column_name) for column_name in df.columns])\nurl = py.grid_ops.upload(grid, filename='dataframe_ex_'+str(dt.now()), world_readable=True, auto_open=True)\nprint(url)",
"https://plot.ly/~chelsea_lyn/17399/\n"
]
],
[
[
"#### Making Graphs from Grids\nPlotly graphs are usually described with data embedded in them. For example, here we place `x` and `y` data directly into our `Histogram2dContour` object:",
"_____no_output_____"
]
],
[
[
"x = np.random.randn(1000)\ny = np.random.randn(1000) + 1\n\ndata = [\n go.Histogram2dContour(\n x=x,\n y=y\n )\n]\n\npy.iplot(data, filename='Example 2D Histogram Contour')",
"_____no_output_____"
]
],
[
[
"We can also create graphs based off of references to columns of grids. Here, we'll upload several `column`s to our Plotly account:",
"_____no_output_____"
]
],
[
[
"column_1 = Column(np.random.randn(1000), 'column 1')\ncolumn_2 = Column(np.random.randn(1000)+1, 'column 2')\ncolumn_3 = Column(np.random.randn(1000)+2, 'column 3')\ncolumn_4 = Column(np.random.randn(1000)+3, 'column 4')\n\ngrid = Grid([column_1, column_2, column_3, column_4])\n#url = py.grid_ops.upload(grid, filename='randn_int_offset_'+str(dt.now()))\nurl = py.grid_ops.upload(grid, filename='randn_int_offset')\nprint(url)",
"https://plot.ly/~yutianc/36/\n"
],
[
"#Image('rand_int_histogram_view.png')",
"_____no_output_____"
]
],
[
[
"#### Make Graph from Raw Data\nInstead of placing data into `x` and `y`, we'll place our Grid columns into `xsrc` and `ysrc`:",
"_____no_output_____"
]
],
[
[
"data = [\n go.Histogram2dContour(\n xsrc=grid[0],\n ysrc=grid[1]\n )\n]\n\npy.iplot(data, filename='2D Contour from Grid Data')",
"_____no_output_____"
]
],
[
[
"So, when you view the data, you'll see your original grid, not just the columns that compose this graph:",
"_____no_output_____"
],
[
"#### Attaching Meta Data to Grids\nIn [Plotly Enterprise](https://plot.ly/product/enterprise/), you can upload and assign free-form JSON `metadata` to any grid object. This means that you can keep all of your raw data in one place, under one grid.\n\nIf you update the original data source, in the workspace or with our API, all of the graphs that are sourced from it will be updated as well. You can make multiple graphs from a single Grid and you can make a graph from multiple grids. You can also add rows and columns to existing grids programatically.",
"_____no_output_____"
]
],
[
[
"meta = {\n \"Month\": \"November\",\n \"Experiment ID\": \"d3kbd\",\n \"Operator\": \"James Murphy\",\n \"Initial Conditions\": {\n \"Voltage\": 5.5\n }\n}\n\n#grid_url = py.grid_ops.upload(grid, filename='grid_with_metadata_'+str(dt.now()), meta=meta)\ngrid_url = py.grid_ops.upload(grid, filename='grid_with_metadata', meta=meta)\nprint(url)",
"https://plot.ly/~yutianc/36/\n"
],
[
"#Image('grid_with_metadata.png')",
"_____no_output_____"
]
],
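[
[
"*(Added sketch, not in the original notebook.)* Rows and columns can also be appended to an existing grid. The calls below follow the `grid_ops.append_rows` and `grid_ops.append_columns` signatures shown in the reference output further down; the values themselves are arbitrary examples.",
"_____no_output_____"
]
],
[
[
"# Append one row (one value per existing column) to the grid uploaded above.\nnew_row = [0.1, 0.2, 0.3, 0.4]\npy.grid_ops.append_rows([new_row], grid=grid)\n\n# Append a new column; shorter columns are padded by Plotly with empty strings.\ncolumn_5 = Column(np.random.randn(1000), 'column 5')\npy.grid_ops.append_columns([column_5], grid=grid)",
"_____no_output_____"
]
],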
[
[
"#### Reference",
"_____no_output_____"
]
],
[
[
"help(py.grid_ops)",
"Help on class grid_ops in module plotly.plotly.plotly:\n\nclass grid_ops\n | Interface to Plotly's Grid API.\n | Plotly Grids are Plotly's tabular data object, rendered\n | in an online spreadsheet. Plotly graphs can be made from\n | references of columns of Plotly grid objects. Free-form\n | JSON Metadata can be saved with Plotly grids.\n | \n | To create a Plotly grid in your Plotly account from Python,\n | see `grid_ops.upload`.\n | \n | To add rows or columns to an existing Plotly grid, see\n | `grid_ops.append_rows` and `grid_ops.append_columns`\n | respectively.\n | \n | To delete one of your grid objects, see `grid_ops.delete`.\n | \n | Class methods defined here:\n | \n | append_columns(cls, columns, grid=None, grid_url=None) from __builtin__.classobj\n | Append columns to a Plotly grid.\n | \n | `columns` is an iterable of plotly.grid_objs.Column objects\n | and only one of `grid` and `grid_url` needs to specified.\n | \n | `grid` is a ploty.grid_objs.Grid object that has already been\n | uploaded to plotly with the grid_ops.upload method.\n | \n | `grid_url` is a unique URL of a `grid` in your plotly account.\n | \n | Usage example 1: Upload a grid to Plotly, and then append a column\n | ```\n | from plotly.grid_objs import Grid, Column\n | import plotly.plotly as py\n | column_1 = Column([1, 2, 3], 'time')\n | grid = Grid([column_1])\n | py.grid_ops.upload(grid, 'time vs voltage')\n | \n | # append a column to the grid\n | column_2 = Column([4, 2, 5], 'voltage')\n | py.grid_ops.append_columns([column_2], grid=grid)\n | ```\n | \n | Usage example 2: Append a column to a grid that already exists on\n | Plotly\n | ```\n | from plotly.grid_objs import Grid, Column\n | import plotly.plotly as py\n | \n | grid_url = 'https://plot.ly/~chris/3143'\n | column_1 = Column([1, 2, 3], 'time')\n | py.grid_ops.append_columns([column_1], grid_url=grid_url)\n | ```\n | \n | append_rows(cls, rows, grid=None, grid_url=None) from __builtin__.classobj\n | Append rows to a Plotly grid.\n | \n | `rows` is an iterable of rows, where each row is a\n | list of numbers, strings, or dates. The number of items\n | in each row must be equal to the number of columns\n | in the grid. 
If appending rows to a grid with columns of\n | unequal length, Plotly will fill the columns with shorter\n | length with empty strings.\n | \n | Only one of `grid` and `grid_url` needs to specified.\n | \n | `grid` is a ploty.grid_objs.Grid object that has already been\n | uploaded to plotly with the grid_ops.upload method.\n | \n | `grid_url` is a unique URL of a `grid` in your plotly account.\n | \n | Usage example 1: Upload a grid to Plotly, and then append rows\n | ```\n | from plotly.grid_objs import Grid, Column\n | import plotly.plotly as py\n | column_1 = Column([1, 2, 3], 'time')\n | column_2 = Column([5, 2, 7], 'voltage')\n | grid = Grid([column_1, column_2])\n | py.grid_ops.upload(grid, 'time vs voltage')\n | \n | # append a row to the grid\n | row = [1, 5]\n | py.grid_ops.append_rows([row], grid=grid)\n | ```\n | \n | Usage example 2: Append a row to a grid that already exists on Plotly\n | ```\n | from plotly.grid_objs import Grid\n | import plotly.plotly as py\n | \n | grid_url = 'https://plot.ly/~chris/3143'\n | \n | row = [1, 5]\n | py.grid_ops.append_rows([row], grid=grid_url)\n | ```\n | \n | delete(cls, grid=None, grid_url=None) from __builtin__.classobj\n | Delete a grid from your Plotly account.\n | \n | Only one of `grid` or `grid_url` needs to be specified.\n | \n | `grid` is a plotly.grid_objs.Grid object that has already\n | been uploaded to Plotly.\n | \n | `grid_url` is the URL of the Plotly grid to delete\n | \n | Usage example 1: Upload a grid to plotly, then delete it\n | ```\n | from plotly.grid_objs import Grid, Column\n | import plotly.plotly as py\n | column_1 = Column([1, 2, 3], 'time')\n | column_2 = Column([4, 2, 5], 'voltage')\n | grid = Grid([column_1, column_2])\n | py.grid_ops.upload(grid, 'time vs voltage')\n | \n | # now delete it, and free up that filename\n | py.grid_ops.delete(grid)\n | ```\n | \n | Usage example 2: Delete a plotly grid by url\n | ```\n | import plotly.plotly as py\n | \n | grid_url = 'https://plot.ly/~chris/3'\n | py.grid_ops.delete(grid_url=grid_url)\n | ```\n | \n | upload(cls, grid, filename, world_readable=True, auto_open=True, meta=None) from __builtin__.classobj\n | Upload a grid to your Plotly account with the specified filename.\n | \n | Positional arguments:\n | - grid: A plotly.grid_objs.Grid object,\n | call `help(plotly.grid_ops.Grid)` for more info.\n | - filename: Name of the grid to be saved in your Plotly account.\n | To save a grid in a folder in your Plotly account,\n | separate specify a filename with folders and filename\n | separated by backslashes (`/`).\n | If a grid, plot, or folder already exists with the same\n | filename, a `plotly.exceptions.RequestError` will be\n | thrown with status_code 409\n | \n | Optional keyword arguments:\n | - world_readable (default=True): make this grid publically (True)\n | or privately (False) viewable.\n | - auto_open (default=True): Automatically open this grid in\n | the browser (True)\n | - meta (default=None): Optional Metadata to associate with\n | this grid.\n | Metadata is any arbitrary\n | JSON-encodable object, for example:\n | `{\"experiment name\": \"GaAs\"}`\n | \n | Filenames must be unique. To overwrite a grid with the same filename,\n | you'll first have to delete the grid with the blocking name. 
See\n | `plotly.plotly.grid_ops.delete`.\n | \n | Usage example 1: Upload a plotly grid\n | ```\n | from plotly.grid_objs import Grid, Column\n | import plotly.plotly as py\n | column_1 = Column([1, 2, 3], 'time')\n | column_2 = Column([4, 2, 5], 'voltage')\n | grid = Grid([column_1, column_2])\n | py.grid_ops.upload(grid, 'time vs voltage')\n | ```\n | \n | Usage example 2: Make a graph based with data that is sourced\n | from a newly uploaded Plotly grid\n | ```\n | import plotly.plotly as py\n | from plotly.grid_objs import Grid, Column\n | from plotly.graph_objs import Scatter\n | # Upload a grid\n | column_1 = Column([1, 2, 3], 'time')\n | column_2 = Column([4, 2, 5], 'voltage')\n | grid = Grid([column_1, column_2])\n | py.grid_ops.upload(grid, 'time vs voltage')\n | \n | # Build a Plotly graph object sourced from the\n | # grid's columns\n | trace = Scatter(xsrc=grid[0], ysrc=grid[1])\n | py.plot([trace], filename='graph from grid')\n | ```\n | \n | ----------------------------------------------------------------------\n | Static methods defined here:\n | \n | ensure_uploaded(fid)\n\n"
],
[
"from IPython.display import display, HTML\n\ndisplay(HTML('<link href=\"//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700\" rel=\"stylesheet\" type=\"text/css\" />'))\ndisplay(HTML('<link rel=\"stylesheet\" type=\"text/css\" href=\"http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css\">'))\n\n! pip install git+https://github.com/plotly/publisher.git --upgrade\nimport publisher\npublisher.publish(\n 'grid-api.ipynb', 'python/data-api/', 'Upload Data to Plotly from Python',\n 'How to upload data to Plotly from Python with the Plotly Grid API.',\n title = 'Plotly Data API', name = 'Plots from Grids', order = 5,\n language='python', has_thumbnail='true', thumbnail='thumbnail/table.jpg', display_as='file_settings'\n)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0bddfc713da43109ca0cf558a5f79868ae25e8d | 28,841 | ipynb | Jupyter Notebook | Lecture-Notes/2019/PSS-2019-Day4.ipynb | unmeshvrije/python-for-beginners | d8943130bfd2499a458d92d5f6db97170fd53810 | [
"Apache-2.0"
] | 7 | 2019-08-13T15:36:50.000Z | 2021-09-09T20:37:21.000Z | Lecture-Notes/2019/PSS-2019-Day4.ipynb | unmeshvrije/python-for-beginners | d8943130bfd2499a458d92d5f6db97170fd53810 | [
"Apache-2.0"
] | 2 | 2019-07-04T08:30:38.000Z | 2019-07-16T13:44:45.000Z | Lecture-Notes/2019/PSS-2019-Day4.ipynb | unmeshvrije/python-for-beginners | d8943130bfd2499a458d92d5f6db97170fd53810 | [
"Apache-2.0"
] | 4 | 2019-07-29T10:57:24.000Z | 2021-03-17T15:02:36.000Z | 17.650551 | 402 | 0.430117 | [
[
[
"# Python course Day 4",
"_____no_output_____"
],
[
"## Dictionaries",
"_____no_output_____"
]
],
[
[
"student = {\"number\": 570, \"name\":\"Simon\", \"age\":23, \"height\":165}",
"_____no_output_____"
],
[
"print(student)",
"{'number': 570, 'name': 'Simon', 'age': 23, 'height': 165}\n"
],
[
"print(student['name'])",
"Simon\n"
],
[
"print(student['age'])",
"23\n"
],
[
"my_list = {1: 23, 2:56, 3:78, 4:14, 5:67}",
"_____no_output_____"
],
[
"my_list[1]",
"_____no_output_____"
],
[
"my_list.keys()",
"_____no_output_____"
],
[
"my_list.values()",
"_____no_output_____"
],
[
"student.keys()",
"_____no_output_____"
],
[
"student.values()",
"_____no_output_____"
],
[
"student['number'] = 111",
"_____no_output_____"
],
[
"print(student)",
"{'number': 111, 'name': 'Simon', 'age': 23, 'height': 165}\n"
],
[
"student.items()",
"_____no_output_____"
],
[
"# Iterate over a list\nnumbers = [23,45,12,67,88,34,11]\nfor x in numbers:\n print(x)",
"23\n45\n12\n67\n88\n34\n11\n"
],
[
"student.keys()",
"_____no_output_____"
],
[
"for k in student.keys():\n print(\"student\", k, student[k])",
"student number 111\nstudent name Simon\nstudent age 23\nstudent height 165\n"
],
[
"# initialize key and value with pairs from student dictionary\nfor key, value in student.items():\n print(key , \"--->\", value)",
"number ---> 111\nname ---> Simon\nage ---> 23\nheight ---> 165\n"
],
[
"print(student)",
"{'number': 111, 'name': 'Simon', 'age': 23, 'height': 165}\n"
],
[
"festival = 4\nprint(festival)",
"4\n"
],
[
"print(key)",
"height\n"
],
[
"# Iterate over two lists at a time\n\n\nfor x,y in zip(a,b):\n print(x , \":\", y)",
"1 : 1\n2 : 4\n3 : 9\n4 : 16\n5 : 25\n"
],
[
"test = zip(a,b)",
"_____no_output_____"
],
[
"type(test)",
"_____no_output_____"
],
[
"# Iterate over two lists at a time\na = [1,2,3,4,5]\nb = [1,4,9,16,25]\nc = [1,8,27,64,125]\nfor x,y,z in zip(a,b,c):\n print(x , \":\", y , \"- \", z)",
"1 : 1 - 1\n2 : 4 - 8\n3 : 9 - 27\n4 : 16 - 64\n5 : 25 - 125\n"
],
[
"# items that do not have a corresponding item in the other list\n# are ignored\na = [1,2,3,4,5,6,7]\nb = [1,4,9,16,25]\nfor x,y in zip(a,b):\n print(x , \":\", y)",
"1 : 1\n2 : 4\n3 : 9\n4 : 16\n5 : 25\n"
],
[
"for x in numbers:\n print(x)",
"23\n45\n12\n67\n88\n34\n11\n"
],
[
"for i in range(10):\n print(i)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n"
],
[
"numbers",
"_____no_output_____"
],
[
"# Iterate over a list using index\nfor index in range(len(numbers)):\n print(index , \":\", numbers[index])",
"0 : 23\n1 : 45\n2 : 12\n3 : 67\n4 : 88\n5 : 34\n6 : 11\n"
]
],
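[
[
"*(Added example, not part of the original notes.)* Dictionaries and `zip` can be combined: `dict(zip(keys, values))` builds a dictionary from two lists. The names and ages below are made-up sample data.",
"_____no_output_____"
]
],
[
[
"# Build a dictionary from two lists using zip()\nnames = [\"Simon\", \"Anna\", \"Omar\"]\nages = [23, 25, 22]\nstudents = dict(zip(names, ages))\nprint(students)\n\nfor name, age in students.items():\n    print(name, \"--->\", age)",
"_____no_output_____"
]
],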
[
[
"## List Comprehension",
"_____no_output_____"
]
],
[
[
"# Pythonic ways to create lists\n# 1\nnumbers = []\nfor i in range(1,101):\n numbers.append(i)",
"_____no_output_____"
],
[
"#1\n'''\nnumbers contains all i's\nsuch that\ni takes value in range(1,101)\n'''\nnumbers = [i for i in range(1, 101)]",
"_____no_output_____"
],
[
"print(numbers)",
"[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100]\n"
],
[
"def square(num):\n return num ** 2",
"_____no_output_____"
],
[
"square(5)",
"_____no_output_____"
],
[
"#2. call a function to make list of squares of numbers from 1-30\nsquared_numbers = [square(i) for i in range(1,31)]",
"_____no_output_____"
],
[
"print(squared_numbers)",
"[1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900]\n"
],
[
"#3) even numbers from 1 to 30\neven_numbers = [i for i in range(1,31) if i%2 == 0]",
"_____no_output_____"
],
[
"print(even_numbers)",
"[2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]\n"
],
[
"my_list = []\nfor i in range(1,31):\n if i%2 == 0:\n my_list.append(i)",
"_____no_output_____"
],
[
"print(my_list)",
"[2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30]\n"
],
[
"#4) squares of even numbers from 1 to 30\nsquared_even_numbers = [square(i) for i in range(1,31) if i%2 == 0]",
"_____no_output_____"
],
[
"print(squared_even_numbers)",
"[4, 16, 36, 64, 100, 144, 196, 256, 324, 400, 484, 576, 676, 784, 900]\n"
],
[
"#5) list of pairs\nnumbers_and_letters = [(chr(a),i) for a in range(65,68) for i in range(1,3)]",
"_____no_output_____"
],
[
"print(numbers_and_squares)",
"[('A', 1), ('A', 2), ('B', 1), ('B', 2), ('C', 1), ('C', 2)]\n"
],
[
"fun_list = []\nfor a in range(65,68):\n for i in range(1,3):\n fun_list.append((chr(a),i))\n #print() # prints a new line\n\nprint(fun_list)",
"[('A', 1), ('A', 2), ('B', 1), ('B', 2), ('C', 1), ('C', 2)]\n"
],
[
"even_numbers",
"_____no_output_____"
],
[
"36 in even_numbers",
"_____no_output_____"
],
[
"35 in even_numbers",
"_____no_output_____"
],
[
"if 36 in even_numbers:\n print(\"List contains 36\")",
"List contains 36\n"
],
[
"even_numbers",
"_____no_output_____"
],
[
"all(even_numbers)",
"_____no_output_____"
],
[
"test = [True, True, False, True]",
"_____no_output_____"
],
[
"all(test)",
"_____no_output_____"
],
[
"not any(test)",
"_____no_output_____"
],
[
"even_numbers",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"import math",
"_____no_output_____"
],
[
"math.factorial(6)",
"_____no_output_____"
],
[
"math.log2(16)",
"_____no_output_____"
],
[
"math.log10(1000)",
"_____no_output_____"
],
[
"math.pi",
"_____no_output_____"
],
[
"def fact(x):\n if x == 0:\n return 1\n elif x < 0:\n return -1\n answer = 1\n multiplier = 1\n while multiplier <= x:\n answer *= multiplier\n multiplier += 1\n return answer",
"_____no_output_____"
],
[
"fact(0)",
"_____no_output_____"
],
[
"fact(-5)",
"_____no_output_____"
],
[
"fact(4)",
"_____no_output_____"
],
[
"fact(6)",
"_____no_output_____"
],
[
"def fact_recur(x):\n if x == 0:\n return 1\n if x < 0 :\n return -1\n return x * fact_recur(x-1)",
"_____no_output_____"
],
[
"fact_recur(5)",
"_____no_output_____"
],
[
"def fibo(n):\n if n == 1:\n return 0\n if n == 2:\n return 1\n return fibo(n-1) + fibo(n-2)",
"_____no_output_____"
],
[
"fibo(2)",
"_____no_output_____"
],
[
"fibo(8)",
"_____no_output_____"
],
[
"fibo(9)",
"_____no_output_____"
],
[
"def simple_interest(principal, years, rate):\n return (principal * years * rate ) /100",
"_____no_output_____"
],
[
"print(simple_interest(5000, 5, 2))",
"500.0\n"
],
[
"# Function with default argument\ndef simple_interest(principal, years, rate=2):\n return (principal * years * rate ) /100",
"_____no_output_____"
],
[
"simple_interest(5000,5)",
"_____no_output_____"
],
[
"def simple_interest(principal=5000, years, rate=2):\n return(principal * years * rate ) /100",
"_____no_output_____"
],
[
"def simple_interest(principal=5000, years=5, rate=2):\n print(\"p = \", principal)\n print(\"y = \", years)\n print(\"r = \", rate)\n return (principal * years * rate) / 100",
"_____no_output_____"
],
[
"simple_interest()",
"p = 5000\ny = 5\nr = 2\n"
],
[
"simple_interest(410)",
"p = 410\ny = 5\nr = 2\n"
],
[
"simple_interest(410, 10)",
"p = 410\ny = 10\nr = 2\n"
],
[
"# Call the function with keyword arguments/parameters\nsimple_interest(principal=7000, rate=10)",
"p = 7000\ny = 5\nr = 10\n"
],
[
"simple_interest(rate=3.4)",
"p = 5000\ny = 5\nr = 3.4\n"
],
[
"fun()\ndef fun():\n print(\"fun\")\n gun()\n\ndef gun():\n print(\"gun\")\n hun()\n \ndef hun():\n print(\"hun\")\n ",
"fun\ngun\nhun\n"
],
[
"def good():\n print(\"good\")\n better()\n# good() will cause error because better() function is not defined yet\ndef better():\n print(\"better\")\n best()\n \ndef best():\n print(\"best\")\ngood()",
"good\nbetter\nbest\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0be082b6d7a15f66b2ac5fa0ad2896c4f105f82 | 98,865 | ipynb | Jupyter Notebook | NLP_Spam_MessagesG.ipynb | Vedant1202/ML_SpamMessagesFilter | b35723a3be7372ac294250b400a6a824425d48d1 | [
"MIT"
] | null | null | null | NLP_Spam_MessagesG.ipynb | Vedant1202/ML_SpamMessagesFilter | b35723a3be7372ac294250b400a6a824425d48d1 | [
"MIT"
] | null | null | null | NLP_Spam_MessagesG.ipynb | Vedant1202/ML_SpamMessagesFilter | b35723a3be7372ac294250b400a6a824425d48d1 | [
"MIT"
] | null | null | null | 46.833254 | 13,164 | 0.546705 | [
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport nltk\n%matplotlib inline",
"_____no_output_____"
],
[
"nltk.download_shell()",
"NLTK Downloader\n---------------------------------------------------------------------------\n d) Download l) List u) Update c) Config h) Help q) Quit\n---------------------------------------------------------------------------\nDownloader> l\n\nPackages:\n [ ] abc................. Australian Broadcasting Commission 2006\n [ ] alpino.............. Alpino Dutch Treebank\n [ ] averaged_perceptron_tagger Averaged Perceptron Tagger\n [ ] averaged_perceptron_tagger_ru Averaged Perceptron Tagger (Russian)\n [ ] basque_grammars..... Grammars for Basque\n [ ] biocreative_ppi..... BioCreAtIvE (Critical Assessment of Information\n Extraction Systems in Biology)\n [ ] bllip_wsj_no_aux.... BLLIP Parser: WSJ Model\n [ ] book_grammars....... Grammars from NLTK Book\n [ ] brown............... Brown Corpus\n [ ] brown_tei........... Brown Corpus (TEI XML Version)\n [ ] cess_cat............ CESS-CAT Treebank\n [ ] cess_esp............ CESS-ESP Treebank\n [ ] chat80.............. Chat-80 Data Files\n [ ] city_database....... City Database\n [ ] cmudict............. The Carnegie Mellon Pronouncing Dictionary (0.6)\n [ ] comparative_sentences Comparative Sentence Dataset\n [ ] comtrans............ ComTrans Corpus Sample\n [ ] conll2000........... CONLL 2000 Chunking Corpus\n [ ] conll2002........... CONLL 2002 Named Entity Recognition Corpus\nHit Enter to continue: \n [ ] conll2007........... Dependency Treebanks from CoNLL 2007 (Catalan\n and Basque Subset)\n [ ] crubadan............ Crubadan Corpus\n [ ] dependency_treebank. Dependency Parsed Treebank\n [ ] dolch............... Dolch Word List\n [ ] europarl_raw........ Sample European Parliament Proceedings Parallel\n Corpus\n [ ] floresta............ Portuguese Treebank\n [ ] framenet_v15........ FrameNet 1.5\n [ ] framenet_v17........ FrameNet 1.7\n [ ] gazetteers.......... Gazeteer Lists\n [ ] genesis............. Genesis Corpus\n [ ] gutenberg........... Project Gutenberg Selections\n [ ] ieer................ NIST IE-ER DATA SAMPLE\n [ ] inaugural........... C-Span Inaugural Address Corpus\n [ ] indian.............. Indian Language POS-Tagged Corpus\n [ ] jeita............... JEITA Public Morphologically Tagged Corpus (in\n ChaSen format)\n [ ] kimmo............... PC-KIMMO Data Files\n [ ] knbc................ KNB Corpus (Annotated blog corpus)\n [ ] large_grammars...... Large context-free and feature-based grammars\n for parser comparison\nHit Enter to continue: \n [ ] lin_thesaurus....... Lin's Dependency Thesaurus\n [ ] mac_morpho.......... MAC-MORPHO: Brazilian Portuguese news text with\n part-of-speech tags\n [ ] machado............. Machado de Assis -- Obra Completa\n [ ] masc_tagged......... MASC Tagged Corpus\n [ ] maxent_ne_chunker... ACE Named Entity Chunker (Maximum entropy)\n [ ] maxent_treebank_pos_tagger Treebank Part of Speech Tagger (Maximum entropy)\n [ ] moses_sample........ Moses Sample Models\n [ ] movie_reviews....... Sentiment Polarity Dataset Version 2.0\n [ ] mte_teip5........... MULTEXT-East 1984 annotated corpus 4.0\n [ ] mwa_ppdb............ The monolingual word aligner (Sultan et al.\n 2015) subset of the Paraphrase Database.\n [ ] names............... Names Corpus, Version 1.3 (1994-03-29)\n [ ] nombank.1.0......... NomBank Corpus 1.0\n [ ] nonbreaking_prefixes Non-Breaking Prefixes (Moses Decoder)\n [ ] nps_chat............ NPS Chat\n [ ] omw................. Open Multilingual Wordnet\n [ ] opinion_lexicon..... Opinion Lexicon\n [ ] panlex_swadesh...... PanLex Swadesh Corpora\n [ ] paradigms........... 
Paradigm Corpus\n [ ] pe08................ Cross-Framework and Cross-Domain Parser\n Evaluation Shared Task\nHit Enter to continue: \n [ ] perluniprops........ perluniprops: Index of Unicode Version 7.0.0\n character properties in Perl\n [ ] pil................. The Patient Information Leaflet (PIL) Corpus\n [ ] pl196x.............. Polish language of the XX century sixties\n [ ] porter_test......... Porter Stemmer Test Files\n [ ] ppattach............ Prepositional Phrase Attachment Corpus\n [ ] problem_reports..... Problem Report Corpus\n [ ] product_reviews_1... Product Reviews (5 Products)\n [ ] product_reviews_2... Product Reviews (9 Products)\n [ ] propbank............ Proposition Bank Corpus 1.0\n [ ] pros_cons........... Pros and Cons\n [ ] ptb................. Penn Treebank\n [ ] punkt............... Punkt Tokenizer Models\n [ ] qc.................. Experimental Data for Question Classification\n [ ] reuters............. The Reuters-21578 benchmark corpus, ApteMod\n version\n [ ] rslp................ RSLP Stemmer (Removedor de Sufixos da Lingua\n Portuguesa)\n [ ] rte................. PASCAL RTE Challenges 1, 2, and 3\n [ ] sample_grammars..... Sample Grammars\n [ ] semcor.............. SemCor 3.0\nHit Enter to continue: \n [ ] senseval............ SENSEVAL 2 Corpus: Sense Tagged Text\n [ ] sentence_polarity... Sentence Polarity Dataset v1.0\n [ ] sentiwordnet........ SentiWordNet\n [ ] shakespeare......... Shakespeare XML Corpus Sample\n [ ] sinica_treebank..... Sinica Treebank Corpus Sample\n [ ] smultron............ SMULTRON Corpus Sample\n [ ] snowball_data....... Snowball Data\n [ ] spanish_grammars.... Grammars for Spanish\n [ ] state_union......... C-Span State of the Union Address Corpus\n [*] stopwords........... Stopwords Corpus\n [ ] subjectivity........ Subjectivity Dataset v1.0\n [ ] swadesh............. Swadesh Wordlists\n [ ] switchboard......... Switchboard Corpus Sample\n [ ] tagsets............. Help on Tagsets\n [ ] timit............... TIMIT Corpus Sample\n [ ] toolbox............. Toolbox Sample Files\n [ ] treebank............ Penn Treebank Sample\n [ ] twitter_samples..... Twitter Samples\n [ ] udhr2............... Universal Declaration of Human Rights Corpus\n (Unicode Version)\n [ ] udhr................ Universal Declaration of Human Rights Corpus\nHit Enter to continue: \n [ ] unicode_samples..... Unicode Samples\n [ ] universal_tagset.... Mappings to the Universal Part-of-Speech Tagset\n [ ] universal_treebanks_v20 Universal Treebanks Version 2.0\n [ ] vader_lexicon....... VADER Sentiment Lexicon\n [ ] verbnet............. VerbNet Lexicon, Version 2.1\n [ ] webtext............. Web Text Corpus\n [ ] wmt15_eval.......... Evaluation data from WMT15\n [ ] word2vec_sample..... Word2Vec Sample\n [ ] wordnet............. WordNet\n [ ] wordnet_ic.......... WordNet-InfoContent\n [ ] words............... Word Lists\n [ ] ycoe................ York-Toronto-Helsinki Parsed Corpus of Old\n English Prose\n\nCollections:\n [P] all-corpora......... All the corpora\n [P] all-nltk............ All packages available on nltk_data gh-pages\n branch\n [P] all................. All packages\n [P] book................ Everything used in the NLTK Book\n [P] popular............. Popular packages\n [ ] tests............... Packages for running tests\nHit Enter to continue: \n [ ] third-party......... 
Third-party data packages\n\n([*] marks installed packages; [P] marks partially installed collections)\n\n---------------------------------------------------------------------------\n d) Download l) List u) Update c) Config h) Help q) Quit\n---------------------------------------------------------------------------\nDownloader> \n\n---------------------------------------------------------------------------\n d) Download l) List u) Update c) Config h) Help q) Quit\n---------------------------------------------------------------------------\nDownloader> l\n\nPackages:\n [ ] abc................. Australian Broadcasting Commission 2006\n [ ] alpino.............. Alpino Dutch Treebank\n [ ] averaged_perceptron_tagger Averaged Perceptron Tagger\n [ ] averaged_perceptron_tagger_ru Averaged Perceptron Tagger (Russian)\n [ ] basque_grammars..... Grammars for Basque\n [ ] biocreative_ppi..... BioCreAtIvE (Critical Assessment of Information\n Extraction Systems in Biology)\n [ ] bllip_wsj_no_aux.... BLLIP Parser: WSJ Model\n [ ] book_grammars....... Grammars from NLTK Book\n [ ] brown............... Brown Corpus\n [ ] brown_tei........... Brown Corpus (TEI XML Version)\n [ ] cess_cat............ CESS-CAT Treebank\n [ ] cess_esp............ CESS-ESP Treebank\n [ ] chat80.............. Chat-80 Data Files\n [ ] city_database....... City Database\n [ ] cmudict............. The Carnegie Mellon Pronouncing Dictionary (0.6)\n [ ] comparative_sentences Comparative Sentence Dataset\n [ ] comtrans............ ComTrans Corpus Sample\n [ ] conll2000........... CONLL 2000 Chunking Corpus\n [ ] conll2002........... CONLL 2002 Named Entity Recognition Corpus\n"
],
[
"messages = [line.rstrip() for line in open('SMSSpamCollection')] ## Put in your dataset here",
"_____no_output_____"
],
[
"len(messages)",
"_____no_output_____"
],
[
"messages[50]",
"_____no_output_____"
],
[
"for msg_no, message in enumerate(messages[:10]):\n print(msg_no, message)\n print('\\n')",
"0 ham\tGo until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...\n\n\n1 ham\tOk lar... Joking wif u oni...\n\n\n2 spam\tFree entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's\n\n\n3 ham\tU dun say so early hor... U c already then say...\n\n\n4 ham\tNah I don't think he goes to usf, he lives around here though\n\n\n5 spam\tFreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv\n\n\n6 ham\tEven my brother is not like to speak with me. They treat me like aids patent.\n\n\n7 ham\tAs per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers. Press *9 to copy your friends Callertune\n\n\n8 spam\tWINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To claim call 09061701461. Claim code KL341. Valid 12 hours only.\n\n\n9 spam\tHad your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with camera for Free! Call The Mobile Update Co FREE on 08002986030\n\n\n"
],
[
"messages = pd.read_csv('SMSSpamCollection', sep='\\t', names=['Label', 'Message'])\nmessages.head()",
"_____no_output_____"
],
[
"messages.describe()",
"_____no_output_____"
],
[
"messages.groupby('Label').describe()",
"_____no_output_____"
],
[
"messages['Length'] = messages['Message'].apply(len)",
"_____no_output_____"
],
[
"messages.head()",
"_____no_output_____"
],
[
"plt.figure(figsize=(16,12))\nsns.distplot(messages['Length'], bins=100, kde=False, color='black')",
"C:\\Users\\VEDANT NANDOSKAR\\Anaconda3\\lib\\site-packages\\matplotlib\\axes\\_axes.py:6462: UserWarning: The 'normed' kwarg is deprecated, and has been replaced by the 'density' kwarg.\n warnings.warn(\"The 'normed' kwarg is deprecated, and has been \"\n"
],
[
"messages['Length'].describe()",
"_____no_output_____"
],
[
"messages[messages['Length'] == 910]['Message'].iloc[0]",
"_____no_output_____"
],
[
"messages.hist(column='Length', by='Label', bins=100, figsize=(16,8))",
"_____no_output_____"
],
[
"import string\nfrom nltk.corpus import stopwords",
"_____no_output_____"
],
[
"def split_intoWords(msg):\n \n ## Firstly remove punctuation\n noPunc = [char for char in msg if char not in string.punctuation]\n ## Then join the sepearate characters in a list\n noPunc = ''.join(noPunc)\n ## Finally return only the significant words\n return [word for word in noPunc.split() if word.lower() not in stopwords.words('english')]\n",
"_____no_output_____"
],
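[
"# (Added sketch, not in the original notebook.)\n# Try the function on a single made-up string first: punctuation and\n# stopwords such as 'it', 'has' and 'and' should be removed.\nsplit_intoWords('Sample message! Notice: it has punctuation and stopwords.')",
"_____no_output_____"
],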
[
"messages['Message'].head(5).apply(split_intoWords)",
"_____no_output_____"
],
[
"from sklearn.feature_extraction.text import CountVectorizer",
"_____no_output_____"
],
[
"bow_transform = CountVectorizer(analyzer=split_intoWords).fit(messages['Message'])",
"_____no_output_____"
],
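[
"# (Added sketch, not in the original notebook.)\n# Transform one example message to see its sparse bag-of-words vector.\nsample_bow = bow_transform.transform([messages['Message'][3]])\nprint(sample_bow)\nprint(sample_bow.shape)",
"_____no_output_____"
],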
[
"print(len(bow_transform.vocabulary_))",
"11425\n"
],
[
"messages_bow = bow_transform.transform(messages['Message'])",
"_____no_output_____"
],
[
"print('Shape of matrix: ',messages_bow.shape)\nprint('Non zero occurences: ',messages_bow.nnz)",
"Shape of matrix: (5572, 11425)\nNon zero occurences: 50548\n"
],
[
"sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))\nprint('sparsity: {}'.format(sparsity))",
"sparsity: 0.07940295412668218\n"
],
[
"from sklearn.feature_extraction.text import TfidfTransformer",
"_____no_output_____"
],
[
"tfidf_transform = TfidfTransformer().fit(messages_bow)",
"_____no_output_____"
],
[
"messages_tfidf = tfidf_transform.transform(messages_bow)",
"_____no_output_____"
],
[
"from sklearn.naive_bayes import MultinomialNB",
"_____no_output_____"
],
[
"spam_detect_model = MultinomialNB().fit(messages_tfidf, messages['Label'])",
"_____no_output_____"
],
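[
"# (Added sketch, not in the original notebook.)\n# Predict a single message by pushing it through the same transformations.\nsingle_bow = bow_transform.transform([messages['Message'][3]])\nsingle_tfidf = tfidf_transform.transform(single_bow)\nprint('predicted:', spam_detect_model.predict(single_tfidf)[0])\nprint('expected:', messages['Label'][3])",
"_____no_output_____"
],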
[
"predictions = spam_detect_model.predict(messages_tfidf)",
"_____no_output_____"
],
[
"predictions",
"_____no_output_____"
],
[
"from sklearn.metrics import confusion_matrix, classification_report",
"_____no_output_____"
],
[
"confusion_matrix(messages['Label'], predictions)",
"_____no_output_____"
],
[
"print(classification_report(messages['Label'], predictions))",
" precision recall f1-score support\n\n ham 0.98 1.00 0.99 4825\n spam 1.00 0.85 0.92 747\n\navg / total 0.98 0.98 0.98 5572\n\n"
],
[
"from sklearn.pipeline import Pipeline",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier",
"_____no_output_____"
],
[
" pipelineRf = Pipeline([\n ('bow', CountVectorizer(analyzer=split_intoWords)),\n ('tfidf', TfidfTransformer()),\n ('classifier', RandomForestClassifier())\n])",
"_____no_output_____"
]
],
[
[
"# Now comparing with Random Forest Classifier instead of MultinomialNB.\n# Stop here if you dont want to do the ahead steps\n# Skip to 'save to csv' step",
"_____no_output_____"
]
],
[
[
"pipelineRf.fit(messages['Message'], messages['Label'])",
"_____no_output_____"
],
[
"predictionsRf = pipelineRf.predict(messages['Message'])",
"_____no_output_____"
],
[
"confusion_matrix(messages['Label'], predictionsRf)",
"_____no_output_____"
],
[
"print(classification_report(messages['Label'], predictionsRf))",
" precision recall f1-score support\n\n ham 1.00 1.00 1.00 4825\n spam 1.00 0.98 0.99 747\n\navg / total 1.00 1.00 1.00 5572\n\n"
],
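[
"# (Added sketch, not in the original notebook.)\n# Both reports above score the models on the same messages they were fit on,\n# which is optimistic. A held-out evaluation with the pipeline could look like:\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.base import clone\n\nmsg_train, msg_test, label_train, label_test = train_test_split(\n    messages['Message'], messages['Label'], test_size=0.3, random_state=101)\n\npipeline_holdout = clone(pipelineRf)\npipeline_holdout.fit(msg_train, label_train)\nprint(classification_report(label_test, pipeline_holdout.predict(msg_test)))",
"_____no_output_____"
],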
[
"predictionsRf",
"_____no_output_____"
],
[
"predictionsDf = pd.DataFrame(predictions, columns=['Naive Bayes Prediction'])",
"_____no_output_____"
],
[
"predictionsDf.head()",
"_____no_output_____"
],
[
"predictionsRfDf = pd.DataFrame(predictionsRf, columns=['Random Forest Predictions'])\npredictionsRfDf.head()",
"_____no_output_____"
],
[
"messagesPred = pd.concat([messages, predictionsDf, predictionsRfDf], axis=1)",
"_____no_output_____"
],
[
"messagesPred",
"_____no_output_____"
],
[
"messagesPred.to_csv('predictions_spamOrHam_messages.csv', header=True, index_label='Index')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0be082e0d89b8b897a12a5739227b9ed18a7615 | 14,849 | ipynb | Jupyter Notebook | day5_hyperopt.ipynb | marta-0/dw_matrix_cars | 188b388000138189d7fb002de49d4c05e8dc1c41 | [
"MIT"
] | null | null | null | day5_hyperopt.ipynb | marta-0/dw_matrix_cars | 188b388000138189d7fb002de49d4c05e8dc1c41 | [
"MIT"
] | null | null | null | day5_hyperopt.ipynb | marta-0/dw_matrix_cars | 188b388000138189d7fb002de49d4c05e8dc1c41 | [
"MIT"
] | null | null | null | 39.916667 | 565 | 0.52219 | [
[
[
"# Using Hyperopt to optimize XGB model hyperparameters",
"_____no_output_____"
],
[
"## Importing the libraries and loading the data",
"_____no_output_____"
]
],
[
[
"#!pip install --upgrade tables\n#!pip install eli5\n#!pip install xgboost\n#!pip install hyperopt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\n\nimport xgboost as xgb\n\nfrom sklearn.metrics import mean_absolute_error as mae\nfrom sklearn.model_selection import cross_val_score\n\nfrom hyperopt import hp, fmin, tpe, STATUS_OK\n\nimport eli5\nfrom eli5.sklearn import PermutationImportance",
"_____no_output_____"
],
[
"cd \"/content/drive/My Drive/Colab Notebooks/dw_matrix_cars\"",
"/content/drive/My Drive/Colab Notebooks/dw_matrix_cars\n"
],
[
"df = pd.read_hdf('data/car.h5')\ndf.shape",
"_____no_output_____"
]
],
[
[
"## Feature Engineering",
"_____no_output_____"
]
],
[
[
"SUFFIX_CAT = '_cat'\n\nfor feat in df.columns:\n if isinstance(df[feat][0], list): continue\n\n factorized_values = df[feat].factorize()[0]\n if SUFFIX_CAT in feat:\n df[feat] = factorized_values\n else:\n df[feat + SUFFIX_CAT] = factorized_values",
"_____no_output_____"
],
[
"df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))\ndf['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]) )\ndf['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int( str(x).split('cm')[0].replace(' ','')) )",
"_____no_output_____"
],
[
"def run_model(model, feats):\n X = df[feats].values\n y = df['price_value'].values\n\n scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')\n \n return np.mean(scores), np.std(scores)",
"_____no_output_____"
],
[
"feats = ['param_napęd_cat', 'param_rok-produkcji', 'param_stan_cat', 'param_skrzynia-biegów_cat', 'param_faktura-vat_cat', 'param_moc', 'param_marka-pojazdu_cat', 'feature_kamera-cofania_cat', 'param_typ_cat', 'param_pojemność-skokowa', 'seller_name_cat', 'feature_wspomaganie-kierownicy_cat', 'param_model-pojazdu_cat', 'param_wersja_cat', 'param_kod-silnika_cat', 'feature_system-start-stop_cat', 'feature_asystent-pasa-ruchu_cat', 'feature_czujniki-parkowania-przednie_cat', 'feature_łopatki-zmiany-biegów_cat', 'feature_regulowane-zawieszenie_cat']\n\nxgb_params = {\n 'max_depth': 5,\n 'n_estimators': 50,\n 'learning_rate': 0.1,\n 'seed': 0\n}\nrun_model(xgb.XGBRegressor(**xgb_params), feats)",
"[08:03:36] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[08:03:41] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n[08:03:45] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.\n"
]
],
[
[
"## Hyperopt",
"_____no_output_____"
]
],
[
[
"def obj_func(params):\n print(\"Training with params: \")\n print(params)\n\n mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)\n\n return {'loss': np.abs(mean_mae), 'status': STATUS_OK}\n\n\n# space\nxgb_reg_params = {\n 'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),\n 'max_depth': hp.choice('max_depth', np.arange(5, 16, 2, dtype=int)),\n 'subsample': hp.quniform('subsample', 0.5, 1, 0.05),\n 'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),\n 'objective': 'reg:squarederror',\n 'n_estimators': 100,\n 'seed': 0\n}\n\n\n# run\nbest = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)\n\nbest",
"Training with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraining with params: \n{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.3, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}\nTraining with params: \n{'colsample_bytree': 0.65, 'learning_rate': 0.15000000000000002, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65}\nTraining with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.2, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\nTraining with params: \n{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.2, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraining with params: \n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}\nTraining with params: \n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.2, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraining with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.15000000000000002, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraining with params: \n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.25, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraining with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.25, 'max_depth': 5, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraining with params: \n{'colsample_bytree': 0.55, 'learning_rate': 0.3, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraining with params: \n{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.05, 'max_depth': 5, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.5}\nTraining with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.15000000000000002, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.7000000000000001}\nTraining with params: \n{'colsample_bytree': 0.6000000000000001, 'learning_rate': 0.3, 'max_depth': 11, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}\nTraining with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.15000000000000002, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65}\nTraining with params: \n{'colsample_bytree': 0.8, 'learning_rate': 0.3, 'max_depth': 5, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.55}\nTraining with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8500000000000001}\nTraining with params: \n{'colsample_bytree': 0.5, 'learning_rate': 0.25, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.8}\nTraining with params: \n{'colsample_bytree': 0.9, 'learning_rate': 0.2, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 
'subsample': 0.7000000000000001}\nTraining with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.5}\nTraining with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 7, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\nTraining with params: \n{'colsample_bytree': 0.7000000000000001, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraining with params: \n{'colsample_bytree': 1.0, 'learning_rate': 0.05, 'max_depth': 7, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9}\nTraining with params: \n{'colsample_bytree': 0.65, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.9500000000000001}\nTraining with params: \n{'colsample_bytree': 0.75, 'learning_rate': 0.1, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}\n100%|██████████| 25/25 [25:12<00:00, 69.63s/it, best loss: 7454.406956935974]\n"
]
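,
[
"# (Added sketch, not in the original notebook.)\n# For hp.choice parameters, fmin() returns the *index* of the chosen option.\n# hyperopt's space_eval maps the result back to the actual parameter values:\nfrom hyperopt import space_eval\n\nbest_params = space_eval(xgb_reg_params, best)\nprint(best_params)",
"_____no_output_____"
]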
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0be1e08a39b0842d56e950afdfc0ae8c20d9f07 | 67,842 | ipynb | Jupyter Notebook | site/public/courses/DS-1.1/Notebooks/StatisticalAnalysis.ipynb | KitsuneNoctus/makeschool | 5eec1a18146abf70bb78b4ee3d301f6a43c9ede4 | [
"MIT"
] | 1 | 2021-08-24T20:22:19.000Z | 2021-08-24T20:22:19.000Z | site/public/courses/DS-1.1/Notebooks/StatisticalAnalysis.ipynb | KitsuneNoctus/makeschool | 5eec1a18146abf70bb78b4ee3d301f6a43c9ede4 | [
"MIT"
] | null | null | null | site/public/courses/DS-1.1/Notebooks/StatisticalAnalysis.ipynb | KitsuneNoctus/makeschool | 5eec1a18146abf70bb78b4ee3d301f6a43c9ede4 | [
"MIT"
] | null | null | null | 157.041667 | 23,425 | 0.894844 | [
[
[
"## Statistical Analysis\n\nWe have learned null hypothesis, and compared two-sample test to check whether two samples are the same or not\n\nTo add more to statistical analysis, the follwoing topics should be covered:\n\n1- Approxite the histogram of data with combination of Gaussian (Normal) distribution functions:\n\n Gaussian Mixture Model (GMM)\n Kernel Density Estimation (KDE)\n \n2- Correlation among features\n",
"_____no_output_____"
],
[
"## Review\n\nWrite a function that computes and plot histogram of a given data\n\nHistogram is one method for estimating density",
"_____no_output_____"
],
[
"## What is Gaussian Mixture Model (GMM)?\n\nGMM is a probabilistic model for representing normally distributed subpopulations within an overall population\n\n<img src=\"Images/gmm_fig.png\" width=\"300\">\n\n$p(x) = \\sum_{i = 1}^{K} w_i \\ \\mathcal{N}(x \\ | \\ \\mu_i,\\ \\sigma_i)$\n\n$\\sum_{i=1}^{K} w_i = 1$\n\nhttps://brilliant.org/wiki/gaussian-mixture-model/\n",
"_____no_output_____"
],
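    [
      "To make the mixture formula concrete, here is a small sketch (with assumed example weights, means and standard deviations) that evaluates and plots a two-component mixture density using `scipy.stats.norm`:\n\n```python\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\n\n# assumed example parameters w_i, mu_i, sigma_i for K = 2 components\nweights = [0.3, 0.7]\nmeans = [-5, 2]\nstds = [1, 3]\n\nxs = np.linspace(-12, 12, 500)\n# p(x) = sum_i w_i * N(x | mu_i, sigma_i)\npdf = sum(w * norm.pdf(xs, mu, sd) for w, mu, sd in zip(weights, means, stds))\n\nplt.plot(xs, pdf)\nplt.title('two-component Gaussian mixture density')\nplt.show()\n```",
      "_____no_output_____"
    ],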
[
"## Activity : Fit a GMM to a given data sample \n\nTask: \n\n1- Generate the concatination of the random variables as follows:\n\n`x_1 = np.random.normal(-5, 1, 3000)\nx_2 = np.random.normal(2, 3, 7000) \nx = np.concatenate((x_1, x_2))`\n\n2- Plot the histogram of `x`\n\n3- Obtain the weights, mean and variances of each Gassuian\n\nSteps needed: \n`from sklearn import mixture \ngmm = mixture.GaussianMixture(n_components=2)\ngmm.fit(x.reshape(-1,1))`",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import mixture\n\n# Generate data samples and plot its histogram\nx_1 = np.random.normal(-5, 1, 3000)\nx_2 = np.random.normal(2, 3, 7000) \nx = np.concatenate((x_1, x_2))\nplt.hist(x, bins=20, density=1)\nplt.show()\n\n# Define a GMM model and obtain its parameters\ngmm = mixture.GaussianMixture(n_components=2)\ngmm.fit(x.reshape(-1,1))\nprint(gmm.means_)\nprint(gmm.covariances_)\nprint(gmm.weights_)\n\n\n",
"_____no_output_____"
]
],
[
[
"## The GMM has learn the probability density function of our data sample\n\nLets the model generate sample from it model:\n\n",
"_____no_output_____"
]
],
[
[
"z = gmm.sample(10000)\nplt.hist(z[0], bins=20, density=1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Kernel Density Estimation (KDE)\n\nKernel density estimation (KDE) is a non-parametric way to estimate the probability density function of a random variable. In other words the aim of KDE is to find probability density function (PDF) for a given dataset.\n\nApproximate the pdf of dataset:\n\n$p(x) = \\frac{1}{Nh}\\sum_{i = 1}^{N} \\ K(\\frac{x - x_i}{h})$\n\nwhere $h$ is a bandwidth and $N$ is the number of data points",
"_____no_output_____"
],
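    [
      "Before reaching for scikit-learn, the formula above can be implemented directly. This NumPy sketch (with an assumed Gaussian kernel and bandwidth) mirrors the definition term by term:\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef gaussian_kernel(u):\n    # standard normal kernel K(u)\n    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)\n\ndef kde_pdf(grid, data, h=0.6):\n    # p(x) = (1 / (N*h)) * sum_i K((x - x_i) / h)\n    n = len(data)\n    return np.array([gaussian_kernel((x - data) / h).sum() / (n * h) for x in grid])\n\ndata = np.random.normal(0, 1, 500)\ngrid = np.linspace(-4, 4, 200)\nplt.plot(grid, kde_pdf(grid, data))\nplt.show()\n```",
      "_____no_output_____"
    ],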
[
"## Activity: Apply KDE on a given data sample\n\nTask: Apply KDE on previous generated sample data `x`\n\nHint: use \n\n`kde = KernelDensity(kernel='gaussian', bandwidth=0.6)`",
"_____no_output_____"
]
],
[
[
"from sklearn.neighbors import KernelDensity\n\nkde = KernelDensity(kernel='gaussian', bandwidth=0.6)\nkde.fit(x.reshape(-1,1))\n\ns = np.linspace(np.min(x), np.max(x))\nlog_pdf = kde.score_samples(s.reshape(-1,1))\nplt.plot(s, np.exp(log_pdf))",
"_____no_output_____"
],
[
"m = kde.sample(10000)\nplt.hist(m, bins=20, density=1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## KDE can learn handwitten digits distribution and generate new digits\n\nhttp://scikit-learn.org/stable/auto_examples/neighbors/plot_digits_kde_sampling.html",
"_____no_output_____"
],
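    [
      "A condensed sketch of the idea behind that linked example (the bandwidth here is picked by hand instead of by grid search, so the details differ from the official example):\n\n```python\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import load_digits\nfrom sklearn.decomposition import PCA\nfrom sklearn.neighbors import KernelDensity\n\ndigits = load_digits()\n\n# project to a lower-dimensional space before estimating the density\npca = PCA(n_components=15, whiten=True)\nproj = pca.fit_transform(digits.data)\n\n# fit a KDE in the projected space\nkde = KernelDensity(kernel='gaussian', bandwidth=3.0).fit(proj)\n\n# sample new points and map them back to 8x8 images\nnew_proj = kde.sample(16, random_state=0)\nnew_digits = pca.inverse_transform(new_proj)\n\nfig, axes = plt.subplots(2, 8, figsize=(8, 2))\nfor ax, img in zip(axes.ravel(), new_digits):\n    ax.imshow(img.reshape(8, 8), cmap='gray')\n    ax.axis('off')\nplt.show()\n```",
      "_____no_output_____"
    ],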
[
"## Correlation \n\nCorrelation is used to test relationships between quantitative variables\n\nSome examples of data that have a high correlation:\n\n1- Your caloric intake and your weight\n\n2- The amount of time your study and your GPA\n\nQuestion what is negative correlation?\n\nCorrelations are useful because we can find out what relationship variables have, we can make predictions about future behavior. ",
"_____no_output_____"
],
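    [
      "To answer the question above: a negative correlation means that one variable tends to decrease as the other increases. A tiny made-up example:\n\n```python\nimport numpy as np\n\nhours_of_tv = np.array([1, 2, 3, 4, 5, 6])\nexam_score = np.array([95, 90, 82, 75, 70, 60])\n\n# Pearson r is close to -1: scores drop as TV hours rise\nprint(np.corrcoef(hours_of_tv, exam_score)[0, 1])\n```",
      "_____no_output_____"
    ],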
[
"## Activity: Obtain the correlation among all features of iris dataset\n\n1- Review the iris dataset. What are the features? \n\n2- Eliminate two columns `['Id', 'Species']`\n\n3- Compute the correlation among all features. \n\nHint: Use `df.corr()`\n\n4- Plot the correlation by heatmap and corr plot in Seaborn -> `sns.heatmap`, `sns.corrplot`\n\n5- Write a function that computes the correlation (Pearson formula)\n\nHint: https://en.wikipedia.org/wiki/Pearson_correlation_coefficient\n\n6- Compare your answer with `scipy.stats.pearsonr` for any given two features\n",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport scipy.stats\nimport seaborn as sns\nimport scipy.stats\n\ndf = pd.read_csv('Iris.csv')\ndf = df.drop(columns=['Id', 'Species'])\nsns.heatmap(df.corr(), annot=True)\n\ndef pearson_corr(x, y):\n x_mean = np.mean(x)\n y_mean = np.mean(y)\n num = [(i - x_mean)*(j - y_mean) for i,j in zip(x,y)]\n den_1 = [(i - x_mean)**2 for i in x]\n den_2 = [(j - y_mean)**2 for j in y]\n correlation_x_y = np.sum(num)/np.sqrt(np.sum(den_1))/np.sqrt(np.sum(den_2))\n return correlation_x_y\n\nprint(pearson_corr(df['SepalLengthCm'], df['PetalLengthCm']))\nprint(scipy.stats.pearsonr(df['SepalLengthCm'], df['PetalLengthCm']))",
"0.8717541573048714\n(0.8717541573048712, 1.0384540627941809e-47)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0be2cbe975ad3f3e2db5d1fb8b14ff9c242090b | 79,662 | ipynb | Jupyter Notebook | nbs/05_data.transforms.ipynb | Peshlex/fastai | 07c481487e459aac97342bc83f6219fce3c5682c | [
"Apache-2.0"
] | null | null | null | nbs/05_data.transforms.ipynb | Peshlex/fastai | 07c481487e459aac97342bc83f6219fce3c5682c | [
"Apache-2.0"
] | null | null | null | nbs/05_data.transforms.ipynb | Peshlex/fastai | 07c481487e459aac97342bc83f6219fce3c5682c | [
"Apache-2.0"
] | null | null | null | 44.379944 | 3,736 | 0.681178 | [
[
[
"#default_exp data.transforms",
"_____no_output_____"
],
[
"#export\nfrom fastai2.torch_basics import *\nfrom fastai2.data.core import *\nfrom fastai2.data.load import *\nfrom fastai2.data.external import *\n\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"from nbdev.showdoc import *",
"_____no_output_____"
]
],
[
[
"# Helper functions for processing data and basic transforms\n\n> Functions for getting, splitting, and labeling data, as well as generic transforms",
"_____no_output_____"
],
[
"## Get, split, and label",
"_____no_output_____"
],
[
"For most data source creation we need functions to get a list of items, split them in to train/valid sets, and label them. fastai provides functions to make each of these steps easy (especially when combined with `fastai.data.blocks`).",
"_____no_output_____"
],
[
"### Get",
"_____no_output_____"
],
[
"First we'll look at functions that *get* a list of items (generally file names).\n\nWe'll use *tiny MNIST* (a subset of MNIST with just two classes, `7`s and `3`s) for our examples/tests throughout this page.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\n(path/'train').ls()",
"_____no_output_____"
],
[
"# export\ndef _get_files(p, fs, extensions=None):\n p = Path(p)\n res = [p/f for f in fs if not f.startswith('.')\n and ((not extensions) or f'.{f.split(\".\")[-1].lower()}' in extensions)]\n return res",
"_____no_output_____"
],
[
"# export\ndef get_files(path, extensions=None, recurse=True, folders=None, followlinks=True):\n \"Get all the files in `path` with optional `extensions`, optionally with `recurse`, only in `folders`, if specified.\"\n path = Path(path)\n folders=L(folders)\n extensions = setify(extensions)\n extensions = {e.lower() for e in extensions}\n if recurse:\n res = []\n for i,(p,d,f) in enumerate(os.walk(path, followlinks=followlinks)): # returns (dirpath, dirnames, filenames)\n if len(folders) !=0 and i==0: d[:] = [o for o in d if o in folders]\n else: d[:] = [o for o in d if not o.startswith('.')]\n if len(folders) !=0 and i==0 and '.' not in folders: continue\n res += _get_files(p, f, extensions)\n else:\n f = [o.name for o in os.scandir(path) if o.is_file()]\n res = _get_files(path, f, extensions)\n return L(res)",
"_____no_output_____"
]
],
[
[
"This is the most general way to grab a bunch of file names from disk. If you pass `extensions` (including the `.`) then returned file names are filtered by that list. Only those files directly in `path` are included, unless you pass `recurse`, in which case all child folders are also searched recursively. `folders` is an optional list of directories to limit the search to.",
"_____no_output_____"
]
],
[
[
"t3 = get_files(path/'train'/'3', extensions='.png', recurse=False)\nt7 = get_files(path/'train'/'7', extensions='.png', recurse=False)\nt = get_files(path/'train', extensions='.png', recurse=True)\ntest_eq(len(t), len(t3)+len(t7))\ntest_eq(len(get_files(path/'train'/'3', extensions='.jpg', recurse=False)),0)\ntest_eq(len(t), len(get_files(path, extensions='.png', recurse=True, folders='train')))\nt",
"_____no_output_____"
],
[
"#hide\ntest_eq(len(get_files(path/'train'/'3', recurse=False)),346)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders=['train', 'test'])),729)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders='train')),709)\ntest_eq(len(get_files(path, extensions='.png', recurse=True, folders='training')),0)",
"_____no_output_____"
]
],
[
[
"It's often useful to be able to create functions with customized behavior. `fastai.data` generally uses functions named as CamelCase verbs ending in `er` to create these functions. `FileGetter` is a simple example of such a function creator.",
"_____no_output_____"
]
],
[
[
"#export\ndef FileGetter(suf='', extensions=None, recurse=True, folders=None):\n \"Create `get_files` partial function that searches path suffix `suf`, only in `folders`, if specified, and passes along args\"\n def _inner(o, extensions=extensions, recurse=recurse, folders=folders):\n return get_files(o/suf, extensions, recurse, folders)\n return _inner",
"_____no_output_____"
],
[
"fpng = FileGetter(extensions='.png', recurse=False)\ntest_eq(len(t7), len(fpng(path/'train'/'7')))\ntest_eq(len(t), len(fpng(path/'train', recurse=True)))\nfpng_r = FileGetter(extensions='.png', recurse=True)\ntest_eq(len(t), len(fpng_r(path/'train')))",
"_____no_output_____"
],
[
"#export\nimage_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))",
"_____no_output_____"
],
[
"#export\ndef get_image_files(path, recurse=True, folders=None):\n \"Get image files in `path` recursively, only in `folders`, if specified.\"\n return get_files(path, extensions=image_extensions, recurse=recurse, folders=folders)",
"_____no_output_____"
]
],
[
[
"This is simply `get_files` called with a list of standard image extensions.",
"_____no_output_____"
]
],
[
[
"test_eq(len(t), len(get_image_files(path, recurse=True, folders='train')))",
"_____no_output_____"
],
[
"#export\ndef ImageGetter(suf='', recurse=True, folders=None):\n \"Create `get_image_files` partial function that searches path suffix `suf` and passes along `kwargs`, only in `folders`, if specified.\"\n def _inner(o, recurse=recurse, folders=folders): return get_image_files(o/suf, recurse, folders)\n return _inner",
"_____no_output_____"
]
],
[
[
"Same as `FileGetter`, but for image extensions.",
"_____no_output_____"
]
],
[
[
"test_eq(len(get_files(path/'train', extensions='.png', recurse=True, folders='3')),\n len(ImageGetter( 'train', recurse=True, folders='3')(path)))",
"_____no_output_____"
],
[
"#export\ndef get_text_files(path, recurse=True, folders=None):\n \"Get text files in `path` recursively, only in `folders`, if specified.\"\n return get_files(path, extensions=['.txt'], recurse=recurse, folders=folders)",
"_____no_output_____"
],
[
"#export\nclass ItemGetter(ItemTransform):\n \"Creates a proper transform that applies `itemgetter(i)` (even on a tuple)\"\n _retain = False\n def __init__(self, i): self.i = i\n def encodes(self, x): return x[self.i]",
"_____no_output_____"
],
[
"test_eq(ItemGetter(1)((1,2,3)), 2)\ntest_eq(ItemGetter(1)(L(1,2,3)), 2)\ntest_eq(ItemGetter(1)([1,2,3]), 2)\ntest_eq(ItemGetter(1)(np.array([1,2,3])), 2)",
"_____no_output_____"
],
[
"#export\nclass AttrGetter(ItemTransform):\n \"Creates a proper transform that applies `attrgetter(nm)` (even on a tuple)\"\n _retain = False\n def __init__(self, nm, default=None): store_attr(self, 'nm,default')\n def encodes(self, x): return getattr(x, self.nm, self.default)",
"_____no_output_____"
],
[
"test_eq(AttrGetter('shape')(torch.randn([4,5])), [4,5])\ntest_eq(AttrGetter('shape', [0])([4,5]), [0])",
"_____no_output_____"
]
],
[
[
"### Split",
"_____no_output_____"
],
[
"The next set of functions are used to *split* data into training and validation sets. The functions return two lists - a list of indices or masks for each of training and validation sets.",
"_____no_output_____"
]
],
[
[
"# export\ndef RandomSplitter(valid_pct=0.2, seed=None):\n \"Create function that splits `items` between train/val with `valid_pct` randomly.\"\n def _inner(o):\n if seed is not None: torch.manual_seed(seed)\n rand_idx = L(int(i) for i in torch.randperm(len(o)))\n cut = int(valid_pct * len(o))\n return rand_idx[cut:],rand_idx[:cut]\n return _inner",
"_____no_output_____"
],
[
"src = list(range(30))\nf = RandomSplitter(seed=42)\ntrn,val = f(src)\nassert 0<len(trn)<len(src)\nassert all(o not in val for o in trn)\ntest_eq(len(trn), len(src)-len(val))\n# test random seed consistency\ntest_eq(f(src)[0], trn)",
"_____no_output_____"
]
],
[
[
"Use scikit-learn train_test_split. This allow to *split* items in a stratified fashion (uniformely according to the ‘labels‘ distribution)",
"_____no_output_____"
]
],
[
[
"# export\ndef TrainTestSplitter(test_size=0.2, random_state=None, stratify=None, train_size=None, shuffle=True):\n \"Split `items` into random train and test subsets using sklearn train_test_split utility.\"\n def _inner(o, **kwargs):\n train, valid = train_test_split(range(len(o)), test_size=test_size, random_state=random_state, stratify=stratify, train_size=train_size, shuffle=shuffle)\n return L(train), L(valid)\n return _inner",
"_____no_output_____"
],
[
"src = list(range(30))\nlabels = [0] * 20 + [1] * 10\ntest_size = 0.2\n\nf = TrainTestSplitter(test_size=test_size, random_state=42, stratify=labels)\ntrn,val = f(src)\nassert 0<len(trn)<len(src)\nassert all(o not in val for o in trn)\ntest_eq(len(trn), len(src)-len(val))\n\n# test random seed consistency\ntest_eq(f(src)[0], trn)\n\n# test labels distribution consistency\n# there should be test_size % of zeroes and ones respectively in the validation set\ntest_eq(len([t for t in val if t < 20]) / 20, test_size)\ntest_eq(len([t for t in val if t > 20]) / 10, test_size)",
"_____no_output_____"
],
[
"#export\ndef IndexSplitter(valid_idx):\n \"Split `items` so that `val_idx` are in the validation set and the others in the training set\"\n def _inner(o):\n train_idx = np.setdiff1d(np.array(range_of(o)), np.array(valid_idx))\n return L(train_idx, use_list=True), L(valid_idx, use_list=True)\n return _inner",
"_____no_output_____"
],
[
"items = list(range(10))\nsplitter = IndexSplitter([3,7,9])\ntest_eq(splitter(items),[[0,1,2,4,5,6,8],[3,7,9]])",
"_____no_output_____"
],
[
"# export\ndef _grandparent_idxs(items, name):\n def _inner(items, name): return mask2idxs(Path(o).parent.parent.name == name for o in items)\n return [i for n in L(name) for i in _inner(items,n)]",
"_____no_output_____"
],
[
"# export\ndef GrandparentSplitter(train_name='train', valid_name='valid'):\n \"Split `items` from the grand parent folder names (`train_name` and `valid_name`).\"\n def _inner(o):\n return _grandparent_idxs(o, train_name),_grandparent_idxs(o, valid_name)\n return _inner",
"_____no_output_____"
],
[
"fnames = [path/'train/3/9932.png', path/'valid/7/7189.png', \n path/'valid/7/7320.png', path/'train/7/9833.png', \n path/'train/3/7666.png', path/'valid/3/925.png',\n path/'train/7/724.png', path/'valid/3/93055.png']\nsplitter = GrandparentSplitter()\ntest_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])",
"_____no_output_____"
],
[
"fnames2 = fnames + [path/'test/3/4256.png', path/'test/7/2345.png', path/'valid/7/6467.png']\nsplitter = GrandparentSplitter(train_name=('train', 'valid'), valid_name='test')\ntest_eq(splitter(fnames2),[[0,3,4,6,1,2,5,7,10],[8,9]])",
"_____no_output_____"
],
[
"# export\ndef FuncSplitter(func):\n \"Split `items` by result of `func` (`True` for validation, `False` for training set).\"\n def _inner(o):\n val_idx = mask2idxs(func(o_) for o_ in o)\n return IndexSplitter(val_idx)(o)\n return _inner",
"_____no_output_____"
],
[
"splitter = FuncSplitter(lambda o: Path(o).parent.parent.name == 'valid')\ntest_eq(splitter(fnames),[[0,3,4,6],[1,2,5,7]])",
"_____no_output_____"
],
[
"# export\ndef MaskSplitter(mask):\n \"Split `items` depending on the value of `mask`.\"\n def _inner(o): return IndexSplitter(mask2idxs(mask))(o)\n return _inner",
"_____no_output_____"
],
[
"items = list(range(6))\nsplitter = MaskSplitter([True,False,False,True,False,True])\ntest_eq(splitter(items),[[1,2,4],[0,3,5]])",
"_____no_output_____"
],
[
"# export\ndef FileSplitter(fname):\n \"Split `items` by providing file `fname` (contains names of valid items separated by newline).\"\n valid = Path(fname).read().split('\\n')\n def _func(x): return x.name in valid\n def _inner(o): return FuncSplitter(_func)(o)\n return _inner",
"_____no_output_____"
],
[
"with tempfile.TemporaryDirectory() as d:\n fname = Path(d)/'valid.txt'\n fname.write('\\n'.join([Path(fnames[i]).name for i in [1,3,4]]))\n splitter = FileSplitter(fname)\n test_eq(splitter(fnames),[[0,2,5,6,7],[1,3,4]])",
"_____no_output_____"
],
[
"# export\ndef ColSplitter(col='is_valid'):\n \"Split `items` (supposed to be a dataframe) by value in `col`\"\n def _inner(o):\n assert isinstance(o, pd.DataFrame), \"ColSplitter only works when your items are a pandas DataFrame\"\n valid_idx = (o.iloc[:,col] if isinstance(col, int) else o[col]).values\n return IndexSplitter(mask2idxs(valid_idx))(o)\n return _inner",
"_____no_output_____"
],
[
"df = pd.DataFrame({'a': [0,1,2,3,4], 'b': [True,False,True,True,False]})\nsplits = ColSplitter('b')(df)\ntest_eq(splits, [[1,4], [0,2,3]])\n#Works with strings or index\nsplits = ColSplitter(1)(df)\ntest_eq(splits, [[1,4], [0,2,3]])",
"_____no_output_____"
],
[
"# export\ndef RandomSubsetSplitter(train_sz, valid_sz, seed=None):\n \"Take randoms subsets of `splits` with `train_sz` and `valid_sz`\"\n assert 0 < train_sz < 1\n assert 0 < valid_sz < 1\n assert train_sz + valid_sz <= 1.\n\n def _inner(o):\n if seed is not None: torch.manual_seed(seed)\n train_len,valid_len = int(len(o)*train_sz),int(len(o)*valid_sz)\n idxs = L(int(i) for i in torch.randperm(len(o)))\n return idxs[:train_len],idxs[train_len:train_len+valid_len]\n return _inner",
"_____no_output_____"
],
[
"items = list(range(100))\nvalid_idx = list(np.arange(70,100))\nsplits = RandomSubsetSplitter(0.3, 0.1)(items)\ntest_eq(len(splits[0]), 30)\ntest_eq(len(splits[1]), 10)",
"_____no_output_____"
]
],
[
[
"### Label",
"_____no_output_____"
],
[
"The final set of functions is used to *label* a single item of data.",
"_____no_output_____"
]
],
[
[
"# export\ndef parent_label(o):\n \"Label `item` with the parent folder name.\"\n return Path(o).parent.name",
"_____no_output_____"
]
],
[
[
"Note that `parent_label` doesn't have anything customize, so it doesn't return a function - you can just use it directly.",
"_____no_output_____"
]
],
[
[
"test_eq(parent_label(fnames[0]), '3')\ntest_eq(parent_label(\"fastai_dev/dev/data/mnist_tiny/train/3/9932.png\"), '3')\n[parent_label(o) for o in fnames]",
"_____no_output_____"
],
[
"#hide\n#test for MS Windows when os.path.sep is '\\\\' instead of '/'\ntest_eq(parent_label(os.path.join(\"fastai_dev\",\"dev\",\"data\",\"mnist_tiny\",\"train\", \"3\", \"9932.png\") ), '3')",
"_____no_output_____"
],
[
"# export\nclass RegexLabeller():\n \"Label `item` with regex `pat`.\"\n def __init__(self, pat, match=False):\n self.pat = re.compile(pat)\n self.matcher = self.pat.match if match else self.pat.search\n\n def __call__(self, o):\n res = self.matcher(str(o))\n assert res,f'Failed to find \"{self.pat}\" in \"{o}\"'\n return res.group(1)",
"_____no_output_____"
]
],
[
[
"`RegexLabeller` is a very flexible function since it handles any regex search of the stringified item. Pass `match=True` to use `re.match` (i.e. check only start of string), or `re.search` otherwise (default).\n\nFor instance, here's an example the replicates the previous `parent_label` results.",
"_____no_output_____"
]
],
[
[
"f = RegexLabeller(fr'{os.path.sep}(\\d){os.path.sep}')\ntest_eq(f(fnames[0]), '3')\n[f(o) for o in fnames]",
"_____no_output_____"
],
[
"f = RegexLabeller(r'(\\d*)', match=True)\ntest_eq(f(fnames[0].name), '9932')",
"_____no_output_____"
],
[
"#export\nclass ColReader():\n \"Read `cols` in `row` with potential `pref` and `suff`\"\n store_attrs = 'cols'\n def __init__(self, cols, pref='', suff='', label_delim=None):\n store_attr(self, 'suff,label_delim')\n self.pref = str(pref) + os.path.sep if isinstance(pref, Path) else pref\n self.cols = L(cols)\n\n def _do_one(self, r, c):\n o = r[c] if isinstance(c, int) else r[c] if c=='name' else getattr(r, c)\n if len(self.pref)==0 and len(self.suff)==0 and self.label_delim is None: return o\n if self.label_delim is None: return f'{self.pref}{o}{self.suff}'\n else: return o.split(self.label_delim) if len(o)>0 else []\n\n def __call__(self, o):\n if len(self.cols) == 1: return self._do_one(o, self.cols[0])\n return L(self._do_one(o, c) for c in self.cols)\n\n @property\n def name(self): return f\"ColReader -- {attrdict(self, *self.store_attrs.split(','))}\"",
"_____no_output_____"
]
],
[
[
"`cols` can be a list of column names or a list of indices (or a mix of both). If `label_delim` is passed, the result is split using it.",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame({'a': 'a b c d'.split(), 'b': ['1 2', '0', '', '1 2 3']})\nf = ColReader('a', pref='0', suff='1')\ntest_eq([f(o) for o in df.itertuples()], '0a1 0b1 0c1 0d1'.split())\n\nf = ColReader('b', label_delim=' ')\ntest_eq([f(o) for o in df.itertuples()], [['1', '2'], ['0'], [], ['1', '2', '3']])\n\ndf['a1'] = df['a']\nf = ColReader(['a', 'a1'], pref='0', suff='1')\ntest_eq([f(o) for o in df.itertuples()], [L('0a1', '0a1'), L('0b1', '0b1'), L('0c1', '0c1'), L('0d1', '0d1')])\n\ndf = pd.DataFrame({'a': [L(0,1), L(2,3,4), L(5,6,7)]})\nf = ColReader('a')\ntest_eq([f(o) for o in df.itertuples()], [L(0,1), L(2,3,4), L(5,6,7)])\n\ndf['name'] = df['a']\nf = ColReader('name')\ntest_eq([f(df.iloc[0,:])], [L(0,1)])",
"_____no_output_____"
]
],
[
[
"## Categorize -",
"_____no_output_____"
]
],
[
[
"#export\nclass CategoryMap(CollBase):\n \"Collection of categories with the reverse mapping in `o2i`\"\n def __init__(self, col, sort=True, add_na=False, strict=False):\n if is_categorical_dtype(col):\n items = L(col.cat.categories, use_list=True)\n #Remove non-used categories while keeping order\n if strict: items = L(o for o in items if o in col.unique())\n else:\n if not hasattr(col,'unique'): col = L(col, use_list=True)\n # `o==o` is the generalized definition of non-NaN used by Pandas\n items = L(o for o in col.unique() if o==o)\n if sort: items = items.sorted()\n self.items = '#na#' + items if add_na else items\n self.o2i = defaultdict(int, self.items.val2idx()) if add_na else dict(self.items.val2idx())\n \n def map_objs(self,objs):\n \"Map `objs` to IDs\"\n return L(self.o2i[o] for o in objs)\n\n def map_ids(self,ids):\n \"Map `ids` to objects in vocab\"\n return L(self.items[o] for o in ids)\n\n def __eq__(self,b): return all_equal(b,self)",
"_____no_output_____"
],
[
"t = CategoryMap([4,2,3,4])\ntest_eq(t, [2,3,4])\ntest_eq(t.o2i, {2:0,3:1,4:2})\ntest_eq(t.map_objs([2,3]), [0,1])\ntest_eq(t.map_ids([0,1]), [2,3])\ntest_fail(lambda: t.o2i['unseen label'])",
"_____no_output_____"
],
[
"t = CategoryMap([4,2,3,4], add_na=True)\ntest_eq(t, ['#na#',2,3,4])\ntest_eq(t.o2i, {'#na#':0,2:1,3:2,4:3})",
"_____no_output_____"
],
[
"t = CategoryMap(pd.Series([4,2,3,4]), sort=False)\ntest_eq(t, [4,2,3])\ntest_eq(t.o2i, {4:0,2:1,3:2})",
"_____no_output_____"
],
[
"col = pd.Series(pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True))\nt = CategoryMap(col)\ntest_eq(t, ['H','M','L'])\ntest_eq(t.o2i, {'H':0,'M':1,'L':2})",
"_____no_output_____"
],
[
"col = pd.Series(pd.Categorical(['M','H','M'], categories=['H','M','L'], ordered=True))\nt = CategoryMap(col, strict=True)\ntest_eq(t, ['H','M'])\ntest_eq(t.o2i, {'H':0,'M':1})",
"_____no_output_____"
],
[
"# export\nclass Categorize(Transform):\n \"Reversible transform of category string to `vocab` id\"\n loss_func,order,store_attrs=CrossEntropyLossFlat(),1,'vocab,add_na'\n def __init__(self, vocab=None, sort=True, add_na=False):\n store_attr(self, self.store_attrs+',sort')\n self.vocab = None if vocab is None else CategoryMap(vocab, sort=sort, add_na=add_na)\n\n def setups(self, dsets):\n if self.vocab is None and dsets is not None: self.vocab = CategoryMap(dsets, sort=self.sort, add_na=self.add_na)\n self.c = len(self.vocab)\n\n def encodes(self, o): return TensorCategory(self.vocab.o2i[o])\n def decodes(self, o): return Category (self.vocab [o])\n\n @property\n def name(self): return f\"{super().name} -- {attrdict(self, *self.store_attrs.split(','))}\"",
"_____no_output_____"
],
[
"#export\nclass Category(str, ShowTitle): _show_args = {'label': 'category'}",
"_____no_output_____"
],
[
"cat = Categorize()\ntds = Datasets(['cat', 'dog', 'cat'], tfms=[cat])\ntest_eq(cat.vocab, ['cat', 'dog'])\ntest_eq(cat('cat'), 0)\ntest_eq(cat.decode(1), 'dog')\ntest_stdout(lambda: show_at(tds,2), 'cat')",
"_____no_output_____"
],
[
"cat = Categorize(add_na=True)\ntds = Datasets(['cat', 'dog', 'cat'], tfms=[cat])\ntest_eq(cat.vocab, ['#na#', 'cat', 'dog'])\ntest_eq(cat('cat'), 1)\ntest_eq(cat.decode(2), 'dog')\ntest_stdout(lambda: show_at(tds,2), 'cat')",
"_____no_output_____"
],
[
"cat = Categorize(vocab=['dog', 'cat'], sort=False, add_na=True)\ntds = Datasets(['cat', 'dog', 'cat'], tfms=[cat])\ntest_eq(cat.vocab, ['#na#', 'dog', 'cat'])\ntest_eq(cat('dog'), 1)\ntest_eq(cat.decode(2), 'cat')\ntest_stdout(lambda: show_at(tds,2), 'cat')",
"_____no_output_____"
]
],
[
[
"## Multicategorize -",
"_____no_output_____"
]
],
[
[
"# export\nclass MultiCategorize(Categorize):\n \"Reversible transform of multi-category strings to `vocab` id\"\n loss_func,order=BCEWithLogitsLossFlat(),1\n def __init__(self, vocab=None, add_na=False): super().__init__(vocab=vocab,add_na=add_na)\n\n def setups(self, dsets):\n if not dsets: return\n if self.vocab is None:\n vals = set()\n for b in dsets: vals = vals.union(set(b))\n self.vocab = CategoryMap(list(vals), add_na=self.add_na)\n\n def encodes(self, o): return TensorMultiCategory([self.vocab.o2i[o_] for o_ in o])\n def decodes(self, o): return MultiCategory ([self.vocab [o_] for o_ in o])",
"_____no_output_____"
],
[
"#export\nclass MultiCategory(L):\n def show(self, ctx=None, sep=';', color='black', **kwargs):\n return show_title(sep.join(self.map(str)), ctx=ctx, color=color, **kwargs)",
"_____no_output_____"
],
[
"cat = MultiCategorize()\ntds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], tfms=[cat])\ntest_eq(tds[3][0], TensorMultiCategory([]))\ntest_eq(cat.vocab, ['a', 'b', 'c'])\ntest_eq(cat(['a', 'c']), tensor([0,2]))\ntest_eq(cat([]), tensor([]))\ntest_eq(cat.decode([1]), ['b'])\ntest_eq(cat.decode([0,2]), ['a', 'c'])\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"# export\nclass OneHotEncode(Transform):\n \"One-hot encodes targets\"\n order,store_attrs=2,'c'\n def __init__(self, c=None):\n self.c = c\n\n def setups(self, dsets):\n if self.c is None: self.c = len(L(getattr(dsets, 'vocab', None)))\n if not self.c: warn(\"Couldn't infer the number of classes, please pass a value for `c` at init\")\n\n def encodes(self, o): return TensorMultiCategory(one_hot(o, self.c).float())\n def decodes(self, o): return one_hot_decode(o, None)\n\n @property\n def name(self): return f\"{super().name} -- {attrdict(self, *self.store_attrs.split(','))}\"",
"_____no_output_____"
]
],
[
[
"Works in conjunction with ` MultiCategorize` or on its own if you have one-hot encoded targets (pass a `vocab` for decoding and `do_encode=False` in this case)",
"_____no_output_____"
]
],
[
[
"_tfm = OneHotEncode(c=3)\ntest_eq(_tfm([0,2]), tensor([1.,0,1]))\ntest_eq(_tfm.decode(tensor([0,1,1])), [1,2])",
"_____no_output_____"
],
[
"tds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(), OneHotEncode()]])\ntest_eq(tds[1], [tensor([1.,0,0])])\ntest_eq(tds[3], [tensor([0.,0,0])])\ntest_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\ntest_eq(type(tds[1][0]), TensorMultiCategory)\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"#hide\n#test with passing the vocab\ntds = Datasets([['b', 'c'], ['a'], ['a', 'c'], []], [[MultiCategorize(vocab=['a', 'b', 'c']), OneHotEncode()]])\ntest_eq(tds[1], [tensor([1.,0,0])])\ntest_eq(tds[3], [tensor([0.,0,0])])\ntest_eq(tds.decode([tensor([False, True, True])]), [['b','c']])\ntest_eq(type(tds[1][0]), TensorMultiCategory)\ntest_stdout(lambda: show_at(tds,2), 'a;c')",
"_____no_output_____"
],
[
"# export\nclass EncodedMultiCategorize(Categorize):\n \"Transform of one-hot encoded multi-category that decodes with `vocab`\"\n loss_func,order=BCEWithLogitsLossFlat(),1\n def __init__(self, vocab):\n super().__init__(vocab)\n self.c = len(vocab)\n def encodes(self, o): return TensorMultiCategory(tensor(o).float())\n def decodes(self, o): return MultiCategory (one_hot_decode(o, self.vocab))",
"_____no_output_____"
],
[
"_tfm = EncodedMultiCategorize(vocab=['a', 'b', 'c'])\ntest_eq(_tfm([1,0,1]), tensor([1., 0., 1.]))\ntest_eq(type(_tfm([1,0,1])), TensorMultiCategory)\ntest_eq(_tfm.decode(tensor([False, True, True])), ['b','c'])",
"_____no_output_____"
],
[
"_tfm",
"_____no_output_____"
],
[
"#export\nclass RegressionSetup(Transform):\n \"Transform that floatifies targets\"\n loss_func,store_attrs=MSELossFlat(),'c'\n def __init__(self, c=None):\n self.c = c\n\n def encodes(self, o): return tensor(o).float()\n def decodes(self, o): return TitledFloat(o) if o.ndim==0 else TitledTuple(o_.item() for o_ in o)\n def setups(self, dsets):\n if self.c is not None: return\n try: self.c = len(dsets[0]) if hasattr(dsets[0], '__len__') else 1\n except: self.c = 0\n\n @property\n def name(self): return f\"{super().name} -- {attrdict(self, *self.store_attrs.split(','))}\"",
"_____no_output_____"
],
[
"_tfm = RegressionSetup()\ndsets = Datasets([0, 1, 2], RegressionSetup)\ntest_eq(dsets.c, 1)\ntest_eq_type(dsets[0], (tensor(0.),))\n\ndsets = Datasets([[0, 1, 2], [3,4,5]], RegressionSetup)\ntest_eq(dsets.c, 3)\ntest_eq_type(dsets[0], (tensor([0.,1.,2.]),))",
"_____no_output_____"
],
[
"#export\ndef get_c(dls):\n if getattr(dls, 'c', False): return dls.c\n if getattr(getattr(dls.train, 'after_item', None), 'c', False): return dls.train.after_item.c\n if getattr(getattr(dls.train, 'after_batch', None), 'c', False): return dls.train.after_batch.c\n vocab = getattr(dls, 'vocab', [])\n if len(vocab) > 0 and is_listy(vocab[-1]): vocab = vocab[-1]\n return len(vocab)",
"_____no_output_____"
]
],
[
[
"## End-to-end dataset example with MNIST",
"_____no_output_____"
],
[
"Let's show how to use those functions to grab the mnist dataset in a `Datasets`. First we grab all the images.",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\nitems = get_image_files(path)",
"_____no_output_____"
]
],
[
[
"Then we split between train and validation depending on the folder.",
"_____no_output_____"
]
],
[
[
"splitter = GrandparentSplitter()\nsplits = splitter(items)\ntrain,valid = (items[i] for i in splits)\ntrain[:3],valid[:3]",
"_____no_output_____"
]
],
[
[
"Our inputs are images that we open and convert to tensors, our targets are labeled depending on the parent directory and are categories.",
"_____no_output_____"
]
],
[
[
"from PIL import Image\ndef open_img(fn:Path): return Image.open(fn).copy()\ndef img2tensor(im:Image.Image): return TensorImage(array(im)[None])\n\ntfms = [[open_img, img2tensor],\n [parent_label, Categorize()]]\ntrain_ds = Datasets(train, tfms)",
"_____no_output_____"
],
[
"x,y = train_ds[3]\nxd,yd = decode_at(train_ds,3)\ntest_eq(parent_label(train[3]),yd)\ntest_eq(array(Image.open(train[3])),xd[0].numpy())",
"_____no_output_____"
],
[
"ax = show_at(train_ds, 3, cmap=\"Greys\", figsize=(1,1))",
"_____no_output_____"
],
[
"assert ax.title.get_text() in ('3','7')\ntest_fig_exists(ax)",
"_____no_output_____"
]
],
[
[
"## ToTensor -",
"_____no_output_____"
]
],
[
[
"#export\nclass ToTensor(Transform):\n \"Convert item to appropriate tensor class\"\n order = 5",
"_____no_output_____"
]
],
[
[
"## IntToFloatTensor -",
"_____no_output_____"
]
],
[
[
"# export\nclass IntToFloatTensor(Transform):\n \"Transform image to float tensor, optionally dividing by 255 (e.g. for images).\"\n order,store_attrs = 10,'div,div_mask' #Need to run after PIL transforms on the GPU\n def __init__(self, div=255., div_mask=1):\n store_attr(self, 'div,div_mask')\n def encodes(self, o:TensorImage): return o.float().div_(self.div)\n def encodes(self, o:TensorMask ): return o.long() // self.div_mask\n def decodes(self, o:TensorImage): return ((o.clamp(0., 1.) * self.div).long()) if self.div else o\n\n @property\n def name(self): return f\"{super().name} -- {attrdict(self, *self.store_attrs.split(','))}\"",
"_____no_output_____"
],
[
"t = (TensorImage(tensor(1)),tensor(2).long(),TensorMask(tensor(3)))\ntfm = IntToFloatTensor()\nft = tfm(t)\ntest_eq(ft, [1./255, 2, 3])\ntest_eq(type(ft[0]), TensorImage)\ntest_eq(type(ft[2]), TensorMask)\ntest_eq(ft[0].type(),'torch.FloatTensor')\ntest_eq(ft[1].type(),'torch.LongTensor')\ntest_eq(ft[2].type(),'torch.LongTensor')",
"_____no_output_____"
]
],
[
[
"## Normalization -",
"_____no_output_____"
]
],
[
[
"# export\ndef broadcast_vec(dim, ndim, *t, cuda=True):\n \"Make a vector broadcastable over `dim` (out of `ndim` total) by prepending and appending unit axes\"\n v = [1]*ndim\n v[dim] = -1\n f = to_device if cuda else noop\n return [f(tensor(o).view(*v)) for o in t]",
"_____no_output_____"
],
[
"# export\n@docs\nclass Normalize(Transform):\n \"Normalize/denorm batch of `TensorImage`\"\n parameters,order,store_attrs=L('mean', 'std'),99, 'mean,std,axes'\n def __init__(self, mean=None, std=None, axes=(0,2,3)):\n self.mean,self.std,self.axes = mean,std,axes\n\n @classmethod\n def from_stats(cls, mean, std, dim=1, ndim=4, cuda=True): return cls(*broadcast_vec(dim, ndim, mean, std, cuda=cuda))\n\n def setups(self, dl:DataLoader):\n if self.mean is None or self.std is None:\n x,*_ = dl.one_batch()\n self.mean,self.std = x.mean(self.axes, keepdim=True),x.std(self.axes, keepdim=True)+1e-7\n\n def encodes(self, x:TensorImage): return (x-self.mean) / self.std\n def decodes(self, x:TensorImage):\n f = to_cpu if x.device.type=='cpu' else noop\n return (x*f(self.std) + f(self.mean))\n\n @property\n def name(self): return f\"{super().name} -- {attrdict(self, *self.store_attrs.split(','))}\"\n\n _docs=dict(encodes=\"Normalize batch\", decodes=\"Denormalize batch\")",
"_____no_output_____"
],
[
"mean,std = [0.5]*3,[0.5]*3\nmean,std = broadcast_vec(1, 4, mean, std)\nbatch_tfms = [IntToFloatTensor(), Normalize.from_stats(mean,std)]\ntdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4, device=default_device())",
"_____no_output_____"
],
[
"x,y = tdl.one_batch()\nxd,yd = tdl.decode((x,y))\n\ntest_eq(x.type(), 'torch.cuda.FloatTensor' if default_device().type=='cuda' else 'torch.FloatTensor')\ntest_eq(xd.type(), 'torch.LongTensor')\ntest_eq(type(x), TensorImage)\ntest_eq(type(y), TensorCategory)\nassert x.mean()<0.0\nassert x.std()>0.5\nassert 0<xd.float().mean()/255.<1\nassert 0<xd.float().std()/255.<0.5",
"_____no_output_____"
],
[
"#hide\nnrm = Normalize()\nbatch_tfms = [IntToFloatTensor(), nrm]\ntdl = TfmdDL(train_ds, after_batch=batch_tfms, bs=4)\nx,y = tdl.one_batch()\ntest_close(x.mean(), 0.0, 1e-4)\nassert x.std()>0.9, x.std()",
"_____no_output_____"
],
[
"#Just for visuals\nfrom fastai2.vision.core import *",
"_____no_output_____"
],
[
"tdl.show_batch((x,y))",
"_____no_output_____"
],
[
"x,y = torch.add(x,0),torch.add(y,0) #Lose type of tensors (to emulate predictions)\ntest_ne(type(x), TensorImage)\ntdl.show_batch((x,y), figsize=(4,4)) #Check that types are put back by dl.",
"_____no_output_____"
],
[
"#TODO: make the above check a proper test",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 18a_callback.training.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.vision.ipynb.\nConverted 24_tutorial.siamese.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.text.ipynb.\nConverted 39_tutorial.transformers.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 44_tutorial.tabular.ipynb.\nConverted 45_collab.ipynb.\nConverted 46_tutorial.collab.ipynb.\nConverted 50_tutorial.datablock.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 61_tutorial.medical_imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 73_callback.captum.ipynb.\nConverted 74_callback.cutmix.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted index.ipynb.\nConverted tutorial.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0be30fbb7f1bda764e817360446c769702f8683 | 830,057 | ipynb | Jupyter Notebook | 10_effects_of_complexity_and_averaging/averaging_effects/distribution_2/lr_001/loss_function_with_entropy/second_layer_averaging/type_4_second_Layer_entropy_loss_k005_lr001.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | 2 | 2019-08-24T07:20:35.000Z | 2020-03-27T08:16:59.000Z | 10_effects_of_complexity_and_averaging/averaging_effects/distribution_2/lr_001/loss_function_with_entropy/second_layer_averaging/type_4_second_Layer_entropy_loss_k005_lr001.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | null | null | null | 10_effects_of_complexity_and_averaging/averaging_effects/distribution_2/lr_001/loss_function_with_entropy/second_layer_averaging/type_4_second_Layer_entropy_loss_k005_lr001.ipynb | lnpandey/DL_explore_synth_data | 0a5d8b417091897f4c7f358377d5198a155f3f24 | [
"MIT"
] | 3 | 2019-06-21T09:34:32.000Z | 2019-09-19T10:43:07.000Z | 352.465817 | 38,370 | 0.89596 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport torch\nimport torchvision\nfrom torch.utils.data import Dataset, DataLoader\nfrom torchvision import transforms, utils\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"class MosaicDataset1(Dataset):\n \"\"\"MosaicDataset dataset.\"\"\"\n\n def __init__(self, mosaic_list, mosaic_label,fore_idx):\n \"\"\"\n Args:\n csv_file (string): Path to the csv file with annotations.\n root_dir (string): Directory with all the images.\n transform (callable, optional): Optional transform to be applied\n on a sample.\n \"\"\"\n self.mosaic = mosaic_list\n self.label = mosaic_label\n self.fore_idx = fore_idx\n \n def __len__(self):\n return len(self.label)\n\n def __getitem__(self, idx):\n return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]",
"_____no_output_____"
],
[
"data = np.load(\"type4_data.npy\",allow_pickle=True)",
"_____no_output_____"
],
[
"mosaic_list_of_images = data[0][\"mosaic_list\"]\nmosaic_label = data[0][\"mosaic_label\"]\nfore_idx = data[0][\"fore_idx\"]",
"_____no_output_____"
],
[
"batch = 250\nmsd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)\ntrain_loader = DataLoader( msd,batch_size= batch ,shuffle=True)",
"_____no_output_____"
],
[
"class Focus_deep(nn.Module):\n '''\n deep focus network averaged at zeroth layer\n input : elemental data\n '''\n def __init__(self,inputs,output,K,d):\n super(Focus_deep,self).__init__()\n self.inputs = inputs\n self.output = output\n self.K = K\n self.d = d\n self.linear1 = nn.Linear(self.inputs,50) #,self.output)\n self.linear2 = nn.Linear(50,50)\n self.linear3 = nn.Linear(50,self.output) \n def forward(self,z):\n batch = z.shape[0]\n x = torch.zeros([batch,self.K],dtype=torch.float64)\n y = torch.zeros([batch,50], dtype=torch.float64) # number of features of output\n features = torch.zeros([batch,self.K,50],dtype=torch.float64)\n x,y = x.to(\"cuda\"),y.to(\"cuda\")\n features = features.to(\"cuda\")\n for i in range(self.K):\n alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d\n x[:,i] = alp[:,0]\n features[:,i] = ftrs \n log_x = F.log_softmax(x,dim=1) #log alpha \n x = F.softmax(x,dim=1) # alphas\n \n for i in range(self.K):\n x1 = x[:,i] \n y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d\n return y , x,log_x \n def helper(self,x):\n x = self.linear1(x)\n \n x = F.relu(x) \n x = self.linear2(x)\n x1 = F.tanh(x)\n x = F.relu(x)\n x = self.linear3(x)\n #print(x1.shape)\n return x,x1\n",
"_____no_output_____"
],
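    [
      "# Quick shape check of the focus module on a dummy batch (illustrative only; assumes a\n# CUDA device is available, since the class hard-codes .to('cuda') internally)\ndummy_where = Focus_deep(2, 1, 9, 2).double().to('cuda')\ndummy_z = torch.randn(4, 9, 2, dtype=torch.float64).to('cuda')\navg_feats, alphas, log_alphas = dummy_where(dummy_z)\nprint(avg_feats.shape, alphas.shape) # averaged features [4, 50], attention weights [4, 9]\nprint(alphas.sum(dim=1)) # the attention weights sum to 1 for each mosaic",
      "_____no_output_____"
    ],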
[
"class Classification_deep(nn.Module):\n '''\n input : elemental data\n deep classification module data averaged at zeroth layer\n '''\n def __init__(self,inputs,output):\n super(Classification_deep,self).__init__()\n self.inputs = inputs\n self.output = output\n self.linear1 = nn.Linear(self.inputs,50)\n #self.linear2 = nn.Linear(50,50)\n self.linear2 = nn.Linear(50,self.output)\n\n def forward(self,x):\n x = F.relu(self.linear1(x))\n #x = F.relu(self.linear2(x))\n x = self.linear2(x)\n return x ",
"_____no_output_____"
],
[
"criterion = nn.CrossEntropyLoss()\ndef my_cross_entropy(x, y,alpha,log_alpha,k):\n loss = criterion(x,y)\n b = -1.0* alpha * log_alpha\n b = torch.mean(torch.sum(b,dim=1))\n closs = loss\n entropy = b \n loss = (1-k)*loss + ((k)*b)\n return loss,closs,entropy",
"_____no_output_____"
],
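    [
      "# Small illustrative check of my_cross_entropy on dummy tensors (assumed shapes:\n# logits [batch, classes], alpha and log_alpha [batch, K]); not part of the original experiment\nlogits = torch.randn(4, 3)\nlabels = torch.randint(0, 3, (4,))\nalpha = F.softmax(torch.randn(4, 9), dim=1)\nlog_alpha = torch.log(alpha)\n\n# k weights the entropy of the attention distribution against the classification loss\nfor k_demo in (0.0, 0.005, 0.5):\n    total, ce, ent = my_cross_entropy(logits, labels, alpha, log_alpha, k_demo)\n    print(k_demo, total.item(), ce.item(), ent.item())",
      "_____no_output_____"
    ],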
[
"def calculate_attn_loss(dataloader,what,where,k):\n what.eval()\n where.eval()\n r_loss = 0\n cc_loss = 0\n cc_entropy = 0 \n alphas = []\n lbls = []\n pred = []\n fidices = []\n with torch.no_grad():\n for i, data in enumerate(dataloader, 0):\n inputs, labels,fidx = data\n lbls.append(labels)\n fidices.append(fidx)\n inputs = inputs.double()\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n avg,alpha,log_alpha = where(inputs)\n outputs = what(avg)\n _, predicted = torch.max(outputs.data, 1)\n pred.append(predicted.cpu().numpy())\n alphas.append(alpha.cpu().numpy())\n loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)\n r_loss += loss.item()\n cc_loss += closs.item()\n cc_entropy += entropy.item()\n alphas = np.concatenate(alphas,axis=0)\n pred = np.concatenate(pred,axis=0)\n lbls = np.concatenate(lbls,axis=0)\n fidices = np.concatenate(fidices,axis=0)\n #print(alphas.shape,pred.shape,lbls.shape,fidices.shape) \n analysis = analyse_data(alphas,lbls,pred,fidices)\n return r_loss/i,cc_loss/i,cc_entropy/i,analysis",
"_____no_output_____"
],
[
"def analyse_data(alphas,lbls,predicted,f_idx):\n '''\n analysis data is created here\n '''\n batch = len(predicted)\n amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0\n for j in range (batch):\n focus = np.argmax(alphas[j])\n if(alphas[j][focus] >= 0.5):\n amth +=1\n else:\n alth +=1\n if(focus == f_idx[j] and predicted[j] == lbls[j]):\n ftpt += 1\n elif(focus != f_idx[j] and predicted[j] == lbls[j]):\n ffpt +=1\n elif(focus == f_idx[j] and predicted[j] != lbls[j]):\n ftpf +=1\n elif(focus != f_idx[j] and predicted[j] != lbls[j]):\n ffpf +=1\n #print(sum(predicted==lbls),ftpt+ffpt)\n return [ftpt,ffpt,ftpf,ffpf,amth,alth]",
"_____no_output_____"
],
[
"number_runs = 20\nfull_analysis = []\nFTPT_analysis = pd.DataFrame(columns = [\"FTPT\",\"FFPT\", \"FTPF\",\"FFPF\"])\nk = 0.005\nr_loss = []\nr_closs = [] \nr_centropy = []\nfor n in range(number_runs):\n print(\"--\"*40)\n \n # instantiate focus and classification Model\n torch.manual_seed(n)\n where = Focus_deep(2,1,9,2).double()\n torch.manual_seed(n)\n what = Classification_deep(50,3).double()\n where = where.to(\"cuda\")\n what = what.to(\"cuda\")\n\n\n\n # instantiate optimizer\n optimizer_where = optim.Adam(where.parameters(),lr =0.001)#,momentum=0.9)\n optimizer_what = optim.Adam(what.parameters(), lr=0.001)#,momentum=0.9)\n #criterion = nn.CrossEntropyLoss()\n acti = []\n analysis_data = []\n loss_curi = []\n cc_loss_curi = []\n cc_entropy_curi = []\n epochs = 3000\n\n\n # calculate zeroth epoch loss and FTPT values\n running_loss,_,_,anlys_data = calculate_attn_loss(train_loader,what,where,k)\n loss_curi.append(running_loss)\n analysis_data.append(anlys_data)\n\n print('epoch: [%d ] loss: %.3f' %(0,running_loss)) \n\n # training starts \n for epoch in range(epochs): # loop over the dataset multiple times\n ep_lossi = []\n running_loss = 0.0\n what.train()\n where.train()\n for i, data in enumerate(train_loader, 0):\n # get the inputs\n inputs, labels,_ = data\n inputs = inputs.double()\n inputs, labels = inputs.to(\"cuda\"),labels.to(\"cuda\")\n\n # zero the parameter gradients\n optimizer_where.zero_grad()\n optimizer_what.zero_grad()\n \n # forward + backward + optimize\n avg, alpha,log_alpha = where(inputs)\n outputs = what(avg)\n loss,_,_ = my_cross_entropy( outputs,labels,alpha,log_alpha,k)\n\n # print statistics\n running_loss += loss.item()\n loss.backward()\n optimizer_where.step()\n optimizer_what.step()\n\n running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,k)\n analysis_data.append(anls_data)\n print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy)) \n loss_curi.append(running_loss) #loss per epoch\n cc_loss_curi.append(ccloss)\n cc_entropy_curi.append(ccentropy)\n if running_loss<=0.01:\n break\n print('Finished Training run ' +str(n))\n analysis_data = np.array(analysis_data)\n FTPT_analysis.loc[n] = analysis_data[-1,:4]/30\n full_analysis.append((epoch, analysis_data))\n r_loss.append(np.array(loss_curi))\n r_closs.append(np.array(cc_loss_curi))\n r_centropy.append(np.array(cc_entropy_curi))\n correct = 0\n total = 0\n with torch.no_grad():\n for data in train_loader:\n images, labels,_ = data\n images = images.double()\n images, labels = images.to(\"cuda\"), labels.to(\"cuda\")\n avg, alpha,_ = where(images)\n outputs = what(avg)\n _, predicted = torch.max(outputs.data, 1)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\n print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))\n ",
"--------------------------------------------------------------------------------\nepoch: [0 ] loss: 1.225\n"
],
[
"a,b= full_analysis[0]\nprint(a)",
"58\n"
],
[
"cnt=1\nfor epoch, analysis_data in full_analysis:\n analysis_data = np.array(analysis_data)\n # print(\"=\"*20+\"run \",cnt,\"=\"*20)\n \n plt.figure(figsize=(6,6))\n plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label=\"ftpt\")\n plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label=\"ffpt\")\n plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label=\"ftpf\")\n plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label=\"ffpf\")\n\n plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n plt.title(\"Training trends for run \"+str(cnt))\n #plt.savefig(\"/content/drive/MyDrive/Research/alpha_analysis/100_300/k\"+str(k)+\"/\"+\"run\"+str(cnt)+name+\".png\",bbox_inches=\"tight\")\n #plt.savefig(\"/content/drive/MyDrive/Research/alpha_analysis/100_300/k\"+str(k)+\"/\"+\"run\"+str(cnt)+name+\".pdf\",bbox_inches=\"tight\")\n cnt+=1",
"_____no_output_____"
],
[
"# plt.figure(figsize=(6,6))\n# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label=\"ftpt\")\n# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label=\"ffpt\")\n# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label=\"ftpf\")\n# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label=\"ffpf\")\n\n# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n\n",
"_____no_output_____"
],
[
"plt.plot(loss_curi)",
"_____no_output_____"
],
[
"np.mean(np.array(FTPT_analysis),axis=0)",
"_____no_output_____"
],
[
"FTPT_analysis.to_csv(\"type4_first_k_value_01_lr_001.csv\",index=False)",
"_____no_output_____"
]
],
[
[
"\n\n\n",
"_____no_output_____"
]
],
[
[
"FTPT_analysis",
"_____no_output_____"
]
],
[
[
"# Entropy",
"_____no_output_____"
]
],
[
[
"entropy_1 = r_centropy[11] # FTPT 100 ,FFPT 0 k value =0.01\nloss_1 = r_loss[11]\nce_loss_1 = r_closs[11]",
"_____no_output_____"
],
[
"entropy_2 = r_centropy[16] # kvalue = 0 FTPT 99.96, FFPT 0.03\nce_loss_2 = r_closs[16]",
"_____no_output_____"
],
[
"# plt.plot(r_closs[1])\n\nplt.plot(entropy_1,label = \"entropy k_value=0.01\")\nplt.plot(loss_1,label = \"overall k_value=0.01\")\nplt.plot(ce_loss_1,label = \"ce kvalue = 0.01\")\nplt.plot(entropy_2,label = \"entropy k_value = 0\")\nplt.plot(ce_loss_2,label = \"ce k_value=0\")\nplt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')\nplt.savefig(\"second_layer.png\")\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0be5a906364b4829b808c2ee35b579f6aae7df8 | 47,815 | ipynb | Jupyter Notebook | tv-script-generation/dlnd_tv_script_generation.ipynb | SafwanAhmad/deep-learning-foundations-nanodegree | ce6e1b5acadb860e4b783a31f8701f57929b0e44 | [
"MIT"
] | null | null | null | tv-script-generation/dlnd_tv_script_generation.ipynb | SafwanAhmad/deep-learning-foundations-nanodegree | ce6e1b5acadb860e4b783a31f8701f57929b0e44 | [
"MIT"
] | null | null | null | tv-script-generation/dlnd_tv_script_generation.ipynb | SafwanAhmad/deep-learning-foundations-nanodegree | ce6e1b5acadb860e4b783a31f8701f57929b0e44 | [
"MIT"
] | null | null | null | 36.55581 | 556 | 0.558109 | [
[
[
"# TV Script Generation\nIn this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).\n## Get the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"_____no_output_____"
]
],
[
[
"## Explore the Data\nPlay around with `view_sentence_range` to view different parts of the data.",
"_____no_output_____"
]
],
[
[
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Dataset Stats\nRoughly the number of unique words: 11492\nNumber of scenes: 262\nAverage number of sentences in each scene: 15.248091603053435\nNumber of lines: 4257\nAverage number of words in each line: 11.50434578341555\n\nThe sentences 0 to 10:\nMoe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.\nBart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.\nMoe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?\nMoe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.\nMoe_Szyslak: What's the matter Homer? You're not your normal effervescent self.\nHomer_Simpson: I got my problems, Moe. Give me another one.\nMoe_Szyslak: Homer, hey, you should not drink to forget your problems.\nBarney_Gumble: Yeah, you should only drink to enhance your social skills.\n\n\n"
]
],
[
[
"## Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\n\n### Lookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call `vocab_to_int`\n- Dictionary to go from the id to word, we'll call `int_to_vocab`\n\nReturn these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n \n # Create a counter to find unique words\n words_count = Counter(text)\n \n # Sort the words\n sorted_words = sorted(words_count, key=words_count.get)\n \n # Create vocab to int dictionary\n vocab_to_int = { word: index for index, word in enumerate(sorted_words)}\n \n # Create int to vocab dictionary\n int_to_vocab = {index: word for word, index in vocab_to_int.items()}\n \n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tests Passed\n"
]
],
[
[
"### Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\n\nImplement the function `token_lookup` to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\n\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"_____no_output_____"
]
],
[
[
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n token_dict = {'.': '||Period||',\n ',': '||Comma||',\n '\"': '||Quotation_Mark||',\n ';': '||Semicolon||',\n '!': '||Exclamation_Mark||',\n '?': '||Question_Mark||',\n '(': '||Left_Parentheses||',\n ')': '||Right_Parentheses||',\n '--': '||Dash||',\n '\\n': '||Return||'}\n return token_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Tests Passed\n"
]
],
[
[
"## Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"_____no_output_____"
]
],
[
[
"# Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"_____no_output_____"
]
],
[
[
"## Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\n\n### Check the Version of TensorFlow and Access to GPU",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"TensorFlow Version: 1.2.1\n"
]
],
[
[
"### Input\nImplement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.\n- Targets placeholder\n- Learning Rate placeholder\n\nReturn the placeholders in the following tuple `(Input, Targets, LearningRate)`",
"_____no_output_____"
]
],
[
[
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n \n # Input placeholder\n input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')\n # Targets placeholder\n targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')\n # Learning rate placeholder\n learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')\n \n return input, targets, learning_rate\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Tests Passed\n"
]
],
[
[
"### Build RNN Cell and Initialize\nStack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).\n- The Rnn size should be set using `rnn_size`\n- Initalize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function\n - Apply the name \"initial_state\" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the cell and initial state in the following tuple `(Cell, InitialState)`",
"_____no_output_____"
]
],
[
[
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n \n # Crete a basic LSTM cell (reason to use reuse param: https://github.com/tensorflow/tensorflow/issues/8191)\n # Create a single MultiRNNCell\n multi_cell = tf.contrib.rnn.MultiRNNCell([get_lstm_cell(rnn_size) for _ in range(2)])\n \n #print((batch_size.shape))\n \n # Initialize multicell state\n initial_state = multi_cell.zero_state(batch_size, dtype=tf.float32)\n # Provide a name for initial state\n initial_state = tf.identity(initial_state, name='initial_state')\n \n return multi_cell, initial_state\n\ndef get_lstm_cell(lstm_size):\n return tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Tests Passed\n"
]
],
[
[
"### Word Embedding\nApply embedding to `input_data` using TensorFlow. Return the embedded sequence.",
"_____no_output_____"
]
],
[
[
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n # Create embedding matrix(weight matrix) for the embedding layer\n embeddings = tf.Variable(tf.random_uniform(shape=(vocab_size, embed_dim), minval= -0.1, maxval=0.1, dtype=tf.float32, \n name='embeddings'))\n # Create the lookup table using tf.nn.embedding_lookup method\n embed = tf.nn.embedding_lookup(embeddings, input_data)\n \n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Tests Passed\n"
]
],
[
[
"### Build RNN\nYou created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.\n- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)\n - Apply the name \"final_state\" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)\n\nReturn the outputs and final_state state in the following tuple `(Outputs, FinalState)` ",
"_____no_output_____"
]
],
[
[
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n \n # Build the RNN using method tf.nn.dynamic_rnn()\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n \n # Provide a name for the state\n final_state = tf.identity(final_state, name='final_state')\n \n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Tests Passed\n"
]
],
[
[
"### Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.\n- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.\n- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.\n\nReturn the logits and final state in the following tuple (Logits, FinalState) ",
"_____no_output_____"
]
],
[
[
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n \n # Get the embed look up table\n embed_look_up = get_embed(input_data=input_data, vocab_size=vocab_size, embed_dim=embed_dim)\n \n # Build the RNN\n outputs, final_state = build_rnn(cell, embed_look_up)\n \n # Apply a fully connected layer with linear activation and vocab_size as the number of output\n logits = tf.contrib.layers.fully_connected(outputs, vocab_size,\n weights_initializer=tf.truncated_normal_initializer(stddev=0.1)\n ,activation_fn=None)\n \n #print(logits.shape)\n \n return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Tests Passed\n"
]
],
[
[
"### Batches\nImplement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:\n- The first element is a single batch of **input** with the shape `[batch size, sequence length]`\n- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`\n\nIf you can't fill the last batch with enough data, drop the last batch.\n\nFor exmple, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n\n # Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n\n # Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\n\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
"_____no_output_____"
]
],
[
[
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n n_batches = int(len(int_text) / (batch_size * seq_length))\n\n # Drop the last few characters to make only full batches\n xdata = np.array(int_text[: n_batches * batch_size * seq_length])\n ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])\n ydata[-1] = xdata[0]\n\n x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)\n y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)\n\n return np.array(list(zip(x_batches, y_batches)))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Tests Passed\n"
]
],
[
[
"## Neural Network Training\n### Hyperparameters\nTune the following parameters:\n\n- Set `num_epochs` to the number of epochs.\n- Set `batch_size` to the batch size.\n- Set `rnn_size` to the size of the RNNs.\n- Set `embed_dim` to the size of the embedding.\n- Set `seq_length` to the length of sequence.\n- Set `learning_rate` to the learning rate.\n- Set `show_every_n_batches` to the number of batches the neural network should print progress.",
"_____no_output_____"
]
],
[
[
"# Number of Epochs\nnum_epochs = 50\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Embedding Dimension Size\nembed_dim = 300\n# Sequence Length\nseq_length = 15\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 5\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"_____no_output_____"
]
],
[
[
"### Build the Graph\nBuild the graph using the neural network you implemented.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n print(probs.shape)\n print(logits.shape)\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"(?, ?, 6779)\n(?, ?, 6779)\n"
]
],
[
[
"## Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forms](https://discussions.udacity.com/) to see if anyone is having the same problem.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n #print(\"x = \", x)\n #print(\"y = \", y)\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Epoch 0 Batch 0/17 train_loss = 8.821\nEpoch 0 Batch 5/17 train_loss = 6.577\nEpoch 0 Batch 10/17 train_loss = 6.325\nEpoch 0 Batch 15/17 train_loss = 6.187\nEpoch 1 Batch 3/17 train_loss = 6.022\nEpoch 1 Batch 8/17 train_loss = 5.694\nEpoch 1 Batch 13/17 train_loss = 5.650\nEpoch 2 Batch 1/17 train_loss = 5.499\nEpoch 2 Batch 6/17 train_loss = 5.468\nEpoch 2 Batch 11/17 train_loss = 5.225\nEpoch 2 Batch 16/17 train_loss = 5.135\nEpoch 3 Batch 4/17 train_loss = 5.093\nEpoch 3 Batch 9/17 train_loss = 5.035\nEpoch 3 Batch 14/17 train_loss = 4.862\nEpoch 4 Batch 2/17 train_loss = 4.805\nEpoch 4 Batch 7/17 train_loss = 4.770\nEpoch 4 Batch 12/17 train_loss = 4.692\nEpoch 5 Batch 0/17 train_loss = 4.664\nEpoch 5 Batch 5/17 train_loss = 4.613\nEpoch 5 Batch 10/17 train_loss = 4.547\nEpoch 5 Batch 15/17 train_loss = 4.508\nEpoch 6 Batch 3/17 train_loss = 4.516\nEpoch 6 Batch 8/17 train_loss = 4.368\nEpoch 6 Batch 13/17 train_loss = 4.349\nEpoch 7 Batch 1/17 train_loss = 4.315\nEpoch 7 Batch 6/17 train_loss = 4.322\nEpoch 7 Batch 11/17 train_loss = 4.185\nEpoch 7 Batch 16/17 train_loss = 4.208\nEpoch 8 Batch 4/17 train_loss = 4.206\nEpoch 8 Batch 9/17 train_loss = 4.165\nEpoch 8 Batch 14/17 train_loss = 4.089\nEpoch 9 Batch 2/17 train_loss = 4.078\nEpoch 9 Batch 7/17 train_loss = 4.038\nEpoch 9 Batch 12/17 train_loss = 3.943\nEpoch 10 Batch 0/17 train_loss = 3.981\nEpoch 10 Batch 5/17 train_loss = 3.948\nEpoch 10 Batch 10/17 train_loss = 3.896\nEpoch 10 Batch 15/17 train_loss = 3.867\nEpoch 11 Batch 3/17 train_loss = 3.905\nEpoch 11 Batch 8/17 train_loss = 3.793\nEpoch 11 Batch 13/17 train_loss = 3.747\nEpoch 12 Batch 1/17 train_loss = 3.758\nEpoch 12 Batch 6/17 train_loss = 3.700\nEpoch 12 Batch 11/17 train_loss = 3.574\nEpoch 12 Batch 16/17 train_loss = 3.603\nEpoch 13 Batch 4/17 train_loss = 3.658\nEpoch 13 Batch 9/17 train_loss = 3.572\nEpoch 13 Batch 14/17 train_loss = 3.537\nEpoch 14 Batch 2/17 train_loss = 3.528\nEpoch 14 Batch 7/17 train_loss = 3.525\nEpoch 14 Batch 12/17 train_loss = 3.399\nEpoch 15 Batch 0/17 train_loss = 3.423\nEpoch 15 Batch 5/17 train_loss = 3.414\nEpoch 15 Batch 10/17 train_loss = 3.398\nEpoch 15 Batch 15/17 train_loss = 3.316\nEpoch 16 Batch 3/17 train_loss = 3.370\nEpoch 16 Batch 8/17 train_loss = 3.308\nEpoch 16 Batch 13/17 train_loss = 3.191\nEpoch 17 Batch 1/17 train_loss = 3.259\nEpoch 17 Batch 6/17 train_loss = 3.198\nEpoch 17 Batch 11/17 train_loss = 3.065\nEpoch 17 Batch 16/17 train_loss = 3.061\nEpoch 18 Batch 4/17 train_loss = 3.144\nEpoch 18 Batch 9/17 train_loss = 3.033\nEpoch 18 Batch 14/17 train_loss = 3.030\nEpoch 19 Batch 2/17 train_loss = 2.971\nEpoch 19 Batch 7/17 train_loss = 2.986\nEpoch 19 Batch 12/17 train_loss = 2.851\nEpoch 20 Batch 0/17 train_loss = 2.835\nEpoch 20 Batch 5/17 train_loss = 2.832\nEpoch 20 Batch 10/17 train_loss = 2.835\nEpoch 20 Batch 15/17 train_loss = 2.807\nEpoch 21 Batch 3/17 train_loss = 2.804\nEpoch 21 Batch 8/17 train_loss = 2.809\nEpoch 21 Batch 13/17 train_loss = 2.654\nEpoch 22 Batch 1/17 train_loss = 2.777\nEpoch 22 Batch 6/17 train_loss = 2.669\nEpoch 22 Batch 11/17 train_loss = 2.568\nEpoch 22 Batch 16/17 train_loss = 2.624\nEpoch 23 Batch 4/17 train_loss = 2.632\nEpoch 23 Batch 9/17 train_loss = 2.559\nEpoch 23 Batch 14/17 train_loss = 2.570\nEpoch 24 Batch 2/17 train_loss = 2.590\nEpoch 24 Batch 7/17 train_loss = 2.504\nEpoch 24 Batch 12/17 train_loss = 2.475\nEpoch 25 Batch 0/17 train_loss = 2.451\nEpoch 25 Batch 5/17 train_loss = 2.469\nEpoch 25 Batch 10/17 train_loss = 2.436\nEpoch 25 Batch 15/17 train_loss 
= 2.431\nEpoch 26 Batch 3/17 train_loss = 2.424\nEpoch 26 Batch 8/17 train_loss = 2.422\nEpoch 26 Batch 13/17 train_loss = 2.302\nEpoch 27 Batch 1/17 train_loss = 2.413\nEpoch 27 Batch 6/17 train_loss = 2.315\nEpoch 27 Batch 11/17 train_loss = 2.170\nEpoch 27 Batch 16/17 train_loss = 2.213\nEpoch 28 Batch 4/17 train_loss = 2.258\nEpoch 28 Batch 9/17 train_loss = 2.146\nEpoch 28 Batch 14/17 train_loss = 2.179\nEpoch 29 Batch 2/17 train_loss = 2.162\nEpoch 29 Batch 7/17 train_loss = 2.124\nEpoch 29 Batch 12/17 train_loss = 2.006\nEpoch 30 Batch 0/17 train_loss = 1.995\nEpoch 30 Batch 5/17 train_loss = 2.030\nEpoch 30 Batch 10/17 train_loss = 1.983\nEpoch 30 Batch 15/17 train_loss = 1.976\nEpoch 31 Batch 3/17 train_loss = 1.970\nEpoch 31 Batch 8/17 train_loss = 2.031\nEpoch 31 Batch 13/17 train_loss = 1.867\nEpoch 32 Batch 1/17 train_loss = 2.051\nEpoch 32 Batch 6/17 train_loss = 1.949\nEpoch 32 Batch 11/17 train_loss = 1.893\nEpoch 32 Batch 16/17 train_loss = 1.831\nEpoch 33 Batch 4/17 train_loss = 1.945\nEpoch 33 Batch 9/17 train_loss = 1.812\nEpoch 33 Batch 14/17 train_loss = 1.851\nEpoch 34 Batch 2/17 train_loss = 1.758\nEpoch 34 Batch 7/17 train_loss = 1.775\nEpoch 34 Batch 12/17 train_loss = 1.692\nEpoch 35 Batch 0/17 train_loss = 1.648\nEpoch 35 Batch 5/17 train_loss = 1.673\nEpoch 35 Batch 10/17 train_loss = 1.665\nEpoch 35 Batch 15/17 train_loss = 1.671\nEpoch 36 Batch 3/17 train_loss = 1.631\nEpoch 36 Batch 8/17 train_loss = 1.680\nEpoch 36 Batch 13/17 train_loss = 1.542\nEpoch 37 Batch 1/17 train_loss = 1.671\nEpoch 37 Batch 6/17 train_loss = 1.606\nEpoch 37 Batch 11/17 train_loss = 1.558\nEpoch 37 Batch 16/17 train_loss = 1.507\nEpoch 38 Batch 4/17 train_loss = 1.584\nEpoch 38 Batch 9/17 train_loss = 1.491\nEpoch 38 Batch 14/17 train_loss = 1.568\nEpoch 39 Batch 2/17 train_loss = 1.521\nEpoch 39 Batch 7/17 train_loss = 1.527\nEpoch 39 Batch 12/17 train_loss = 1.459\nEpoch 40 Batch 0/17 train_loss = 1.413\nEpoch 40 Batch 5/17 train_loss = 1.435\nEpoch 40 Batch 10/17 train_loss = 1.484\nEpoch 40 Batch 15/17 train_loss = 1.451\nEpoch 41 Batch 3/17 train_loss = 1.376\nEpoch 41 Batch 8/17 train_loss = 1.420\nEpoch 41 Batch 13/17 train_loss = 1.384\nEpoch 42 Batch 1/17 train_loss = 1.474\nEpoch 42 Batch 6/17 train_loss = 1.413\nEpoch 42 Batch 11/17 train_loss = 1.307\nEpoch 42 Batch 16/17 train_loss = 1.307\nEpoch 43 Batch 4/17 train_loss = 1.339\nEpoch 43 Batch 9/17 train_loss = 1.298\nEpoch 43 Batch 14/17 train_loss = 1.393\nEpoch 44 Batch 2/17 train_loss = 1.238\nEpoch 44 Batch 7/17 train_loss = 1.364\nEpoch 44 Batch 12/17 train_loss = 1.279\nEpoch 45 Batch 0/17 train_loss = 1.349\nEpoch 45 Batch 5/17 train_loss = 1.185\nEpoch 45 Batch 10/17 train_loss = 1.355\nEpoch 45 Batch 15/17 train_loss = 1.350\nEpoch 46 Batch 3/17 train_loss = 1.299\nEpoch 46 Batch 8/17 train_loss = 1.318\nEpoch 46 Batch 13/17 train_loss = 1.239\nEpoch 47 Batch 1/17 train_loss = 1.468\nEpoch 47 Batch 6/17 train_loss = 1.269\nEpoch 47 Batch 11/17 train_loss = 1.326\nEpoch 47 Batch 16/17 train_loss = 1.214\nEpoch 48 Batch 4/17 train_loss = 1.387\nEpoch 48 Batch 9/17 train_loss = 1.209\nEpoch 48 Batch 14/17 train_loss = 1.375\nEpoch 49 Batch 2/17 train_loss = 1.208\nEpoch 49 Batch 7/17 train_loss = 1.269\nEpoch 49 Batch 12/17 train_loss = 1.189\nModel Trained and Saved\n"
]
],
[
[
"## Save Parameters\nSave `seq_length` and `save_dir` for generating a new TV script.",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"_____no_output_____"
]
],
[
[
"# Checkpoint",
"_____no_output_____"
]
],
[
[
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"_____no_output_____"
]
],
[
[
"## Implement Generate Functions\n### Get Tensors\nGet tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\n\nReturn the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)` ",
"_____no_output_____"
]
],
[
[
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n input_tensor = loaded_graph.get_tensor_by_name(name='input:0')\n initial_state_tensor = loaded_graph.get_tensor_by_name(name='initial_state:0')\n final_state_tensor = loaded_graph.get_tensor_by_name(name='final_state:0')\n probs_tensor = loaded_graph.get_tensor_by_name(name='probs:0')\n \n return input_tensor, initial_state_tensor, final_state_tensor, probs_tensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Tests Passed\n"
]
],
[
[
"### Choose Word\nImplement the `pick_word()` function to select the next word using `probabilities`.",
"_____no_output_____"
]
],
[
[
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n #print(probabilities.shape[0])\n #print(probabilities)\n \n next_word = np.random.choice(len(probabilities), p=probabilities)\n next_word = str(int_to_vocab[next_word])\n\n return next_word\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Tests Passed\n"
]
],
[
[
"## Generate TV Script\nThis will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.",
"_____no_output_____"
]
],
[
[
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n # debug \n #print(len(dyn_input[0]))\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n # debug\n #print(probabilities.shape)\n #print(type(probabilities))\n #print(dyn_seq_length-1)\n temp = np.squeeze(probabilities, axis=0)\n #print(temp.shape)\n temp = temp[dyn_seq_length - 1]\n #print(temp.shape)\n \n pred_word = pick_word(temp, int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"INFO:tensorflow:Restoring parameters from ./save\nmoe_szyslak: excuse me, i'm making the cat's here.\nseymour_skinner:(kindly) we want chilly willy! we want chilly willy!\nbarney_gumble: you see your o'problem is that.\nhomer_simpson:(reading)\" strokkur geysir. our name is alva. and it begins, that wants to party. they couldn't bear to another woman. boxing plank's elephants. you can't believe there!\ncarl:(still dancing noises)\nchief_wiggum: now, i'd be gonna get my crap out.\nmoe_szyslak: yeah, let me think about it, homer i'm black enough.\n\n\nlookalike:(peppy) hi, artist, i think that verdict is overturned in the way.\nhomer_simpson:(clears throat) and it is... inclination...\nall:(disgusted) ew...(to marge) eh, son everybody closed, i'll help it.\nmoe_szyslak: hurry, the evening began to homer on! i floated up when we pulled the bar, then, you'll clean and the bubbles?\nhomer_simpson: hey, i rob\n"
]
],
[
[
"# The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of [another dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data). We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\n# Submitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0be81b70f78b40798067733e4019a1f69f54493 | 704,738 | ipynb | Jupyter Notebook | docs/notebooks/Parser.ipynb | rindPHI/fuzzingbook | 39e3359621aeafe915d1a28e1536cad2cd7465ce | [
"MIT"
] | null | null | null | docs/notebooks/Parser.ipynb | rindPHI/fuzzingbook | 39e3359621aeafe915d1a28e1536cad2cd7465ce | [
"MIT"
] | null | null | null | docs/notebooks/Parser.ipynb | rindPHI/fuzzingbook | 39e3359621aeafe915d1a28e1536cad2cd7465ce | [
"MIT"
] | null | null | null | 39.858492 | 1,043 | 0.502331 | [
[
[
"# Parsing Inputs\n\nIn the chapter on [Grammars](Grammars.ipynb), we discussed how grammars can be\nused to represent various languages. We also saw how grammars can be used to\ngenerate strings of the corresponding language. Grammars can also perform the\nreverse. That is, given a string, one can decompose the string into its\nconstituent parts that correspond to the parts of grammar used to generate it\n– the _derivation tree_ of that string. These parts (and parts from other similar\nstrings) can later be recombined using the same grammar to produce new strings.\n\nIn this chapter, we use grammars to parse and decompose a given set of valid seed inputs into their corresponding derivation trees. This structural representation allows us to mutate, crossover, and recombine their parts in order to generate new valid, slightly changed inputs (i.e., fuzz)",
"_____no_output_____"
]
],
[
[
"from bookutils import YouTubeVideo\nYouTubeVideo('2yS9EfBEirE')",
"_____no_output_____"
]
],
[
[
"**Prerequisites**\n\n* You should have read the [chapter on grammars](Grammars.ipynb).\n* An understanding of derivation trees from the [chapter on grammar fuzzer](GrammarFuzzer.ipynb)\n is also required.",
"_____no_output_____"
],
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.Parser import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter introduces `Parser` classes, parsing a string into a _derivation tree_ as introduced in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb). Two important parser classes are provided:\n\n* [Parsing Expression Grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`). These are very efficient, but limited to specific grammar structure. Notably, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed.\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`). These accept any kind of context-free grammars, and explore all parsing alternatives (if any).\n\nUsing any of these is fairly easy, though. First, instantiate them with a grammar:\n\n```python\n>>> from Grammars import US_PHONE_GRAMMAR\n>>> us_phone_parser = EarleyParser(US_PHONE_GRAMMAR)\n```\nThen, use the `parse()` method to retrieve a list of possible derivation trees:\n\n```python\n>>> trees = us_phone_parser.parse(\"(555)987-6543\")\n>>> tree = list(trees)[0]\n>>> display_tree(tree)\n```\n\n\nThese derivation trees can then be used for test generation, notably for mutating and recombining existing inputs.\n\n\n\n",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
],
[
"from typing import Dict, List, Tuple, Collection, Set, Iterable, Generator, cast",
"_____no_output_____"
],
[
"from Fuzzer import Fuzzer # minor dependendcy",
"_____no_output_____"
],
[
"from Grammars import EXPR_GRAMMAR, START_SYMBOL, RE_NONTERMINAL\nfrom Grammars import is_valid_grammar, syntax_diagram, Grammar",
"_____no_output_____"
],
[
"from GrammarFuzzer import GrammarFuzzer, display_tree, tree_to_string, dot_escape\nfrom GrammarFuzzer import DerivationTree",
"_____no_output_____"
],
[
"from ExpectError import ExpectError",
"_____no_output_____"
],
[
"from IPython.display import display",
"_____no_output_____"
],
[
"from Timer import Timer",
"_____no_output_____"
]
],
[
[
"## Why Parsing for Fuzzing?",
"_____no_output_____"
],
[
"Why would one want to parse existing inputs in order to fuzz? Let us illustrate the problem with an example. Here is a simple program that accepts a CSV file of vehicle details and processes this information.",
"_____no_output_____"
]
],
[
[
"def process_inventory(inventory):\n res = []\n for vehicle in inventory.split('\\n'):\n ret = process_vehicle(vehicle)\n res.extend(ret)\n return '\\n'.join(res)",
"_____no_output_____"
]
],
[
[
"The CSV file contains details of one vehicle per line. Each row is processed in `process_vehicle()`.",
"_____no_output_____"
]
],
[
[
"def process_vehicle(vehicle):\n year, kind, company, model, *_ = vehicle.split(',')\n if kind == 'van':\n return process_van(year, company, model)\n\n elif kind == 'car':\n return process_car(year, company, model)\n\n else:\n raise Exception('Invalid entry')",
"_____no_output_____"
]
],
[
[
"Depending on the kind of vehicle, the processing changes.",
"_____no_output_____"
]
],
[
[
"def process_van(year, company, model):\n res = [\"We have a %s %s van from %s vintage.\" % (company, model, year)]\n iyear = int(year)\n if iyear > 2010:\n res.append(\"It is a recent model!\")\n else:\n res.append(\"It is an old but reliable model!\")\n return res",
"_____no_output_____"
],
[
"def process_car(year, company, model):\n res = [\"We have a %s %s car from %s vintage.\" % (company, model, year)]\n iyear = int(year)\n if iyear > 2016:\n res.append(\"It is a recent model!\")\n else:\n res.append(\"It is an old but reliable model!\")\n return res",
"_____no_output_____"
]
],
[
[
"Here is a sample of inputs that the `process_inventory()` accepts.",
"_____no_output_____"
]
],
[
[
"mystring = \"\"\"\\\n1997,van,Ford,E350\n2000,car,Mercury,Cougar\\\n\"\"\"\nprint(process_inventory(mystring))",
"We have a Ford E350 van from 1997 vintage.\nIt is an old but reliable model!\nWe have a Mercury Cougar car from 2000 vintage.\nIt is an old but reliable model!\n"
]
],
[
[
"Let us try to fuzz this program. Given that the `process_inventory()` takes a CSV file, we can write a simple grammar for generating comma separated values, and generate the required CSV rows. For convenience, we fuzz `process_vehicle()` directly.",
"_____no_output_____"
]
],
[
[
"import string",
"_____no_output_____"
],
[
"CSV_GRAMMAR: Grammar = {\n '<start>': ['<csvline>'],\n '<csvline>': ['<items>'],\n '<items>': ['<item>,<items>', '<item>'],\n '<item>': ['<letters>'],\n '<letters>': ['<letter><letters>', '<letter>'],\n '<letter>': list(string.ascii_letters + string.digits + string.punctuation + ' \\t\\n')\n}",
"_____no_output_____"
]
],
[
[
" We need some infrastructure first for viewing the grammar.",
"_____no_output_____"
]
],
[
[
"syntax_diagram(CSV_GRAMMAR)",
"start\n"
]
],
[
[
"We generate `1000` values, and evaluate the `process_vehicle()` with each.",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ntrials = 1000\nvalid: List[str] = []\ntime = 0\nfor i in range(trials):\n with Timer() as t:\n vehicle_info = gf.fuzz()\n try:\n process_vehicle(vehicle_info)\n valid.append(vehicle_info)\n except:\n pass\n time += t.elapsed_time()\nprint(\"%d valid strings, that is GrammarFuzzer generated %f%% valid entries from %d inputs\" %\n (len(valid), len(valid) * 100.0 / trials, trials))\nprint(\"Total time of %f seconds\" % time)",
"0 valid strings, that is GrammarFuzzer generated 0.000000% valid entries from 1000 inputs\nTotal time of 8.500033 seconds\n"
]
],
[
[
"This is obviously not working. But why?",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ntrials = 10\ntime = 0\nfor i in range(trials):\n vehicle_info = gf.fuzz()\n try:\n print(repr(vehicle_info), end=\"\")\n process_vehicle(vehicle_info)\n except Exception as e:\n print(\"\\t\", e)\n else:\n print()",
"'9w9J\\'/,LU<\"l,|,Y,Zv)Amvx,c\\n'\t Invalid entry\n'(n8].H7,qolS'\t not enough values to unpack (expected at least 4, got 2)\n'\\nQoLWQ,jSa'\t not enough values to unpack (expected at least 4, got 2)\n'K1,\\n,RE,fq,%,,sT+aAb'\t Invalid entry\n\"m,d,,8j4'),-yQ,B7\"\t Invalid entry\n'g4,s1\\t[}{.,M,<,\\nzd,.am'\t Invalid entry\n',Z[,z,c,#x1,gc.F'\t Invalid entry\n'pWs,rT`,R'\t not enough values to unpack (expected at least 4, got 3)\n'iN,br%,Q,R'\t Invalid entry\n'ol,\\nH<\\tn,^#,=A'\t Invalid entry\n"
]
],
[
[
"None of the entries will get through unless the fuzzer can produce either `van` or `car`.\nIndeed, the reason is that the grammar itself does not capture the complete information about the format. So here is another idea. We modify the `GrammarFuzzer` to know a bit about our format.",
"_____no_output_____"
]
],
[
[
"import copy",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"class PooledGrammarFuzzer(GrammarFuzzer):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._node_cache = {}\n\n def update_cache(self, key, values):\n self._node_cache[key] = values\n\n def expand_node_randomly(self, node):\n (symbol, children) = node\n assert children is None\n if symbol in self._node_cache:\n if random.randint(0, 1) == 1:\n return super().expand_node_randomly(node)\n return copy.deepcopy(random.choice(self._node_cache[symbol]))\n return super().expand_node_randomly(node)",
"_____no_output_____"
]
],
[
[
"Let us try again!",
"_____no_output_____"
]
],
[
[
"gf = PooledGrammarFuzzer(CSV_GRAMMAR, min_nonterminals=4)\ngf.update_cache('<item>', [\n ('<item>', [('car', [])]),\n ('<item>', [('van', [])]),\n])\ntrials = 10\ntime = 0\nfor i in range(trials):\n vehicle_info = gf.fuzz()\n try:\n print(repr(vehicle_info), end=\"\")\n process_vehicle(vehicle_info)\n except Exception as e:\n print(\"\\t\", e)\n else:\n print()",
"',h,van,|'\t Invalid entry\n'M,w:K,car,car,van'\t Invalid entry\n'J,?Y,van,van,car,J,~D+'\t Invalid entry\n'S4,car,car,o'\t invalid literal for int() with base 10: 'S4'\n'2*-,van'\t not enough values to unpack (expected at least 4, got 2)\n'van,%,5,]'\t Invalid entry\n'van,G3{y,j,h:'\t Invalid entry\n'$0;o,M,car,car'\t Invalid entry\n'2d,f,e'\t not enough values to unpack (expected at least 4, got 3)\n'/~NE,car,car'\t not enough values to unpack (expected at least 4, got 3)\n"
]
],
[
[
"At least we are getting somewhere! It would be really nice if _we could incorporate what we know about the sample data in our fuzzer._ In fact, it would be nice if we could _extract_ the template and valid values from samples, and use them in our fuzzing. How do we do that? The quick answer to this question is: Use a *parser*. ",
"_____no_output_____"
],
[
"## Using a Parser\n\nGenerally speaking, a _parser_ is the part of a a program that processes (structured) input. The parsers we discuss in this chapter transform an input string into a _derivation tree_ (discussed in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb)). From a user's perspective, all it takes to parse an input is two steps: \n\n1. Initialize the parser with a grammar, as in\n```\nparser = Parser(grammar)\n```\n\n2. Using the parser to retrieve a list of derivation trees:\n\n```python\ntrees = parser.parse(input)\n```\n\nOnce we have parsed a tree, we can use it just as the derivation trees produced from grammar fuzzing.\n\nWe discuss a number of such parsers, in particular\n* [parsing expression grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`), which are very efficient, but limited to specific grammar structure; and\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`), which accept any kind of context-free grammars.\n\nIf you just want to _use_ parsers (say, because your main focus is testing), you can just stop here and move on [to the next chapter](LangFuzzer.ipynb), where we learn how to make use of parsed inputs to mutate and recombine them. If you want to _understand_ how parsers work, though, this chapter is right for you.",
"_____no_output_____"
],
[
"## An Ad Hoc Parser\n\nAs we saw in the previous section, programmers often have to extract parts of data that obey certain rules. For example, for *CSV* files, each element in a row is separated by *commas*, and multiple raws are used to store the data.",
"_____no_output_____"
],
[
"To extract the information, we write an ad hoc parser `simple_parse_csv()`.",
"_____no_output_____"
]
],
[
[
"def simple_parse_csv(mystring: str) -> DerivationTree:\n children: List[DerivationTree] = []\n tree = (START_SYMBOL, children)\n for i, line in enumerate(mystring.split('\\n')):\n children.append((\"record %d\" % i, [(cell, [])\n for cell in line.split(',')]))\n return tree",
"_____no_output_____"
]
],
[
[
"We also change the default orientation of the graph to *left to right* rather than *top to bottom* for easier viewing using `lr_graph()`.",
"_____no_output_____"
]
],
[
[
"def lr_graph(dot):\n dot.attr('node', shape='plain')\n dot.graph_attr['rankdir'] = 'LR'",
"_____no_output_____"
]
],
[
[
"The `display_tree()` shows the structure of our CSV file after parsing.",
"_____no_output_____"
]
],
[
[
"tree = simple_parse_csv(mystring)\ndisplay_tree(tree, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"This is of course simple. What if we encounter slightly more complexity? Again, another example from the Wikipedia.",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1997,Ford,E350,\"ac, abs, moon\",3000.00\\\n'''\nprint(mystring)",
"1997,Ford,E350,\"ac, abs, moon\",3000.00\n"
]
],
[
[
"We define a new annotation method `highlight_node()` to mark the nodes that are interesting.",
"_____no_output_____"
]
],
[
[
"def highlight_node(predicate):\n def hl_node(dot, nid, symbol, ann):\n if predicate(dot, nid, symbol, ann):\n dot.node(repr(nid), dot_escape(symbol), fontcolor='red')\n else:\n dot.node(repr(nid), dot_escape(symbol))\n return hl_node",
"_____no_output_____"
]
],
[
[
"Using `highlight_node()` we can highlight particular nodes that we were wrongly parsed.",
"_____no_output_____"
]
],
[
[
"tree = simple_parse_csv(mystring)\nbad_nodes = {5, 6, 7, 12, 13, 20, 22, 23, 24, 25}",
"_____no_output_____"
],
[
"def hl_predicate(_d, nid, _s, _a): return nid in bad_nodes",
"_____no_output_____"
],
[
"highlight_err_node = highlight_node(hl_predicate)\ndisplay_tree(tree, log=False, node_attr=highlight_err_node,\n graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"The marked nodes indicate where our parsing went wrong. We can of course extend our parser to understand quotes. First we define some of the helper functions `parse_quote()`, `find_comma()` and `comma_split()`",
"_____no_output_____"
]
],
[
[
"def parse_quote(string, i):\n v = string[i + 1:].find('\"')\n return v + i + 1 if v >= 0 else -1",
"_____no_output_____"
],
[
"def find_comma(string, i):\n slen = len(string)\n while i < slen:\n if string[i] == '\"':\n i = parse_quote(string, i)\n if i == -1:\n return -1\n if string[i] == ',':\n return i\n i += 1\n return -1",
"_____no_output_____"
],
[
"def comma_split(string):\n slen = len(string)\n i = 0\n while i < slen:\n c = find_comma(string, i)\n if c == -1:\n yield string[i:]\n return\n else:\n yield string[i:c]\n i = c + 1",
"_____no_output_____"
]
],
[
[
"We can update our `parse_csv()` procedure to use our advanced quote parser.",
"_____no_output_____"
]
],
[
[
"def parse_csv(mystring):\n children = []\n tree = (START_SYMBOL, children)\n for i, line in enumerate(mystring.split('\\n')):\n children.append((\"record %d\" % i, [(cell, [])\n for cell in comma_split(line)]))\n return tree",
"_____no_output_____"
]
],
[
[
"Our new `parse_csv()` can now handle quotes correctly.",
"_____no_output_____"
]
],
[
[
"tree = parse_csv(mystring)\ndisplay_tree(tree, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"That of course does not survive long:",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1999,Chevy,\"Venture \\\\\"Extended Edition, Very Large\\\\\"\",,5000.00\\\n'''\nprint(mystring)",
"1999,Chevy,\"Venture \\\"Extended Edition, Very Large\\\"\",,5000.00\n"
]
],
[
[
"A few embedded quotes are sufficient to confuse our parser again.",
"_____no_output_____"
]
],
[
[
"tree = parse_csv(mystring)\nbad_nodes = {4, 5}\ndisplay_tree(tree, node_attr=highlight_err_node, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"Here is another record from that CSV file:",
"_____no_output_____"
]
],
[
[
"mystring = '''\\\n1996,Jeep,Grand Cherokee,\"MUST SELL!\nair, moon roof, loaded\",4799.00\n'''\nprint(mystring)",
"1996,Jeep,Grand Cherokee,\"MUST SELL!\nair, moon roof, loaded\",4799.00\n\n"
],
[
"tree = parse_csv(mystring)\nbad_nodes = {5, 6, 7, 8, 9, 10}\ndisplay_tree(tree, node_attr=highlight_err_node, graph_attr=lr_graph)",
"_____no_output_____"
]
],
[
[
"Fixing this would require modifying both inner `parse_quote()` and the outer `parse_csv()` procedures. We note that each of these features actually documented in the CSV [RFC 4180](https://tools.ietf.org/html/rfc4180)",
"_____no_output_____"
],
[
"Indeed, each additional improvement falls apart even with a little extra complexity. The problem becomes severe when one encounters recursive expressions. For example, JSON is a common alternative to CSV files for saving data. Similarly, one may have to parse data from an HTML table instead of a CSV file if one is getting the data from the web.\n\nOne might be tempted to fix it with a little more ad hoc parsing, with a bit of *regular expressions* thrown in. However, that is the [path to insanity](https://stackoverflow.com/a/1732454).",
"_____no_output_____"
],
[
"It is here that _formal parsers_ shine. The main idea is that, any given set of strings belong to a language, and these languages can be specified by their grammars (as we saw in the [chapter on grammars](Grammars.ipynb)). The great thing about grammars is that they can be _composed_. That is, one can introduce finer and finer details into an internal structure without affecting the external structure, and similarly, one can change the external structure without much impact on the internal structure.",
"_____no_output_____"
],
[
"## Grammars in Parsing\n\nWe briefly describe grammars in the context of parsing.",
"_____no_output_____"
],
[
"### Excursion: Grammars and Derivation Trees",
"_____no_output_____"
],
[
"A grammar, as you have read from the [chapter on grammars](Grammars.ipynb) is a set of _rules_ that explain how the start symbol can be expanded. Each rule has a name, also called a _nonterminal_, and a set of _alternative choices_ in how the nonterminal can be expanded.",
"_____no_output_____"
]
],
[
[
"A1_GRAMMAR: Grammar = {\n \"<start>\": [\"<expr>\"],\n \"<expr>\": [\"<expr>+<expr>\", \"<expr>-<expr>\", \"<integer>\"],\n \"<integer>\": [\"<digit><integer>\", \"<digit>\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\n}",
"_____no_output_____"
],
[
"syntax_diagram(A1_GRAMMAR)",
"start\n"
]
],
[
[
"In the above expression, the rule `<expr> : [<expr>+<expr>,<expr>-<expr>,<integer>]` corresponds to how the nonterminal `<expr>` might be expanded. The expression `<expr>+<expr>` corresponds to one of the alternative choices. We call this an _alternative_ expansion for the nonterminal `<expr>`. Finally, in an expression `<expr>+<expr>`, each of `<expr>`, `+`, and `<expr>` are _symbols_ in that expansion. A symbol could be either a nonterminal or a terminal symbol based on whether its expansion is available in the grammar.",
"_____no_output_____"
],
[
"Here is a string that represents an arithmetic expression that we would like to parse, which is specified by the grammar above:",
"_____no_output_____"
]
],
[
[
"mystring = '1+2'",
"_____no_output_____"
]
],
[
[
"The _derivation tree_ for our expression from this grammar is given by:",
"_____no_output_____"
]
],
[
[
"tree = ('<start>', [('<expr>',\n [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('2',\n [])])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"While a grammar can be used to specify a given language, there could be multiple\ngrammars that correspond to the same language. For example, here is another \ngrammar to describe the same addition expression.",
"_____no_output_____"
]
],
[
[
"A2_GRAMMAR: Grammar = {\n \"<start>\": [\"<expr>\"],\n \"<expr>\": [\"<integer><expr_>\"],\n \"<expr_>\": [\"+<expr>\", \"-<expr>\", \"\"],\n \"<integer>\": [\"<digit><integer_>\"],\n \"<integer_>\": [\"<integer>\", \"\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]\n}",
"_____no_output_____"
],
[
"syntax_diagram(A2_GRAMMAR)",
"start\n"
]
],
[
[
"The corresponding derivation tree is given by:",
"_____no_output_____"
]
],
[
[
"tree = ('<start>', [('<expr>', [('<integer>', [('<digit>', [('1', [])]),\n ('<integer_>', [])]),\n ('<expr_>', [('+', []),\n ('<expr>',\n [('<integer>',\n [('<digit>', [('2', [])]),\n ('<integer_>', [])]),\n ('<expr_>', [])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"Indeed, there could be different classes of grammars that\ndescribe the same language. For example, the first grammar `A1_GRAMMAR`\nis a grammar that sports both _right_ and _left_ recursion, while the\nsecond grammar `A2_GRAMMAR` does not have left recursion in the\nnonterminals in any of its productions, but contains _epsilon_ productions.\n(An epsilon production is a production that has empty string in its right\nhand side.)",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"### Excursion: Recursion",
"_____no_output_____"
],
[
"You would have noticed that we reuse the term `<expr>` in its own definition. Using the same nonterminal in its own definition is called *recursion*. There are two specific kinds of recursion one should be aware of in parsing, as we see in the next section.",
"_____no_output_____"
],
[
"#### Recursion\n\nA grammar is _left recursive_ if any of its nonterminals are left recursive,\nand a nonterminal is directly left-recursive if the left-most symbol of\nany of its productions is itself.",
"_____no_output_____"
]
],
[
[
"LR_GRAMMAR: Grammar = {\n '<start>': ['<A>'],\n '<A>': ['<A>a', ''],\n}",
"_____no_output_____"
],
[
"syntax_diagram(LR_GRAMMAR)",
"start\n"
],
[
"mystring = 'aaaaaa'\ndisplay_tree(\n ('<start>', [('<A>', [('<A>', [('<A>', []), ('a', [])]), ('a', [])]),\n ('a', [])]))",
"_____no_output_____"
]
],
[
[
"A grammar is indirectly left-recursive if any\nof the left-most symbols can be expanded using their definitions to\nproduce the nonterminal as the left-most symbol of the expansion. The left\nrecursion is called a _hidden-left-recursion_ if during the series of\nexpansions of a nonterminal, one reaches a rule where the rule contains\nthe same nonterminal after a prefix of other symbols, and these symbols can\nderive the empty string. For example, in `A1_GRAMMAR`, `<integer>` will be\nconsidered hidden-left recursive if `<digit>` could derive an empty string.\n\nRight recursive grammars are defined similarly.\nBelow is the derivation tree for the right recursive grammar that represents the same\nlanguage as that of `LR_GRAMMAR`.",
"_____no_output_____"
]
],
[
[
"RR_GRAMMAR: Grammar = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', ''],\n}",
"_____no_output_____"
],
[
"syntax_diagram(RR_GRAMMAR)",
"start\n"
],
[
"display_tree(('<start>', [('<A>', [\n ('a', []), ('<A>', [('a', []), ('<A>', [('a', []), ('<A>', [])])])])]\n ))",
"_____no_output_____"
]
],
[
[
"#### Ambiguity\n\nTo complicate matters further, there could be\nmultiple derivation trees – also called _parses_ – corresponding to the\nsame string from the same grammar. For example, a string `1+2+3` can be parsed\nin two ways as we see below using the `A1_GRAMMAR`",
"_____no_output_____"
]
],
[
[
"mystring = '1+2+3'\ntree = ('<start>',\n [('<expr>',\n [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>',\n [('<digit>', [('2', [])])])])]), ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])])])\nassert mystring == tree_to_string(tree)\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"tree = ('<start>',\n [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]),\n ('+', []),\n ('<expr>',\n [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]),\n ('+', []),\n ('<expr>', [('<integer>', [('<digit>', [('3',\n [])])])])])])])\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"There are many ways to resolve ambiguities. One approach taken by *Parsing Expression Grammars* explained in the next section is to specify a particular order of resolution, and choose the first one. Another approach is to simply return all possible derivation trees, which is the approach taken by *Earley parser* we develop later.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"## A Parser Class",
"_____no_output_____"
],
[
"Next, we develop different parsers. To do that, we define a minimal interface for parsing that is obeyed by all parsers. There are two approaches to parsing a string using a grammar.\n\n1. The traditional approach is to use a *lexer* (also called a *tokenizer* or a *scanner*) to first tokenize the incoming string, and feed the grammar one token at a time. The lexer is typically a smaller parser that accepts a *regular language*. The advantage of this approach is that the grammar used by the parser can eschew the details of tokenization. Further, one gets a shallow derivation tree at the end of the parsing which can be directly used for generating the *Abstract Syntax Tree*.\n2. The second approach is to use a tree pruner after the complete parse. With this approach, one uses a grammar that incorporates complete details of the syntax. Next, the nodes corresponding to tokens are pruned and replaced with their corresponding strings as leaf nodes. The utility of this approach is that the parser is more powerful, and further there is no artificial distinction between *lexing* and *parsing*.\n\nIn this chapter, we use the second approach. This approach is implemented in the `prune_tree` method.",
"_____no_output_____"
],
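[
"For contrast, here is a minimal sketch of what the first, lexer-based approach could look like. The token definitions below are purely hypothetical and are not used anywhere else in this chapter; they merely illustrate how a scanner turns a string into a stream of tokens that a grammar over tokens could then consume.\n\n```python\nimport re\n\n# Hypothetical token definitions for simple arithmetic expressions\nTOKEN_SPEC = [('NUMBER', '[0-9]+'), ('PLUS', '[+]'), ('MINUS', '[-]')]\n\ndef tokenize(text):\n    # Combine the token patterns into a single regular expression with named groups\n    scanner = re.compile('|'.join('(?P<%s>%s)' % pair for pair in TOKEN_SPEC))\n    for match in scanner.finditer(text):\n        # Yield (token name, matched text) pairs\n        yield match.lastgroup, match.group()\n\nlist(tokenize('1+23-4'))\n# [('NUMBER', '1'), ('PLUS', '+'), ('NUMBER', '23'), ('MINUS', '-'), ('NUMBER', '4')]\n```",
"_____no_output_____"
],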
[
"The *Parser* class we define below provides the minimal interface. The main methods that need to be implemented by the classes implementing this interface are `parse_prefix` and `parse`. The `parse_prefix` returns a tuple, which contains the index until which parsing was completed successfully, and the parse forest until that index. The method `parse` returns a list of derivation trees if the parse was successful.",
"_____no_output_____"
]
],
[
[
"class Parser:\n \"\"\"Base class for parsing.\"\"\"\n\n def __init__(self, grammar: Grammar, *,\n start_symbol: str = START_SYMBOL,\n log: bool = False,\n coalesce: bool = True,\n tokens: Set[str] = set()) -> None:\n \"\"\"Constructor.\n `grammar` is the grammar to be used for parsing.\n Keyword arguments:\n `start_symbol` is the start symbol (default: '<start>').\n `log` enables logging (default: False).\n `coalesce` defines if tokens should be coalesced (default: True).\n `tokens`, if set, is a set of tokens to be used.\"\"\"\n self._grammar = grammar\n self._start_symbol = start_symbol\n self.log = log\n self.coalesce_tokens = coalesce\n self.tokens = tokens\n\n def grammar(self) -> Grammar:\n \"\"\"Return the grammar of this parser.\"\"\"\n return self._grammar\n\n def start_symbol(self) -> str:\n \"\"\"Return the start symbol of this parser.\"\"\"\n return self._start_symbol\n\n def parse_prefix(self, text: str) -> Tuple[int, Iterable[DerivationTree]]:\n \"\"\"Return pair (cursor, forest) for longest prefix of text. \n To be defined in subclasses.\"\"\"\n raise NotImplementedError\n\n def parse(self, text: str) -> Iterable[DerivationTree]:\n \"\"\"Parse `text` using the grammar. \n Return an iterable of parse trees.\"\"\"\n cursor, forest = self.parse_prefix(text)\n if cursor < len(text):\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n return [self.prune_tree(tree) for tree in forest]\n\n def parse_on(self, text: str, start_symbol: str) -> Generator:\n old_start = self._start_symbol\n try:\n self._start_symbol = start_symbol\n yield from self.parse(text)\n finally:\n self._start_symbol = old_start\n\n def coalesce(self, children: List[DerivationTree]) -> List[DerivationTree]:\n last = ''\n new_lst: List[DerivationTree] = []\n for cn, cc in children:\n if cn not in self._grammar:\n last += cn\n else:\n if last:\n new_lst.append((last, []))\n last = ''\n new_lst.append((cn, cc))\n if last:\n new_lst.append((last, []))\n return new_lst\n\n def prune_tree(self, tree: DerivationTree) -> DerivationTree:\n name, children = tree\n assert isinstance(children, list)\n\n if self.coalesce_tokens:\n children = self.coalesce(cast(List[DerivationTree], children))\n if name in self.tokens:\n return (name, [(tree_to_string(tree), [])])\n else:\n return (name, [self.prune_tree(c) for c in children])",
"_____no_output_____"
]
],
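[
[
"# A quick sketch of what `coalesce()` does: adjacent terminal children are\n# merged into a single string node, while nonterminal children are kept as-is.\nParser(EXPR_GRAMMAR).coalesce([('1', []), ('+', []), ('<expr>', [])])",
"_____no_output_____"
]
],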
[
[
"### Excursion: Canonical Grammars",
"_____no_output_____"
],
[
"The `EXPR_GRAMMAR` we import from the [chapter on grammars](Grammars.ipynb) is oriented towards generation. In particular, the production rules are stored as strings. We need to massage this representation a little to conform to a _canonical representation_ where each token in a rule is represented separately. The `canonical` format uses separate tokens to represent each symbol in an expansion.",
"_____no_output_____"
]
],
[
[
"CanonicalGrammar = Dict[str, List[List[str]]]",
"_____no_output_____"
],
[
"import re",
"_____no_output_____"
],
[
"def single_char_tokens(grammar: Grammar) -> Dict[str, List[List[Collection[str]]]]:\n g_ = {}\n for key in grammar:\n rules_ = []\n for rule in grammar[key]:\n rule_ = []\n for token in rule:\n if token in grammar:\n rule_.append(token)\n else:\n rule_.extend(token)\n rules_.append(rule_)\n g_[key] = rules_\n return g_",
"_____no_output_____"
],
[
"def canonical(grammar: Grammar) -> CanonicalGrammar:\n def split(expansion):\n if isinstance(expansion, tuple):\n expansion = expansion[0]\n\n return [token for token in re.split(\n RE_NONTERMINAL, expansion) if token]\n\n return {\n k: [split(expression) for expression in alternatives]\n for k, alternatives in grammar.items()\n }",
"_____no_output_____"
],
[
"CE_GRAMMAR: CanonicalGrammar = canonical(EXPR_GRAMMAR)\nCE_GRAMMAR",
"_____no_output_____"
]
],
[
[
"We also provide a convenience method for easier display of canonical grammars.",
"_____no_output_____"
]
],
[
[
"def recurse_grammar(grammar, key, order):\n rules = sorted(grammar[key])\n old_len = len(order)\n for rule in rules:\n for token in rule:\n if token not in grammar: continue\n if token not in order:\n order.append(token)\n new = order[old_len:]\n for ckey in new:\n recurse_grammar(grammar, ckey, order)",
"_____no_output_____"
],
[
"def show_grammar(grammar, start_symbol=START_SYMBOL):\n order = [start_symbol]\n recurse_grammar(grammar, start_symbol, order)\n return {k: sorted(grammar[k]) for k in order}",
"_____no_output_____"
],
[
"show_grammar(CE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"We provide a way to revert a canonical expression.",
"_____no_output_____"
]
],
[
[
"def non_canonical(grammar):\n new_grammar = {}\n for k in grammar:\n rules = grammar[k]\n new_rules = []\n for rule in rules:\n new_rules.append(''.join(rule))\n new_grammar[k] = new_rules\n return new_grammar",
"_____no_output_____"
],
[
"non_canonical(CE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"It is easier to work with the `canonical` representation during parsing. Hence, we update our parser class to store the `canonical` representation also.",
"_____no_output_____"
]
],
[
[
"class Parser(Parser):\n def __init__(self, grammar, **kwargs):\n self._start_symbol = kwargs.get('start_symbol', START_SYMBOL)\n self.log = kwargs.get('log', False)\n self.tokens = kwargs.get('tokens', set())\n self.coalesce_tokens = kwargs.get('coalesce', True)\n canonical_grammar = kwargs.get('canonical', False)\n if canonical_grammar:\n self.cgrammar = single_char_tokens(grammar)\n self._grammar = non_canonical(grammar)\n else:\n self._grammar = dict(grammar)\n self.cgrammar = single_char_tokens(canonical(grammar))\n # we do not require a single rule for the start symbol\n if len(grammar.get(self._start_symbol, [])) != 1:\n self.cgrammar['<>'] = [[self._start_symbol]]",
"_____no_output_____"
]
],
[
[
"We update the `prune_tree()` to account for the phony start symbol if it was insserted.",
"_____no_output_____"
]
],
[
[
"class Parser(Parser):\n def prune_tree(self, tree):\n name, children = tree\n if name == '<>':\n assert len(children) == 1\n return self.prune_tree(children[0])\n if self.coalesce_tokens:\n children = self.coalesce(children)\n if name in self.tokens:\n return (name, [(tree_to_string(tree), [])])\n else:\n return (name, [self.prune_tree(c) for c in children])",
"_____no_output_____"
]
],
[
[
"### End of Excursion",
"_____no_output_____"
],
[
"## Parsing Expression Grammars\n\nA _[Parsing Expression Grammar](http://bford.info/pub/lang/peg)_ (*PEG*) \\cite{Ford2004} is a type of _recognition based formal grammar_ that specifies the sequence of steps to take to parse a given string.\nA _parsing expression grammar_ is very similar to a _context-free grammar_ (*CFG*) such as the ones we saw in the [chapter on grammars](Grammars.ipynb). As in a CFG, a parsing expression grammar is represented by a set of nonterminals and corresponding alternatives representing how to match each. For example, here is a PEG that matches `a` or `b`.",
"_____no_output_____"
]
],
[
[
"PEG1 = {\n '<start>': ['a', 'b']\n}",
"_____no_output_____"
]
],
[
[
"However, unlike the _CFG_, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed. For example, the below _PEG_ can match `ab` but not `abc` unlike a _CFG_ which will match both. (We call the sequence of ordered choice expressions *choice expressions* rather than alternatives to make the distinction from _CFG_ clear.)",
"_____no_output_____"
]
],
[
[
"PEG2 = {\n '<start>': ['ab', 'abc']\n}",
"_____no_output_____"
]
],
[
[
"Each choice in a _choice expression_ represents a rule on how to satisfy that particular choice. The choice is a sequence of symbols (terminals and nonterminals) that are matched against a given text as in a _CFG_.",
"_____no_output_____"
],
[
"Beyond the syntax of grammar definitions we have seen so far, a _PEG_ can also contain a few additional elements. See the exercises at the end of the chapter for additional information.\n\nThe PEGs model the typical practice in handwritten recursive descent parsers, and hence it may be considered more intuitive to understand.",
"_____no_output_____"
],
[
"### The Packrat Parser for Predicate Expression Grammars\n\nShort of hand rolling a parser, _Packrat_ parsing is one of the simplest parsing techniques, and is one of the techniques for parsing PEGs.\nThe _Packrat_ parser is so named because it tries to cache all results from simpler problems in the hope that these solutions can be used to avoid re-computation later. We develop a minimal _Packrat_ parser next.",
"_____no_output_____"
],
[
"We derive from the `Parser` base class first, and we accept the text to be parsed in the `parse()` method, which in turn calls `unify_key()` with the `start_symbol`.\n\n__Note.__ While our PEG parser can produce only a single unambiguous parse tree, other parsers can produce multiple parses for ambiguous grammars. Hence, we return a list of trees (in this case with a single element).",
"_____no_output_____"
]
],
[
[
"class PEGParser(Parser):\n def parse_prefix(self, text):\n cursor, tree = self.unify_key(self.start_symbol(), text, 0)\n return cursor, [tree]",
"_____no_output_____"
]
],
[
[
"### Excursion: Implementing `PEGParser`",
"_____no_output_____"
],
[
"#### Unify Key\nThe `unify_key()` algorithm is simple. If given a terminal symbol, it tries to match the symbol with the current position in the text. If the symbol and text match, it returns successfully with the new parse index `at`.\n\nIf on the other hand, it was given a nonterminal, it retrieves the choice expression corresponding to the key, and tries to match each choice *in order* using `unify_rule()`. If **any** of the rules succeed in being unified with the given text, the parse is considered a success, and we return with the new parse index returned by `unify_rule()`.",
"_____no_output_____"
]
],
[
[
"class PEGParser(PEGParser):\n \"\"\"Packrat parser for Parsing Expression Grammars (PEGs).\"\"\"\n\n def unify_key(self, key, text, at=0):\n if self.log:\n print(\"unify_key: %s with %s\" % (repr(key), repr(text[at:])))\n if key not in self.cgrammar:\n if text[at:].startswith(key):\n return at + len(key), (key, [])\n else:\n return at, None\n for rule in self.cgrammar[key]:\n to, res = self.unify_rule(rule, text, at)\n if res is not None:\n return (to, (key, res))\n return 0, None",
"_____no_output_____"
],
[
"mystring = \"1\"\npeg = PEGParser(EXPR_GRAMMAR, log=True)\npeg.unify_key('1', mystring)",
"unify_key: '1' with '1'\n"
],
[
"mystring = \"2\"\npeg.unify_key('1', mystring)",
"unify_key: '1' with '2'\n"
]
],
[
[
"#### Unify Rule\n\nThe `unify_rule()` method is similar. It retrieves the tokens corresponding to the rule that it needs to unify with the text, and calls `unify_key()` on them in sequence. If **all** tokens are successfully unified with the text, the parse is a success.",
"_____no_output_____"
]
],
[
[
"class PEGParser(PEGParser):\n def unify_rule(self, rule, text, at):\n if self.log:\n print('unify_rule: %s with %s' % (repr(rule), repr(text[at:])))\n results = []\n for token in rule:\n at, res = self.unify_key(token, text, at)\n if res is None:\n return at, None\n results.append(res)\n return at, results",
"_____no_output_____"
],
[
"mystring = \"0\"\npeg = PEGParser(EXPR_GRAMMAR, log=True)\npeg.unify_rule(peg.cgrammar['<digit>'][0], mystring, 0)",
"unify_rule: ['0'] with '0'\nunify_key: '0' with '0'\n"
],
[
"mystring = \"12\"\npeg.unify_rule(peg.cgrammar['<integer>'][0], mystring, 0)",
"unify_rule: ['<digit>', '<integer>'] with '12'\nunify_key: '<digit>' with '12'\nunify_rule: ['0'] with '12'\nunify_key: '0' with '12'\nunify_rule: ['1'] with '12'\nunify_key: '1' with '12'\nunify_key: '<integer>' with '2'\nunify_rule: ['<digit>', '<integer>'] with '2'\nunify_key: '<digit>' with '2'\nunify_rule: ['0'] with '2'\nunify_key: '0' with '2'\nunify_rule: ['1'] with '2'\nunify_key: '1' with '2'\nunify_rule: ['2'] with '2'\nunify_key: '2' with '2'\nunify_key: '<integer>' with ''\nunify_rule: ['<digit>', '<integer>'] with ''\nunify_key: '<digit>' with ''\nunify_rule: ['0'] with ''\nunify_key: '0' with ''\nunify_rule: ['1'] with ''\nunify_key: '1' with ''\nunify_rule: ['2'] with ''\nunify_key: '2' with ''\nunify_rule: ['3'] with ''\nunify_key: '3' with ''\nunify_rule: ['4'] with ''\nunify_key: '4' with ''\nunify_rule: ['5'] with ''\nunify_key: '5' with ''\nunify_rule: ['6'] with ''\nunify_key: '6' with ''\nunify_rule: ['7'] with ''\nunify_key: '7' with ''\nunify_rule: ['8'] with ''\nunify_key: '8' with ''\nunify_rule: ['9'] with ''\nunify_key: '9' with ''\nunify_rule: ['<digit>'] with ''\nunify_key: '<digit>' with ''\nunify_rule: ['0'] with ''\nunify_key: '0' with ''\nunify_rule: ['1'] with ''\nunify_key: '1' with ''\nunify_rule: ['2'] with ''\nunify_key: '2' with ''\nunify_rule: ['3'] with ''\nunify_key: '3' with ''\nunify_rule: ['4'] with ''\nunify_key: '4' with ''\nunify_rule: ['5'] with ''\nunify_key: '5' with ''\nunify_rule: ['6'] with ''\nunify_key: '6' with ''\nunify_rule: ['7'] with ''\nunify_key: '7' with ''\nunify_rule: ['8'] with ''\nunify_key: '8' with ''\nunify_rule: ['9'] with ''\nunify_key: '9' with ''\nunify_rule: ['<digit>'] with '2'\nunify_key: '<digit>' with '2'\nunify_rule: ['0'] with '2'\nunify_key: '0' with '2'\nunify_rule: ['1'] with '2'\nunify_key: '1' with '2'\nunify_rule: ['2'] with '2'\nunify_key: '2' with '2'\n"
],
[
"mystring = \"1 + 2\"\npeg = PEGParser(EXPR_GRAMMAR, log=False)\npeg.parse(mystring)",
"_____no_output_____"
]
],
[
[
"The two methods are mutually recursive, and given that `unify_key()` tries each alternative until it succeeds, `unify_key` can be called multiple times with the same arguments. Hence, it is important to memoize the results of `unify_key`. Python provides a simple decorator `lru_cache` for memoizing any function call that has hashable arguments. We add that to our implementation so that repeated calls to `unify_key()` with the same argument get cached results.\n\nThis memoization gives the algorithm its name – _Packrat_.",
"_____no_output_____"
]
],
[
[
"from functools import lru_cache",
"_____no_output_____"
],
[
"class PEGParser(PEGParser):\n @lru_cache(maxsize=None)\n def unify_key(self, key, text, at=0):\n if key not in self.cgrammar:\n if text[at:].startswith(key):\n return at + len(key), (key, [])\n else:\n return at, None\n for rule in self.cgrammar[key]:\n to, res = self.unify_rule(rule, text, at)\n if res is not None:\n return (to, (key, res))\n return 0, None",
"_____no_output_____"
]
],
[
[
"We wrap initialization and calling of `PEGParser` in a method `parse()` already implemented in the `Parser` base class that accepts the text to be parsed along with the grammar.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"Here are a few examples of our parser in action.",
"_____no_output_____"
]
],
[
[
"mystring = \"1 + (2 * 3)\"\npeg = PEGParser(EXPR_GRAMMAR)\nfor tree in peg.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
],
[
"mystring = \"1 * (2 + 3.35)\"\nfor tree in peg.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
]
],
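[
[
"# A small sketch revisiting the ordered-choice example from above: with PEG\n# semantics, PEG2 accepts 'ab' but rejects 'abc', because the first matching\n# alternative ('ab') is committed to.\npeg2 = PEGParser(PEG2)\nprint([tree_to_string(t) for t in peg2.parse('ab')])\nwith ExpectError():\n    peg2.parse('abc')",
"_____no_output_____"
]
],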
[
[
"One should be aware that while the grammar looks like a *CFG*, the language described by a *PEG* may be different. Indeed, only *LL(1)* grammars are guaranteed to represent the same language for both PEGs and other parsers. Behavior of PEGs for other classes of grammars could be surprising \\cite{redziejowski2008}. ",
"_____no_output_____"
],
[
"## Parsing Context-Free Grammars",
"_____no_output_____"
],
[
"### Problems with PEG\nWhile _PEGs_ are simple at first sight, their behavior in some cases might be a bit unintuitive. For example, here is an example \\cite{redziejowski2008}:",
"_____no_output_____"
]
],
[
[
"PEG_SURPRISE: Grammar = {\n \"<A>\": [\"a<A>a\", \"aa\"]\n}",
"_____no_output_____"
]
],
[
[
"When interpreted as a *CFG* and used as a string generator, it will produce strings of the form `aa, aaaa, aaaaaa` that is, it produces strings where the number of `a` is $ 2*n $ where $ n > 0 $.",
"_____no_output_____"
]
],
[
[
"strings = []\nfor nn in range(4):\n f = GrammarFuzzer(PEG_SURPRISE, start_symbol='<A>')\n tree = ('<A>', None)\n for _ in range(nn):\n tree = f.expand_tree_once(tree)\n tree = f.expand_tree_with_strategy(tree, f.expand_node_min_cost)\n strings.append(tree_to_string(tree))\n display_tree(tree)\nstrings",
"_____no_output_____"
]
],
[
[
"However, the _PEG_ parser can only recognize strings of the form $2^n$",
"_____no_output_____"
]
],
[
[
"peg = PEGParser(PEG_SURPRISE, start_symbol='<A>')\nfor s in strings:\n with ExpectError():\n for tree in peg.parse(s):\n display_tree(tree)\n print(s)",
"aa\naaaa\naaaaaaaa\n"
]
],
[
[
"This is not the only problem with _Parsing Expression Grammars_. While *PEGs* are expressive and the *packrat* parser for parsing them is simple and intuitive, *PEGs* suffer from a major deficiency for our purposes. *PEGs* are oriented towards language recognition, and it is not clear how to translate an arbitrary *PEG* to a *CFG*. As we mentioned earlier, a naive re-interpretation of a *PEG* as a *CFG* does not work very well. Further, it is not clear what is the exact relation between the class of languages represented by *PEG* and the class of languages represented by *CFG*. Since our primary focus is *fuzzing* – that is _generation_ of strings – , we next look at _parsers that can accept context-free grammars_.",
"_____no_output_____"
],
[
"The general idea of *CFG* parser is the following: Peek at the input text for the allowed number of characters, and use these, and our parser state to determine which rules can be applied to complete parsing. We next look at a typical *CFG* parsing algorithm, the Earley Parser.",
"_____no_output_____"
],
[
"### The Earley Parser",
"_____no_output_____"
],
[
"The Earley parser is a general parser that is able to parse any arbitrary *CFG*. It was invented by Jay Earley \\cite{Earley1970} for use in computational linguistics. While its computational complexity is $O(n^3)$ for parsing strings with arbitrary grammars, it can parse strings with unambiguous grammars in $O(n^2)$ time, and all *[LR(k)](https://en.wikipedia.org/wiki/LR_parser)* grammars in linear time ($O(n)$ \\cite{Leo1991}). Further improvements – notably handling epsilon rules – were invented by Aycock et al. \\cite{Aycock2002}.",
"_____no_output_____"
],
[
"Note that one restriction of our implementation is that the start symbol can have only one alternative in its alternative expressions. This is not a restriction in practice because any grammar with multiple alternatives for its start symbol can be extended with a new start symbol that has the original start symbol as its only choice. That is, given a grammar as below,\n\n```\ngrammar = {\n '<start>': ['<A>', '<B>'],\n ...\n}\n```\none may rewrite it as below to conform to the *single-alternative* rule.\n```\ngrammar = {\n '<start>': ['<start_>'],\n '<start_>': ['<A>', '<B>'],\n ...\n}\n```",
"_____no_output_____"
],
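[
"For completeness, here is a minimal helper that performs this rewrite automatically (a sketch; the name `single_start_grammar` is ours and not part of the library):\n\n```\ndef single_start_grammar(grammar, start_symbol=START_SYMBOL):\n    \"\"\"Return a copy of `grammar` in which `start_symbol` has a single alternative.\"\"\"\n    if len(grammar[start_symbol]) == 1:\n        return dict(grammar)\n    new_symbol = start_symbol[:-1] + '_>'   # e.g. '<start>' becomes '<start_>'\n    assert new_symbol not in grammar\n    new_grammar = dict(grammar)\n    new_grammar[new_symbol] = grammar[start_symbol]\n    new_grammar[start_symbol] = [new_symbol]\n    return new_grammar\n```",
"_____no_output_____"
],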
[
"Let us implement a class `EarleyParser`, again derived from `Parser` which implements an Earley parser.",
"_____no_output_____"
],
[
"### Excursion: Implementing `EarleyParser`",
"_____no_output_____"
],
[
"We first implement a simpler parser that is a parser for nearly all *CFGs*, but not quite. In particular, our parser does not understand _epsilon rules_ – rules that derive empty string. We show later how the parser can be extended to handle these.",
"_____no_output_____"
],
[
"We use the following grammar in our examples below.",
"_____no_output_____"
]
],
[
[
"SAMPLE_GRAMMAR: Grammar = {\n '<start>': ['<A><B>'],\n '<A>': ['a<B>c', 'a<A>'],\n '<B>': ['b<C>', '<D>'],\n '<C>': ['c'],\n '<D>': ['d']\n}\nC_SAMPLE_GRAMMAR = canonical(SAMPLE_GRAMMAR)",
"_____no_output_____"
],
[
"syntax_diagram(SAMPLE_GRAMMAR)",
"start\n"
]
],
[
[
"The basic idea of Earley parsing is the following:\n\n* Start with the alternative expressions corresponding to the START_SYMBOL. These represent the possible ways to parse the string from a high level. Essentially each expression represents a parsing path. Queue each expression in our set of possible parses of the string. The parsed index of an expression is the part of expression that has already been recognized. In the beginning of parse, the parsed index of all expressions is at the beginning. Further, each letter gets a queue of expressions that recognizes that letter at that point in our parse.\n* Examine our queue of possible parses and check if any of them start with a nonterminal. If it does, then that nonterminal needs to be recognized from the input before the given rule can be parsed. Hence, add the alternative expressions corresponding to the nonterminal to the queue. Do this recursively.\n* At this point, we are ready to advance. Examine the current letter in the input, and select all expressions that have that particular letter at the parsed index. These expressions can now advance one step. Advance these selected expressions by incrementing their parsed index and add them to the queue of expressions in line for recognizing the next input letter.\n* If while doing these things, we find that any of the expressions have finished parsing, we fetch its corresponding nonterminal, and advance all expressions that have that nonterminal at their parsed index.\n* Continue this procedure recursively until all expressions that we have queued for the current letter have been processed. Then start processing the queue for the next letter.\n\nWe explain each step in detail with examples in the coming sections.",
"_____no_output_____"
],
[
"The parser uses dynamic programming to generate a table containing a _forest of possible parses_ at each letter index – the table contains as many columns as there are letters in the input, and each column contains different parsing rules at various stages of the parse.\n\nFor example, given an input `adcd`, the Column 0 would contain the following:\n```\n<start> : ● <A> <B>\n```\nwhich is the starting rule that indicates that we are currently parsing the rule `<start>`, and the parsing state is just before identifying the symbol `<A>`. It would also contain the following which are two alternative paths it could take to complete the parsing.\n\n```\n<A> : ● a <B> c\n<A> : ● a <A>\n```",
"_____no_output_____"
],
[
"Column 1 would contain the following, which represents the possible completion after reading `a`.\n```\n<A> : a ● <B> c\n<A> : a ● <A>\n<B> : ● b <C>\n<B> : ● <D>\n<A> : ● a <B> c\n<A> : ● a <A>\n<D> : ● d\n```",
"_____no_output_____"
],
[
"Column 2 would contain the following after reading `d`\n```\n<D> : d ●\n<B> : <D> ●\n<A> : a <B> ● c\n```",
"_____no_output_____"
],
[
"Similarly, Column 3 would contain the following after reading `c`\n```\n<A> : a <B> c ●\n<start> : <A> ● <B>\n<B> : ● b <C>\n<B> : ● <D>\n<D> : ● d\n```",
"_____no_output_____"
],
[
"Finally, Column 4 would contain the following after reading `d`, with the `●` at the end of the `<start>` rule indicating that the parse was successful.\n```\n<D> : d ●\n<B> : <D> ●\n<start> : <A> <B> ●\n```",
"_____no_output_____"
],
[
"As you can see from above, we are essentially filling a table (a table is also called a **chart**) of entries based on each letter we read, and the grammar rules that can be applied. This chart gives the parser its other name -- Chart parsing.",
"_____no_output_____"
],
[
"#### Columns\n\nWe define the `Column` first. The `Column` is initialized by its own `index` in the input string, and the `letter` at that index. Internally, we also keep track of the states that are added to the column as the parsing progresses.",
"_____no_output_____"
]
],
[
[
"class Column:\n def __init__(self, index, letter):\n self.index, self.letter = index, letter\n self.states, self._unique = [], {}\n\n def __str__(self):\n return \"%s chart[%d]\\n%s\" % (self.letter, self.index, \"\\n\".join(\n str(state) for state in self.states if state.finished()))",
"_____no_output_____"
]
],
[
[
"The `Column` only stores unique `states`. Hence, when a new `state` is `added` to our `Column`, we check whether it is already known.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def add(self, state):\n if state in self._unique:\n return self._unique[state]\n self._unique[state] = state\n self.states.append(state)\n state.e_col = self\n return self._unique[state]",
"_____no_output_____"
]
],
[
[
"#### Items\n\nAn item represents a _parse in progress for a specific rule._ Hence the item contains the name of the nonterminal, and the corresponding alternative expression (`expr`) which together form the rule, and the current position of parsing in this expression -- `dot`.\n\n\n**Note.** If you are familiar with [LR parsing](https://en.wikipedia.org/wiki/LR_parser), you will notice that an item is simply an `LR0` item.",
"_____no_output_____"
]
],
[
[
"class Item:\n def __init__(self, name, expr, dot):\n self.name, self.expr, self.dot = name, expr, dot",
"_____no_output_____"
]
],
[
[
"We also provide a few convenience methods. The method `finished()` checks if the `dot` has moved beyond the last element in `expr`. The method `advance()` produces a new `Item` with the `dot` advanced one token, and represents an advance of the parsing. The method `at_dot()` returns the current symbol being parsed.",
"_____no_output_____"
]
],
[
[
"class Item(Item):\n def finished(self):\n return self.dot >= len(self.expr)\n\n def advance(self):\n return Item(self.name, self.expr, self.dot + 1)\n\n def at_dot(self):\n return self.expr[self.dot] if self.dot < len(self.expr) else None",
"_____no_output_____"
]
],
[
[
"Here is how an item could be used. We first define our item",
"_____no_output_____"
]
],
[
[
"item_name = '<B>'\nitem_expr = C_SAMPLE_GRAMMAR[item_name][1]\nan_item = Item(item_name, tuple(item_expr), 0)",
"_____no_output_____"
]
],
[
[
"To determine where the status of parsing, we use `at_dot()`",
"_____no_output_____"
]
],
[
[
"an_item.at_dot()",
"_____no_output_____"
]
],
[
[
"That is, the next symbol to be parsed is `<D>`",
"_____no_output_____"
],
[
"If we advance the item, we get another item that represents the finished parsing rule `<B>`.",
"_____no_output_____"
]
],
[
[
"another_item = an_item.advance()",
"_____no_output_____"
],
[
"another_item.finished()",
"_____no_output_____"
]
],
[
[
"#### States\n\nFor `Earley` parsing, the state of the parsing is simply one `Item` along with some meta information such as the starting `s_col` and ending column `e_col` for each state. Hence we inherit from `Item` to create a `State`.\nSince we are interested in comparing states, we define `hash()` and `eq()` with the corresponding methods.",
"_____no_output_____"
]
],
[
[
"class State(Item):\n def __init__(self, name, expr, dot, s_col, e_col=None):\n super().__init__(name, expr, dot)\n self.s_col, self.e_col = s_col, e_col\n\n def __str__(self):\n def idx(var):\n return var.index if var else -1\n\n return self.name + ':= ' + ' '.join([\n str(p)\n for p in [*self.expr[:self.dot], '|', *self.expr[self.dot:]]\n ]) + \"(%d,%d)\" % (idx(self.s_col), idx(self.e_col))\n\n def copy(self):\n return State(self.name, self.expr, self.dot, self.s_col, self.e_col)\n\n def _t(self):\n return (self.name, self.expr, self.dot, self.s_col.index)\n\n def __hash__(self):\n return hash(self._t())\n\n def __eq__(self, other):\n return self._t() == other._t()\n\n def advance(self):\n return State(self.name, self.expr, self.dot + 1, self.s_col)",
"_____no_output_____"
]
],
[
[
"The usage of `State` is similar to that of `Item`. The only difference is that it is used along with the `Column` to track the parsing state. For example, we initialize the first column as follows:",
"_____no_output_____"
]
],
[
[
"col_0 = Column(0, None)\nitem_tuple = tuple(*C_SAMPLE_GRAMMAR[START_SYMBOL])\nstart_state = State(START_SYMBOL, item_tuple, 0, col_0)\ncol_0.add(start_state)\nstart_state.at_dot()",
"_____no_output_____"
]
],
[
[
"The first column is then updated by using `add()` method of `Column`",
"_____no_output_____"
]
],
[
[
"sym = start_state.at_dot()\nfor alt in C_SAMPLE_GRAMMAR[sym]:\n col_0.add(State(sym, tuple(alt), 0, col_0))\nfor s in col_0.states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
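[
[
"# A quick sketch: states compare by (name, expression, dot, starting column),\n# so adding an equivalent state to a column a second time has no effect.\nprint(len(col_0.states))\ncol_0.add(State(START_SYMBOL, item_tuple, 0, col_0))\nprint(len(col_0.states))",
"_____no_output_____"
]
],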
[
[
"#### The Parsing Algorithm",
"_____no_output_____"
],
[
"The _Earley_ algorithm starts by initializing the chart with columns (as many as there are letters in the input). We also seed the first column with a state representing the expression corresponding to the start symbol. In our case, the state corresponds to the start symbol with the `dot` at `0` is represented as below. The `●` symbol represents the parsing status. In this case, we have not parsed anything.\n```\n<start>: ● <A> <B>\n```\nWe pass this partial chart to a method for filling the rest of the parse chart.",
"_____no_output_____"
],
[
"Before starting to parse, we seed the chart with the state representing the ongoing parse of the start symbol.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(Parser):\n \"\"\"Earley Parser. This parser can parse any context-free grammar.\"\"\"\n\n def __init__(self, grammar: Grammar, **kwargs) -> None:\n super().__init__(grammar, **kwargs)\n self.chart: List = [] # for type checking\n\n def chart_parse(self, words, start):\n alt = tuple(*self.cgrammar[start])\n chart = [Column(i, tok) for i, tok in enumerate([None, *words])]\n chart[0].add(State(start, alt, 0, chart[0]))\n return self.fill_chart(chart)",
"_____no_output_____"
]
],
[
[
"The main parsing loop in `fill_chart()` has three fundamental operations. `predict()`, `scan()`, and `complete()`. We discuss `predict` next.",
"_____no_output_____"
],
[
"#### Predicting States\n\nWe have already seeded `chart[0]` with a state `[<A>,<B>]` with `dot` at `0`. Next, given that `<A>` is a nonterminal, we `predict` the possible parse continuations of this state. That is, it could be either `a <B> c` or `A <A>`.\n\nThe general idea of `predict()` is as follows: Say you have a state with name `<A>` from the above grammar, and expression containing `[a,<B>,c]`. Imagine that you have seen `a` already, which means that the `dot` will be on `<B>`. Below, is a representation of our parse status. The left hand side of ● represents the portion already parsed (`a`), and the right hand side represents the portion yet to be parsed (`<B> c`).\n\n```\n<A>: a ● <B> c\n```",
"_____no_output_____"
],
[
"To recognize `<B>`, we look at the definition of `<B>`, which has different alternative expressions. The `predict()` step adds each of these alternatives to the set of states, with `dot` at `0`.\n\n```\n<A>: a ● <B> c\n<B>: ● b c\n<B>: ● <D>\n```\n\nIn essence, the `predict()` method, when called with the current nonterminal, fetches the alternative expressions corresponding to this nonterminal, and adds these as predicted _child_ states to the _current_ column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def predict(self, col, sym, state):\n for alt in self.cgrammar[sym]:\n col.add(State(sym, tuple(alt), 0, col))",
"_____no_output_____"
]
],
[
[
"To see how to use `predict`, we first construct the 0th column as before, and we assign the constructed column to an instance of the EarleyParser.",
"_____no_output_____"
]
],
[
[
"col_0 = Column(0, None)\ncol_0.add(start_state)\nep = EarleyParser(SAMPLE_GRAMMAR)\nep.chart = [col_0]",
"_____no_output_____"
]
],
[
[
"It should contain a single state -- `<start> at 0`",
"_____no_output_____"
]
],
[
[
"for s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n"
]
],
[
[
"We apply predict to fill out the 0th column, and the column should contain the possible parse paths.",
"_____no_output_____"
]
],
[
[
"ep.predict(col_0, '<A>', s)\nfor s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
[
[
"#### Scanning Tokens\n\nWhat if rather than a nonterminal, the state contained a terminal symbol such as a letter? In that case, we are ready to make some progress. For example, consider the second state:\n```\n<B>: ● b c\n```\nWe `scan` the next column's letter. Say the next token is `b`.\nIf the letter matches what we have, then create a new state by advancing the current state by one letter.\n\n```\n<B>: b ● c\n```\nThis new state is added to the next column (i.e the column that has the matched letter).",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def scan(self, col, state, letter):\n if letter == col.letter:\n col.add(state.advance())",
"_____no_output_____"
]
],
[
[
"As before, we construct the partial parse first, this time adding a new column so that we can observe the effects of `scan()`",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncol_1 = Column(1, 'a')\nep.chart = [col_0, col_1]",
"_____no_output_____"
],
[
"new_state = ep.chart[0].states[1]\nprint(new_state)",
"<A>:= | a <B> c(0,0)\n"
],
[
"ep.scan(col_1, new_state, 'a')\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n"
]
],
[
[
"#### Completing Processing\n\nWhen we advance, what if we actually `complete()` the processing of the current rule? If so, we want to update not just this state, but also all the _parent_ states from which this state was derived.\nFor example, say we have states as below.\n```\n<A>: a ● <B> c\n<B>: b c ● \n```\nThe state `<B>: b c ●` is now complete. So, we need to advance `<A>: a ● <B> c` one step forward.\n\nHow do we determine the parent states? Note from `predict` that we added the predicted child states to the _same_ column as that of the inspected state. Hence, we look at the starting column of the current state, with the same symbol `at_dot` as that of the name of the completed state.\n\nFor each such parent found, we advance that parent (because we have just finished parsing that non terminal for their `at_dot`) and add the new states to the current column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def complete(self, col, state):\n return self.earley_complete(col, state)\n\n def earley_complete(self, col, state):\n parent_states = [\n st for st in state.s_col.states if st.at_dot() == state.name\n ]\n for st in parent_states:\n col.add(st.advance())",
"_____no_output_____"
]
],
[
[
"Here is an example of completed processing. First we complete the Column 0",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncol_1 = Column(1, 'a')\ncol_2 = Column(2, 'd')\nep.chart = [col_0, col_1, col_2]\nep.predict(col_0, '<A>', s)\nfor s in ep.chart[0].states:\n print(s)",
"<start>:= | <A> <B>(0,0)\n<A>:= | a <B> c(0,0)\n<A>:= | a <A>(0,0)\n"
]
],
[
[
"Then we use `scan()` to populate Column 1",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[0].states:\n if state.at_dot() not in SAMPLE_GRAMMAR:\n ep.scan(col_1, state, 'a')\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n<A>:= a | <A>(0,1)\n"
],
[
"for state in ep.chart[1].states:\n if state.at_dot() in SAMPLE_GRAMMAR:\n ep.predict(col_1, state.at_dot(), state)\nfor s in ep.chart[1].states:\n print(s)",
"<A>:= a | <B> c(0,1)\n<A>:= a | <A>(0,1)\n<B>:= | b <C>(1,1)\n<B>:= | <D>(1,1)\n<A>:= | a <B> c(1,1)\n<A>:= | a <A>(1,1)\n<D>:= | d(1,1)\n"
]
],
[
[
"Then we use `scan()` again to populate Column 2",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[1].states:\n if state.at_dot() not in SAMPLE_GRAMMAR:\n ep.scan(col_2, state, state.at_dot())\n\nfor s in ep.chart[2].states:\n print(s)",
"<D>:= d |(1,2)\n"
]
],
[
[
"Now, we can use `complete()`:",
"_____no_output_____"
]
],
[
[
"for state in ep.chart[2].states:\n if state.finished():\n ep.complete(col_2, state)\n\nfor s in ep.chart[2].states:\n print(s)",
"<D>:= d |(1,2)\n<B>:= <D> |(1,2)\n<A>:= a <B> | c(0,2)\n"
]
],
[
[
"#### Filling the Chart\n\nThe main driving loop in `fill_chart()` essentially calls these operations in order. We loop over each column in order.\n* For each column, fetch one state in the column at a time, and check if the state is `finished`. \n * If it is, then we `complete()` all the parent states depending on this state. \n* If the state was not finished, we check to see if the state's current symbol `at_dot` is a nonterminal. \n * If it is a nonterminal, we `predict()` possible continuations, and update the current column with these states. \n * If it was not, we `scan()` the next column and advance the current state if it matches the next letter.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def fill_chart(self, chart):\n for i, col in enumerate(chart):\n for state in col.states:\n if state.finished():\n self.complete(col, state)\n else:\n sym = state.at_dot()\n if sym in self.cgrammar:\n self.predict(col, sym, state)\n else:\n if i + 1 >= len(chart):\n continue\n self.scan(chart[i + 1], state, sym)\n if self.log:\n print(col, '\\n')\n return chart",
"_____no_output_____"
]
],
[
[
"We now can recognize a given string as belonging to a language represented by a grammar.",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR, log=True)\ncolumns = ep.chart_parse('adcd', START_SYMBOL)",
"None chart[0]\n \n\na chart[1]\n \n\nd chart[2]\n<D>:= d |(1,2)\n<B>:= <D> |(1,2) \n\nc chart[3]\n<A>:= a <B> c |(0,3) \n\nd chart[4]\n<D>:= d |(3,4)\n<B>:= <D> |(3,4)\n<start>:= <A> <B> |(0,4) \n\n"
]
],
[
[
"The chart we printed above only shows completed entries at each index. The parenthesized expression indicates the column just before the first character was recognized, and the ending column.\n\nNotice how the `<start>` nonterminal shows fully parsed status.",
"_____no_output_____"
]
],
[
[
"last_col = columns[-1]\nfor state in last_col.states:\n if state.name == '<start>':\n print(state)",
"<start>:= <A> <B> |(0,4)\n"
]
],
[
[
"Since `chart_parse()` returns the completed table, we now need to extract the derivation trees.",
"_____no_output_____"
],
[
"#### The Parse Method\n\nFor determining how far we have managed to parse, we simply look for the last index from `chart_parse()` where the `start_symbol` was found.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse_prefix(self, text):\n self.table = self.chart_parse(text, self.start_symbol())\n for col in reversed(self.table):\n states = [\n st for st in col.states if st.name == self.start_symbol()\n ]\n if states:\n return col.index, states\n return -1, []",
"_____no_output_____"
]
],
[
[
"Here is the `parse_prefix()` in action.",
"_____no_output_____"
]
],
[
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\ncursor, last_states = ep.parse_prefix('adcd')\nprint(cursor, [str(s) for s in last_states])",
"4 ['<start>:= <A> <B> |(0,4)']\n"
]
],
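[
[
"# A small sketch: for an input that cannot be parsed completely,\n# parse_prefix() reports how far the parse got.\ncursor, states = ep.parse_prefix('adcp')\nprint(cursor, [str(s) for s in states])",
"_____no_output_____"
]
],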
[
[
"The following is adapted from the excellent reference on Earley parsing by [Loup Vaillant](http://loup-vaillant.fr/tutorials/earley-parsing/).\n",
"_____no_output_____"
],
[
"Our `parse()` method is as follows. It depends on two methods `parse_forest()` and `extract_trees()` that will be defined next.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse(self, text):\n cursor, states = self.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n\n forest = self.parse_forest(self.table, start)\n for tree in self.extract_trees(forest):\n yield self.prune_tree(tree)",
"_____no_output_____"
]
],
[
[
"#### Parsing Paths\n\nThe `parse_paths()` method tries to unify the given expression in `named_expr` with the parsed string. For that, it extracts the last symbol in `named_expr` and checks if it is a terminal symbol. If it is, then it checks the chart at `til` to see if the letter corresponding to the position matches the terminal symbol. If it does, extend our start index by the length of the symbol.\n\nIf the symbol was a nonterminal symbol, then we retrieve the parsed states at the current end column index (`til`) that correspond to the nonterminal symbol, and collect the start index. These are the end column indexes for the remaining expression.\n\nGiven our list of start indexes, we obtain the parse paths from the remaining expression. If we can obtain any, then we return the parse paths. If not, we return an empty list.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def parse_paths(self, named_expr, chart, frm, til):\n def paths(state, start, k, e):\n if not e:\n return [[(state, k)]] if start == frm else []\n else:\n return [[(state, k)] + r\n for r in self.parse_paths(e, chart, frm, start)]\n\n *expr, var = named_expr\n starts = None\n if var not in self.cgrammar:\n starts = ([(var, til - len(var),\n 't')] if til > 0 and chart[til].letter == var else [])\n else:\n starts = [(s, s.s_col.index, 'n') for s in chart[til].states\n if s.finished() and s.name == var]\n\n return [p for s, start, k in starts for p in paths(s, start, k, expr)]",
"_____no_output_____"
]
],
[
[
"Here is the `parse_paths()` in action",
"_____no_output_____"
]
],
[
[
"print(SAMPLE_GRAMMAR['<start>'])\nep = EarleyParser(SAMPLE_GRAMMAR)\ncompleted_start = last_states[0]\npaths = ep.parse_paths(completed_start.expr, columns, 0, 4)\nfor path in paths:\n print([list(str(s_) for s_ in s) for s in path])",
"['<A><B>']\n[['<B>:= <D> |(3,4)', 'n'], ['<A>:= a <B> c |(0,3)', 'n']]\n"
]
],
[
[
"That is, the parse path for `<start>` given the input `adcd` included recognizing the expression `<A><B>`. This was recognized by the two states: `<A>` from input(0) to input(2) which further involved recognizing the rule `a<B>c`, and the next state `<B>` from input(3) which involved recognizing the rule `<D>`.",
"_____no_output_____"
],
[
"#### Parsing Forests\n\nThe `parse_forest()` method takes the state which represents the completed parse, and determines the possible ways that its expressions corresponded to the parsed expression. For example, say we are parsing `1+2+3`, and the state has `[<expr>,+,<expr>]` in `expr`. It could have been parsed as either `[{<expr>:1+2},+,{<expr>:3}]` or `[{<expr>:1},+,{<expr>:2+3}]`.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def forest(self, s, kind, chart):\n return self.parse_forest(chart, s) if kind == 'n' else (s, [])\n\n def parse_forest(self, chart, state):\n pathexprs = self.parse_paths(state.expr, chart, state.s_col.index,\n state.e_col.index) if state.expr else []\n return state.name, [[(v, k, chart) for v, k in reversed(pathexpr)]\n for pathexpr in pathexprs]",
"_____no_output_____"
],
[
"ep = EarleyParser(SAMPLE_GRAMMAR)\nresult = ep.parse_forest(columns, last_states[0])\nresult",
"_____no_output_____"
]
],
[
[
"#### Extracting Trees",
"_____no_output_____"
],
[
"What we have from `parse_forest()` is a forest of trees. We need to extract a single tree from that forest. That is accomplished as follows.",
"_____no_output_____"
],
[
"(For now, we return the first available derivation tree. To do that, we need to extract the parse forest from the state corresponding to `start`.)",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def extract_a_tree(self, forest_node):\n name, paths = forest_node\n if not paths:\n return (name, [])\n return (name, [self.extract_a_tree(self.forest(*p)) for p in paths[0]])\n\n def extract_trees(self, forest):\n yield self.extract_a_tree(forest)",
"_____no_output_____"
]
],
[
[
"We now verify that our parser can parse a given expression.",
"_____no_output_____"
]
],
[
[
"A3_GRAMMAR: Grammar = {\n \"<start>\": [\"<bexpr>\"],\n \"<bexpr>\": [\n \"<aexpr><gt><aexpr>\", \"<aexpr><lt><aexpr>\", \"<aexpr>=<aexpr>\",\n \"<bexpr>=<bexpr>\", \"<bexpr>&<bexpr>\", \"<bexpr>|<bexpr>\", \"(<bexrp>)\"\n ],\n \"<aexpr>\":\n [\"<aexpr>+<aexpr>\", \"<aexpr>-<aexpr>\", \"(<aexpr>)\", \"<integer>\"],\n \"<integer>\": [\"<digit><integer>\", \"<digit>\"],\n \"<digit>\": [\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"],\n \"<lt>\": ['<'],\n \"<gt>\": ['>']\n}",
"_____no_output_____"
],
[
"syntax_diagram(A3_GRAMMAR)",
"start\n"
],
[
"mystring = '(1+24)=33'\nparser = EarleyParser(A3_GRAMMAR)\nfor tree in parser.parse(mystring):\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"We now have a complete parser that can parse almost arbitrary *CFG*. There remains a small corner to fix -- the case of epsilon rules as we will see later.",
"_____no_output_____"
],
[
"#### Ambiguous Parsing",
"_____no_output_____"
],
[
"Ambiguous grammars are grammars that can produce multiple derivation trees for some given string. For example, the `A3_GRAMMAR` can parse `1+2+3` in two different ways – `[1+2]+3` and `1+[2+3]`.\n\nExtracting a single tree might be reasonable for unambiguous parses. However, what if the given grammar produces ambiguity when given a string? We need to extract all derivation trees in that case. We enhance our `extract_trees()` method to extract multiple derivation trees.",
"_____no_output_____"
]
],
[
[
"import itertools as I",
"_____no_output_____"
],
[
"class EarleyParser(EarleyParser):\n def extract_trees(self, forest_node):\n name, paths = forest_node\n if not paths:\n yield (name, [])\n\n for path in paths:\n ptrees = [self.extract_trees(self.forest(*p)) for p in path]\n for p in I.product(*ptrees):\n yield (name, p)",
"_____no_output_____"
]
],
[
[
"As before, we verify that everything works.",
"_____no_output_____"
]
],
[
[
"mystring = '1+2'\nparser = EarleyParser(A1_GRAMMAR)\nfor tree in parser.parse(mystring):\n assert mystring == tree_to_string(tree)\n display_tree(tree)",
"_____no_output_____"
]
],
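[
[
"# A further check (a sketch): A1_GRAMMAR is ambiguous, so a string such as\n# '1+2+3' should now yield more than one derivation tree.\nmystring = '1+2+3'\ntrees = list(parser.parse(mystring))\nfor tree in trees:\n    assert tree_to_string(tree) == mystring\nlen(trees)",
"_____no_output_____"
]
],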
[
[
"One can also use a `GrammarFuzzer` to verify that everything works.",
"_____no_output_____"
]
],
[
[
"gf = GrammarFuzzer(A1_GRAMMAR)\nfor i in range(5):\n s = gf.fuzz()\n print(i, s)\n for tree in parser.parse(s):\n assert tree_to_string(tree) == s",
"0 045+3+2-9+7-7-5-1-449\n1 0+9+5-2+1-8+4-3+7+2\n2 76413\n3 9339\n4 62\n"
]
],
[
[
"#### The Aycock Epsilon Fix\n\nWhile parsing, one often requires to know whether a given nonterminal can derive an empty string. For example, in the following grammar A can derive an empty string, while B can't. The nonterminals that can derive an empty string are called _nullable_ nonterminals. For example, in the below grammar `E_GRAMMAR_1`, `<A>` is _nullable_, and since `<A>` is one of the alternatives of `<start>`, `<start>` is also _nullable_. But `<B>` is not _nullable_.",
"_____no_output_____"
]
],
[
[
"E_GRAMMAR_1: Grammar = {\n '<start>': ['<A>', '<B>'],\n '<A>': ['a', ''],\n '<B>': ['b']\n}",
"_____no_output_____"
]
],
[
[
"One of the problems with the original Earley implementation is that it does not handle rules that can derive empty strings very well. For example, the given grammar should match `a`",
"_____no_output_____"
]
],
[
[
"EPSILON = ''\nE_GRAMMAR: Grammar = {\n '<start>': ['<S>'],\n '<S>': ['<A><A><A><A>'],\n '<A>': ['a', '<E>'],\n '<E>': [EPSILON]\n}",
"_____no_output_____"
],
[
"syntax_diagram(E_GRAMMAR)",
"start\n"
],
[
"mystring = 'a'\nparser = EarleyParser(E_GRAMMAR)\nwith ExpectError():\n trees = parser.parse(mystring)",
"_____no_output_____"
]
],
[
[
"Aycock et al.\\cite{Aycock2002} suggests a simple fix. Their idea is to pre-compute the `nullable` set and use it to advance the `nullable` states. However, before we do that, we need to compute the `nullable` set. The `nullable` set consists of all nonterminals that can derive an empty string.",
"_____no_output_____"
],
[
"Computing the `nullable` set requires expanding each production rule in the grammar iteratively and inspecting whether a given rule can derive the empty string. Each iteration needs to take into account new terminals that have been found to be `nullable`. The procedure stops when we obtain a stable result. This procedure can be abstracted into a more general method `fixpoint`.",
"_____no_output_____"
],
[
"##### Fixpoint\n\nA `fixpoint` of a function is an element in the function's domain such that it is mapped to itself. For example, 1 is a `fixpoint` of square root because `squareroot(1) == 1`.\n\n(We use `str` rather than `hash` to check for equality in `fixpoint` because the data structure `set`, which we would like to use as an argument has a good string representation but is not hashable).",
"_____no_output_____"
]
],
[
[
"def fixpoint(f):\n def helper(arg):\n while True:\n sarg = str(arg)\n arg_ = f(arg)\n if str(arg_) == sarg:\n return arg\n arg = arg_\n\n return helper",
"_____no_output_____"
]
],
[
[
"Remember `my_sqrt()` from [the first chapter](Intro_Testing.ipynb)? We can define `my_sqrt()` using fixpoint.",
"_____no_output_____"
]
],
[
[
"def my_sqrt(x):\n @fixpoint\n def _my_sqrt(approx):\n return (approx + x / approx) / 2\n\n return _my_sqrt(1)",
"_____no_output_____"
],
[
"my_sqrt(2)",
"_____no_output_____"
]
],
[
[
"##### Nullable\n\nSimilarly, we can define `nullable` using `fixpoint`. We essentially provide the definition of a single intermediate step. That is, assuming that `nullables` contain the current `nullable` nonterminals, we iterate over the grammar looking for productions which are `nullable` -- that is, productions where the entire sequence can yield an empty string on some expansion.",
"_____no_output_____"
],
[
"We need to iterate over the different alternative expressions and their corresponding nonterminals. Hence we define a `rules()` method converts our dictionary representation to this pair format.",
"_____no_output_____"
]
],
[
[
"def rules(grammar):\n return [(key, choice)\n for key, choices in grammar.items()\n for choice in choices]",
"_____no_output_____"
]
],
[
[
"The `terminals()` method extracts all terminal symbols from a `canonical` grammar representation.",
"_____no_output_____"
]
],
[
[
"def terminals(grammar):\n return set(token\n for key, choice in rules(grammar)\n for token in choice if token not in grammar)",
"_____no_output_____"
],
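[
"# A quick check (a sketch): the terminal symbols of our sample grammar.\nterminals(C_SAMPLE_GRAMMAR)",
"_____no_output_____"
],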
[
"def nullable_expr(expr, nullables):\n return all(token in nullables for token in expr)",
"_____no_output_____"
],
[
"def nullable(grammar):\n productions = rules(grammar)\n\n @fixpoint\n def nullable_(nullables):\n for A, expr in productions:\n if nullable_expr(expr, nullables):\n nullables |= {A}\n return (nullables)\n\n return nullable_({EPSILON})",
"_____no_output_____"
],
[
"for key, grammar in {\n 'E_GRAMMAR': E_GRAMMAR,\n 'E_GRAMMAR_1': E_GRAMMAR_1\n}.items():\n print(key, nullable(canonical(grammar)))",
"E_GRAMMAR {'', '<A>', '<S>', '<E>', '<start>'}\nE_GRAMMAR_1 {'', '<start>', '<A>'}\n"
]
],
[
[
"So, once we have the `nullable` set, all that we need to do is, after we have called `predict` on a state corresponding to a nonterminal, check if it is `nullable` and if it is, advance and add the state to the current column.",
"_____no_output_____"
]
],
[
[
"class EarleyParser(EarleyParser):\n def __init__(self, grammar, **kwargs):\n super().__init__(grammar, **kwargs)\n self.epsilon = nullable(self.cgrammar)\n\n def predict(self, col, sym, state):\n for alt in self.cgrammar[sym]:\n col.add(State(sym, tuple(alt), 0, col))\n if sym in self.epsilon:\n col.add(state.advance())",
"_____no_output_____"
],
[
"mystring = 'a'\nparser = EarleyParser(E_GRAMMAR)\nfor tree in parser.parse(mystring):\n display_tree(tree)",
"_____no_output_____"
]
],
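[
[
"# An additional check (a sketch): with the epsilon fix, E_GRAMMAR should\n# accept any string of zero to four 'a's, possibly with multiple derivation trees.\nfor s in ['', 'a', 'aa', 'aaa', 'aaaa']:\n    trees = list(EarleyParser(E_GRAMMAR).parse(s))\n    assert all(tree_to_string(t) == s for t in trees)\n    print(repr(s), len(trees))",
"_____no_output_____"
]
],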
[
[
"To ensure that our parser does parse all kinds of grammars, let us try two more test cases.",
"_____no_output_____"
]
],
[
[
"DIRECTLY_SELF_REFERRING: Grammar = {\n '<start>': ['<query>'],\n '<query>': ['select <expr> from a'],\n \"<expr>\": [\"<expr>\", \"a\"],\n}\nINDIRECTLY_SELF_REFERRING: Grammar = {\n '<start>': ['<query>'],\n '<query>': ['select <expr> from a'],\n \"<expr>\": [\"<aexpr>\", \"a\"],\n \"<aexpr>\": [\"<expr>\"],\n}",
"_____no_output_____"
],
[
"mystring = 'select a from a'\nfor grammar in [DIRECTLY_SELF_REFERRING, INDIRECTLY_SELF_REFERRING]:\n forest = EarleyParser(grammar).parse(mystring)\n print('recognized', mystring)\n try:\n for tree in forest:\n print(tree_to_string(tree))\n except RecursionError as e:\n print(\"Recursion error\", e)",
"recognized select a from a\nRecursion error maximum recursion depth exceeded\nrecognized select a from a\nRecursion error maximum recursion depth exceeded\n"
]
],
[
[
"Why do we get recursion error here? The reason is that, our implementation of `extract_trees()` is eager. That is, it attempts to extract _all_ inner parse trees before it can construct the outer parse tree. When there is a self reference, this results in recursion. Here is a simple extractor that avoids this problem. The idea here is that we randomly and lazily choose a node to expand, which avoids the infinite recursion.",
"_____no_output_____"
],
[
"#### Tree Extractor",
"_____no_output_____"
],
[
"As you saw above, one of the problems with attempting to extract all trees is that the parse forest can consist of an infinite number of trees. So, here, we solve that problem by extracting one tree at a time.",
"_____no_output_____"
]
],
[
[
"class SimpleExtractor:\n def __init__(self, parser, text):\n self.parser = parser\n cursor, states = parser.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(cursor))\n self.my_forest = parser.parse_forest(parser.table, start)\n\n def extract_a_node(self, forest_node):\n name, paths = forest_node\n if not paths:\n return ((name, 0, 1), []), (name, [])\n cur_path, i, length = self.choose_path(paths)\n child_nodes = []\n pos_nodes = []\n for s, kind, chart in cur_path:\n f = self.parser.forest(s, kind, chart)\n postree, ntree = self.extract_a_node(f)\n child_nodes.append(ntree)\n pos_nodes.append(postree)\n\n return ((name, i, length), pos_nodes), (name, child_nodes)\n\n def choose_path(self, arr):\n length = len(arr)\n i = random.randrange(length)\n return arr[i], i, length\n\n def extract_a_tree(self):\n pos_tree, parse_tree = self.extract_a_node(self.my_forest)\n return self.parser.prune_tree(parse_tree)",
"_____no_output_____"
]
],
[
[
"Using it is as folows:",
"_____no_output_____"
]
],
[
[
"de = SimpleExtractor(EarleyParser(DIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"for i in range(5):\n tree = de.extract_a_tree()\n print(tree_to_string(tree))",
"select a from a\nselect a from a\nselect a from a\nselect a from a\nselect a from a\n"
]
],
[
[
"On the indirect reference:",
"_____no_output_____"
]
],
[
[
"ie = SimpleExtractor(EarleyParser(INDIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"for i in range(5):\n tree = ie.extract_a_tree()\n print(tree_to_string(tree))",
"select a from a\nselect a from a\nselect a from a\nselect a from a\nselect a from a\n"
]
],
[
[
"Note that the `SimpleExtractor` gives no guarantee of the uniqueness of the returned trees. This can however be fixed by keeping track of the particular nodes that were expanded from `pos_tree` variable, and hence, avoiding exploration of the same paths.\n\nFor implementing this, we extract the random stream passing into the `SimpleExtractor`, and use it to control which nodes are explored. Different exploration paths can then form a tree of nodes.",
"_____no_output_____"
],
[
"We start with the node definition for a single choice. The `self._chosen` is the current choice made, `self.next` holds the next choice done using `self._chosen`. The `self.total` holds the total number of choices that one can have in this node.",
"_____no_output_____"
]
],
[
[
"class ChoiceNode:\n def __init__(self, parent, total):\n self._p, self._chosen = parent, 0\n self._total, self.next = total, None\n\n def chosen(self):\n assert not self.finished()\n return self._chosen\n\n def __str__(self):\n return '%d(%s/%s %s)' % (self._i, str(self._chosen),\n str(self._total), str(self.next))\n\n def __repr__(self):\n return repr((self._i, self._chosen, self._total))\n\n def increment(self):\n # as soon as we increment, next becomes invalid\n self.next = None\n self._chosen += 1\n if self.finished():\n if self._p is None:\n return None\n return self._p.increment()\n return self\n\n def finished(self):\n return self._chosen >= self._total",
"_____no_output_____"
]
],
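[
[
"# A small sketch of how alternatives are enumerated: a ChoiceNode starts at its\n# first alternative, and increment() moves on to the next one (bubbling up to\n# the parent once all alternatives are exhausted).\nroot = ChoiceNode(None, 1)\nchild = ChoiceNode(root, 2)\nprint(child.chosen())\nchild.increment()\nprint(child.chosen())",
"_____no_output_____"
]
],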
[
[
"Now we come to the enhanced `EnhancedExtractor()`.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(SimpleExtractor):\n def __init__(self, parser, text):\n super().__init__(parser, text)\n self.choices = ChoiceNode(None, 1)",
"_____no_output_____"
]
],
[
[
"First we define `choose_path()` that given an array and a choice node, returns the element in array corresponding to the next choice node if it exists, or produces a new choice nodes, and returns that element.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def choose_path(self, arr, choices):\n arr_len = len(arr)\n if choices.next is not None:\n if choices.next.finished():\n return None, None, None, choices.next\n else:\n choices.next = ChoiceNode(choices, arr_len)\n next_choice = choices.next.chosen()\n choices = choices.next\n return arr[next_choice], next_choice, arr_len, choices",
"_____no_output_____"
]
],
[
[
"We define `extract_a_node()` here. While extracting, we have a choice. Should we allow infinite forests, or should we have a finite number of trees with no direct recursion? A direct recursion is when there exists a parent node with the same nonterminal that parsed the same span. We choose here not to extract such trees. They can be added back after parsing.\n\nThis is a recursive procedure that inspects a node, extracts the path required to complete that node. A single path (corresponding to a nonterminal) may again be composed of a sequence of smaller paths. Such paths are again extracted using another call to `extract_a_node()` recursively.\n\nWhat happens when we hit on one of the node recursions we want to avoid? In that case, we return the current choice node, which bubbles up to `extract_a_tree()`. That procedure increments the last choice, which in turn increments up the parents until we reach a choice node that still has options to explore.\n\nWhat if we hit the end of choices for a particular choice node(i.e, we have exhausted paths that can be taken from a node)? In this case also, we return the current choice node, which bubbles up to `extract_a_tree()`.\nThat procedure increments the last choice, which bubbles up to the next choice that has some unexplored paths.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def extract_a_node(self, forest_node, seen, choices):\n name, paths = forest_node\n if not paths:\n return (name, []), choices\n\n cur_path, _i, _l, new_choices = self.choose_path(paths, choices)\n if cur_path is None:\n return None, new_choices\n child_nodes = []\n for s, kind, chart in cur_path:\n if kind == 't':\n child_nodes.append((s, []))\n continue\n nid = (s.name, s.s_col.index, s.e_col.index)\n if nid in seen:\n return None, new_choices\n f = self.parser.forest(s, kind, chart)\n ntree, newer_choices = self.extract_a_node(f, seen | {nid}, new_choices)\n if ntree is None:\n return None, newer_choices\n child_nodes.append(ntree)\n new_choices = newer_choices\n return (name, child_nodes), new_choices",
"_____no_output_____"
]
],
[
[
"The `extract_a_tree()` is a depth first extractor of a single tree. It tries to extract a tree, and if the extraction returns `None`, it means that a particular choice was exhausted, or we hit on a recursion. In that case, we increment the choice, and explore a new path.",
"_____no_output_____"
]
],
[
[
"class EnhancedExtractor(EnhancedExtractor):\n def extract_a_tree(self):\n while not self.choices.finished():\n parse_tree, choices = self.extract_a_node(self.my_forest, set(), self.choices)\n choices.increment()\n if parse_tree is not None:\n return self.parser.prune_tree(parse_tree)\n return None",
"_____no_output_____"
]
],
[
[
"Note that the `EnhancedExtractor` only extracts nodes that are not directly recursive. That is, if it finds a node with a nonterminal that covers the same span as that of a parent node with the same nonterminal, it skips the node.",
"_____no_output_____"
]
],
[
[
"ee = EnhancedExtractor(EarleyParser(INDIRECTLY_SELF_REFERRING), mystring)",
"_____no_output_____"
],
[
"i = 0\nwhile True:\n i += 1\n t = ee.extract_a_tree()\n if t is None: break\n print(i, t)\n s = tree_to_string(t)\n assert s == mystring",
"1 ('<start>', [('<query>', [('select ', []), ('<expr>', [('a', [])]), (' from a', [])])])\n"
],
[
"istring = '1+2+3+4'\nee = EnhancedExtractor(EarleyParser(A1_GRAMMAR), istring)",
"_____no_output_____"
],
[
"i = 0\nwhile True:\n i += 1\n t = ee.extract_a_tree()\n if t is None: break\n print(i, t)\n s = tree_to_string(t)\n assert s == istring",
"1 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('2', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])\n2 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])\n3 ('<start>', [('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('2', [])])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('3', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])\n4 ('<start>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('3', [])])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])\n5 ('<start>', [('<expr>', [('<expr>', [('<integer>', [('<digit>', [('1', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('2', [])])])]), ('+', []), ('<expr>', [('<expr>', [('<integer>', [('<digit>', [('3', [])])])]), ('+', []), ('<expr>', [('<integer>', [('<digit>', [('4', [])])])])])])])])\n"
]
],
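[
[
"We can run the same kind of check on the directly self-referring grammar from earlier (a sketch; it assumes `DIRECTLY_SELF_REFERRING` and `mystring` are still in scope): only non-recursive trees are extracted, and each one reproduces the input.",
"_____no_output_____"
],
[
"# Sketch: extract all non-recursive trees for the directly self-referring grammar.\nee_direct = EnhancedExtractor(EarleyParser(DIRECTLY_SELF_REFERRING), mystring)\nwhile True:\n    t = ee_direct.extract_a_tree()\n    if t is None:\n        break\n    print(t)\n    assert tree_to_string(t) == mystring",
"_____no_output_____"
]
],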
[
[
"#### More Earley Parsing\n\nA number of other optimizations exist for Earley parsers. A fast industrial strength Earley parser implementation is the [Marpa parser](https://jeffreykegler.github.io/Marpa-web-site/). Further, Earley parsing need not be restricted to character data. One may also parse streams (audio and video streams) \\cite{qi2018generalized} using a generalized Earley parser.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"Here are a few examples of the Earley parser in action.",
"_____no_output_____"
]
],
[
[
"mystring = \"1 + (2 * 3)\"\nearley = EarleyParser(EXPR_GRAMMAR)\nfor tree in earley.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
],
[
"mystring = \"1 * (2 + 3.35)\"\nfor tree in earley.parse(mystring):\n assert tree_to_string(tree) == mystring\n display(display_tree(tree))",
"_____no_output_____"
]
],
[
[
"In contrast to the `PEGParser`, above, the `EarleyParser` can handle arbitrary context-free grammars.",
"_____no_output_____"
],
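[
"As a quick illustration (a sketch reusing `A1_GRAMMAR` and `tree_to_string()` from earlier in this chapter), consider the left-recursive and ambiguous `A1_GRAMMAR`, which a PEG parser cannot handle directly. The `EarleyParser` returns one derivation tree per interpretation of the input:",
"_____no_output_____"
],
[
"# Sketch: an ambiguous, left-recursive grammar handled by the Earley parser.\n# '1+2+3' has two possible groupings: (1+2)+3 and 1+(2+3).\nambiguous_input = '1+2+3'\ntrees = list(EarleyParser(A1_GRAMMAR).parse(ambiguous_input))\nfor tree in trees:\n    assert tree_to_string(tree) == ambiguous_input\nlen(trees)",
"_____no_output_____"
],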
[
"### Excursion: Testing the Parsers\n\nWhile we have defined two parser variants, it would be nice to have some confirmation that our parses work well. While it is possible to formally prove that they work, it is much more satisfying to generate random grammars, their corresponding strings, and parse them using the same grammar.",
"_____no_output_____"
]
],
[
[
"def prod_line_grammar(nonterminals, terminals):\n g = {\n '<start>': ['<symbols>'],\n '<symbols>': ['<symbol><symbols>', '<symbol>'],\n '<symbol>': ['<nonterminals>', '<terminals>'],\n '<nonterminals>': ['<lt><alpha><gt>'],\n '<lt>': ['<'],\n '<gt>': ['>'],\n '<alpha>': nonterminals,\n '<terminals>': terminals\n }\n\n if not nonterminals:\n g['<nonterminals>'] = ['']\n del g['<lt>']\n del g['<alpha>']\n del g['<gt>']\n\n return g",
"_____no_output_____"
],
[
"syntax_diagram(prod_line_grammar([\"A\", \"B\", \"C\"], [\"1\", \"2\", \"3\"]))",
"start\n"
],
[
"def make_rule(nonterminals, terminals, num_alts):\n prod_grammar = prod_line_grammar(nonterminals, terminals)\n\n gf = GrammarFuzzer(prod_grammar, min_nonterminals=3, max_nonterminals=5)\n name = \"<%s>\" % ''.join(random.choices(string.ascii_uppercase, k=3))\n\n return (name, [gf.fuzz() for _ in range(num_alts)])",
"_____no_output_____"
],
[
"make_rule([\"A\", \"B\", \"C\"], [\"1\", \"2\", \"3\"], 3)",
"_____no_output_____"
],
[
"from Grammars import unreachable_nonterminals",
"_____no_output_____"
],
[
"def make_grammar(num_symbols=3, num_alts=3):\n terminals = list(string.ascii_lowercase)\n grammar = {}\n name = None\n for _ in range(num_symbols):\n nonterminals = [k[1:-1] for k in grammar.keys()]\n name, expansions = \\\n make_rule(nonterminals, terminals, num_alts)\n grammar[name] = expansions\n\n grammar[START_SYMBOL] = [name]\n\n # Remove unused parts\n for nonterminal in unreachable_nonterminals(grammar):\n del grammar[nonterminal]\n\n assert is_valid_grammar(grammar)\n\n return grammar",
"_____no_output_____"
],
[
"make_grammar()",
"_____no_output_____"
]
],
[
[
"Now we verify if our arbitrary grammars can be used by the Earley parser.",
"_____no_output_____"
]
],
[
[
"for i in range(5):\n my_grammar = make_grammar()\n print(my_grammar)\n parser = EarleyParser(my_grammar)\n mygf = GrammarFuzzer(my_grammar)\n s = mygf.fuzz()\n print(s)\n for tree in parser.parse(s):\n assert tree_to_string(tree) == s\n display_tree(tree)",
"{'<SCS>': ['ts', 'f', 'ng'], '<BQN>': ['wm<SCS>', '<SCS>wi', '<SCS>hw'], '<UZC>': ['gyk<BQN>br', '<SCS>iqp', '<BQN>vb'], '<start>': ['<UZC>']}\nfhwvb\n{'<CRN>': ['meze', 'de', 'cpcv'], '<AIS>': ['<CRN>hb', 'dc<CRN>', 'pa<CRN>x'], '<MAO>': ['<CRN>su', '<CRN>hj', '<CRN><AIS>g'], '<start>': ['<MAO>']}\ndehj\n{'<MFY>': ['y', 'w', ''], '<ZOY>': ['oe<MFY>', 'h<MFY>u', 'lowr'], '<HFT>': ['<ZOY>ro', '<ZOY>w', '<ZOY><ZOY>w'], '<start>': ['<HFT>']}\nlowrro\n{'<CYC>': ['cg', 'enl', 'ovd'], '<TUV>': ['<CYC>hf', '<CYC>nl', 'fhg'], '<MOQ>': ['g<TUV>g', '<CYC>ix', '<CYC><TUV><CYC>'], '<start>': ['<MOQ>']}\ncgix\n{'<WJJ>': ['dszdlh', 'j', 'fd'], '<RQM>': ['<WJJ>wx', 'xs<WJJ><WJJ>', '<WJJ>x'], '<JNY>': ['<WJJ>oa', '<WJJ><WJJ>cx', 'xd<RQM>'], '<start>': ['<JNY>']}\njoa\n"
]
],
[
[
"With this, we have completed both implementation and testing of *arbitrary* CFG, which can now be used along with `LangFuzzer` to generate better fuzzing inputs.",
"_____no_output_____"
],
[
"### End of Excursion",
"_____no_output_____"
],
[
"## Background\n\n\nNumerous parsing techniques exist that can parse a given string using a\ngiven grammar, and produce corresponding derivation tree or trees. However,\nsome of these techniques work only on specific classes of grammars.\nThese classes of grammars are named after the specific kind of parser\nthat can accept grammars of that category. That is, the upper bound for\nthe capabilities of the parser defines the grammar class named after that\nparser.\n\nThe *LL* and *LR* parsing are the main traditions in parsing. Here, *LL* means left-to-right, leftmost derivation, and it represents a top-down approach. On the other hand, and LR (left-to-right, rightmost derivation) represents a bottom-up approach. Another way to look at it is that LL parsers compute the derivation tree incrementally in *pre-order* while LR parsers compute the derivation tree in *post-order* \\cite{pingali2015graphical}).\n\nDifferent classes of grammars differ in the features that are available to\nthe user for writing a grammar of that class. That is, the corresponding\nkind of parser will be unable to parse a grammar that makes use of more\nfeatures than allowed. For example, the `A2_GRAMMAR` is an *LL*\ngrammar because it lacks left recursion, while `A1_GRAMMAR` is not an\n*LL* grammar. This is because an *LL* parser parses\nits input from left to right, and constructs the leftmost derivation of its\ninput by expanding the nonterminals it encounters. If there is a left\nrecursion in one of these rules, an *LL* parser will enter an infinite loop.\n\nSimilarly, a grammar is LL(k) if it can be parsed by an LL parser with k lookahead token, and LR(k) grammar can only be parsed with LR parser with at least k lookahead tokens. These grammars are interesting because both LL(k) and LR(k) grammars have $O(n)$ parsers, and can be used with relatively restricted computational budget compared to other grammars.\n\nThe languages for which one can provide an *LL(k)* grammar is called *LL(k)* languages (where k is the minimum lookahead required). Similarly, *LR(k)* is defined as the set of languages that have an *LR(k)* grammar. In terms of languages, LL(k) $\\subset$ LL(k+1) and LL(k) $\\subset$ LR(k), and *LR(k)* $=$ *LR(1)*. All deterministic *CFLs* have an *LR(1)* grammar. However, there exist *CFLs* that are inherently ambiguous \\cite{ogden1968helpful}, and for these, one can't provide an *LR(1)* grammar.\n\nThe other main parsing algorithms for *CFGs* are GLL \\cite{scott2010gll}, GLR \\cite{tomita1987efficient,tomita2012generalized}, and CYK \\cite{grune2008parsing}.\nThe ALL(\\*) (used by ANTLR) on the other hand is a grammar representation that uses *Regular Expression* like predicates (similar to advanced PEGs – see [Exercise](#Exercise-3:-PEG-Predicates)) rather than a fixed lookahead. Hence, ALL(\\*) can accept a larger class of grammars than CFGs.\n\nIn terms of computational limits of parsing, the main CFG parsers have a complexity of $O(n^3)$ for arbitrary grammars. However, parsing with arbitrary *CFG* is reducible to boolean matrix multiplication \\cite{Valiant1975} (and the reverse \\cite{Lee2002}). This is at present bounded by $O(2^{23728639}$) \\cite{LeGall2014}. Hence, worse case complexity for parsing arbitrary CFG is likely to remain close to cubic.\n\nRegarding PEGs, the actual class of languages that is expressible in *PEG* is currently unknown. In particular, we know that *PEGs* can express certain languages such as $a^n b^n c^n$. 
However, we do not know if there exist *CFLs* that are not expressible with *PEGs*. In Section 2.3, we provided an instance of a counter-intuitive PEG grammar. While important for our purposes (we use grammars for generation of inputs) this is not a criticism of parsing with PEGs. PEG focuses on writing grammars for recognizing a given language, and not necessarily in interpreting what language an arbitrary PEG might yield. Given a Context-Free Language to parse, it is almost always possible to write a grammar for it in PEG, and given that 1) a PEG can parse any string in $O(n)$ time, and 2) at present we know of no CFL that can't be expressed as a PEG, and 3) compared with *LR* grammars, a PEG is often more intuitive because it allows top-down interpretation, when writing a parser for a language, PEGs should be under serious consideration.",
"_____no_output_____"
],
[
"## Synopsis\n\nThis chapter introduces `Parser` classes, parsing a string into a _derivation tree_ as introduced in the [chapter on efficient grammar fuzzing](GrammarFuzzer.ipynb). Two important parser classes are provided:\n\n* [Parsing Expression Grammar parsers](#Parsing-Expression-Grammars) (`PEGParser`). These are very efficient, but limited to specific grammar structure. Notably, the alternatives represent *ordered choice*. That is, rather than choosing all rules that can potentially match, we stop at the first match that succeed.\n* [Earley parsers](#Parsing-Context-Free-Grammars) (`EarleyParser`). These accept any kind of context-free grammars, and explore all parsing alternatives (if any).\n\nUsing any of these is fairly easy, though. First, instantiate them with a grammar:",
"_____no_output_____"
]
],
[
[
"from Grammars import US_PHONE_GRAMMAR",
"_____no_output_____"
],
[
"us_phone_parser = EarleyParser(US_PHONE_GRAMMAR)",
"_____no_output_____"
]
],
[
[
"Then, use the `parse()` method to retrieve a list of possible derivation trees:",
"_____no_output_____"
]
],
[
[
"trees = us_phone_parser.parse(\"(555)987-6543\")\ntree = list(trees)[0]\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"These derivation trees can then be used for test generation, notably for mutating and recombining existing inputs.",
"_____no_output_____"
]
],
[
[
"# ignore\nfrom ClassDiagram import display_class_hierarchy",
"_____no_output_____"
],
[
"# ignore\ndisplay_class_hierarchy([PEGParser, EarleyParser],\n public_methods=[\n Parser.parse,\n Parser.__init__,\n Parser.grammar,\n Parser.start_symbol\n ],\n types={\n 'DerivationTree': DerivationTree,\n 'Grammar': Grammar\n },\n project='fuzzingbook')",
"_____no_output_____"
]
],
[
[
"## Lessons Learned\n\n* Grammars can be used to generate derivation trees for a given string.\n* Parsing Expression Grammars are intuitive, and easy to implement, but require care to write.\n* Earley Parsers can parse arbitrary Context Free Grammars.\n",
"_____no_output_____"
],
[
"## Next Steps\n\n* Use parsed inputs to [recombine existing inputs](LangFuzzer.ipynb)",
"_____no_output_____"
],
[
"## Exercises",
"_____no_output_____"
],
[
"### Exercise 1: An Alternative Packrat\n\nIn the _Packrat_ parser, we showed how one could implement a simple _PEG_ parser. That parser kept track of the current location in the text using an index. Can you modify the parser so that it simply uses the current substring rather than tracking the index? That is, it should no longer have the `at` parameter.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution:",
"_____no_output_____"
]
],
[
[
"class PackratParser(Parser):\n def parse_prefix(self, text):\n txt, res = self.unify_key(self.start_symbol(), text)\n return len(txt), [res]\n\n def parse(self, text):\n remain, res = self.parse_prefix(text)\n if remain:\n raise SyntaxError(\"at \" + res)\n return res\n\n def unify_rule(self, rule, text):\n results = []\n for token in rule:\n text, res = self.unify_key(token, text)\n if res is None:\n return text, None\n results.append(res)\n return text, results\n\n def unify_key(self, key, text):\n if key not in self.cgrammar:\n if text.startswith(key):\n return text[len(key):], (key, [])\n else:\n return text, None\n for rule in self.cgrammar[key]:\n text_, res = self.unify_rule(rule, text)\n if res:\n return (text_, (key, res))\n return text, None",
"_____no_output_____"
],
[
"mystring = \"1 + (2 * 3)\"\nfor tree in PackratParser(EXPR_GRAMMAR).parse(mystring):\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"_____no_output_____"
]
],
[
[
"### Exercise 2: More PEG Syntax\n\nThe _PEG_ syntax provides a few notational conveniences reminiscent of regular expressions. For example, it supports the following operators (letters `T` and `A` represents tokens that can be either terminal or nonterminal. `ε` is an empty string, and `/` is the ordered choice operator similar to the non-ordered choice operator `|`):\n\n* `T?` represents an optional greedy match of T and `A := T?` is equivalent to `A := T/ε`.\n* `T*` represents zero or more greedy matches of `T` and `A := T*` is equivalent to `A := T A/ε`.\n* `T+` represents one or more greedy matches – equivalent to `TT*`\n\nIf you look at the three notations above, each can be represented in the grammar in terms of basic syntax.\nRemember the exercise from [the chapter on grammars](Grammars.ipynb) that developed `define_ex_grammar()` that can represent grammars as Python code? extend `define_ex_grammar()` to `define_peg()` to support the above notational conveniences. The decorator should rewrite a given grammar that contains these notations to an equivalent grammar in basic syntax.",
"_____no_output_____"
],
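[
"To make the intended rewriting concrete, here is a small hand-written sketch (not the requested decorator) of what the three notations could desugar to in basic grammar syntax; the nonterminal names `<a-opt>`, `<a-star>`, and `<a-plus>` are made up for illustration:",
"_____no_output_____"
],
[
"# Hand-written desugarings for illustration only (hypothetical names):\n#   <a-opt>  ~ 'a'?   : 'a' or the empty string\n#   <a-star> ~ 'a'*   : zero or more 'a'\n#   <a-plus> ~ 'a'+   : 'a' followed by 'a'*\nDESUGARED_PEG_EXAMPLE = {\n    '<start>': ['<a-opt><a-star><a-plus>'],\n    '<a-opt>': ['a', ''],\n    '<a-star>': ['a<a-star>', ''],\n    '<a-plus>': ['a<a-star>'],\n}\nfor tree in EarleyParser(DESUGARED_PEG_EXAMPLE).parse('aa'):\n    assert tree_to_string(tree) == 'aa'",
"_____no_output_____"
],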
[
"### Exercise 3: PEG Predicates\n\nBeyond these notational conveniences, it also supports two predicates that can provide a powerful lookahead facility that does not consume any input.\n\n* `T&A` represents an _And-predicate_ that matches `T` if `T` is matched, and it is immediately followed by `A`\n* `T!A` represents a _Not-predicate_ that matches `T` if `T` is matched, and it is *not* immediately followed by `A`\n\nImplement these predicates in our _PEG_ parser.",
"_____no_output_____"
],
[
"### Exercise 4: Earley Fill Chart\n\nIn the `Earley Parser`, `Column` class, we keep the states both as a `list` and also as a `dict` even though `dict` is ordered. Can you explain why?\n\n**Hint**: see the `fill_chart` method.",
"_____no_output_____"
],
[
"**Solution.** Python allows us to append to a list in flight, while a dict, eventhough it is ordered does not allow that facility.\n\nThat is, the following will work\n\n```python\nvalues = [1]\nfor v in values:\n values.append(v*2)\n```\n\nHowever, the following will result in an error\n```python\nvalues = {1:1}\nfor v in values:\n values[v*2] = v*2\n```\n\nIn the `fill_chart`, we make use of this facility to modify the set of states we are iterating on, on the fly.",
"_____no_output_____"
],
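[
"To see the difference concretely, here is a small runnable demonstration (a sketch; the list growth is bounded only to keep the example finite):",
"_____no_output_____"
],
[
"# Appending to a list while iterating over it is allowed...\nvalues_list = [1]\nfor v in values_list:\n    if v < 8:\n        values_list.append(v * 2)\nprint(values_list)\n\n# ...but adding keys to a dict while iterating over it raises an error.\nvalues_dict = {1: 1}\ntry:\n    for v in values_dict:\n        values_dict[v * 2] = v * 2\nexcept RuntimeError as e:\n    print('RuntimeError:', e)",
"_____no_output_____"
],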
[
"### Exercise 5: Leo Parser\n\nOne of the problems with the original Earley parser is that while it can parse strings using arbitrary _Context Free Gramamrs_, its performance on right-recursive grammars is quadratic. That is, it takes $O(n^2)$ runtime and space for parsing with right-recursive grammars. For example, consider the parsing of the following string by two different grammars `LR_GRAMMAR` and `RR_GRAMMAR`.",
"_____no_output_____"
]
],
[
[
"mystring = 'aaaaaa'",
"_____no_output_____"
]
],
[
[
"To see the problem, we need to enable logging. Here is the logged version of parsing with the `LR_GRAMMAR`",
"_____no_output_____"
]
],
[
[
"result = EarleyParser(LR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass # consume the generator so that we can see the logs",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= <A> a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= <A> a |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= <A> a |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= <A> a |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= <A> a |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= <A> a |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"Compare that to the parsing of `RR_GRAMMAR` as seen below:",
"_____no_output_____"
]
],
[
[
"result = EarleyParser(RR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(1,2)\n<A>:= a <A> |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(2,3)\n<A>:= a <A> |(1,3)\n<A>:= a <A> |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(3,4)\n<A>:= a <A> |(2,4)\n<A>:= a <A> |(1,4)\n<A>:= a <A> |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(4,5)\n<A>:= a <A> |(3,5)\n<A>:= a <A> |(2,5)\n<A>:= a <A> |(1,5)\n<A>:= a <A> |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= |(6,6)\n<A>:= a <A> |(5,6)\n<A>:= a <A> |(4,6)\n<A>:= a <A> |(3,6)\n<A>:= a <A> |(2,6)\n<A>:= a <A> |(1,6)\n<A>:= a <A> |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"As can be seen from the parsing log for each letter, the number of states with representation `<A>: a <A> ● (i, j)` increases at each stage, and these are simply a left over from the previous letter. They do not contribute anything more to the parse other than to simply complete these entries. However, they take up space, and require resources for inspection, contributing a factor of `n` in analysis.\n\nJoop Leo \\cite{Leo1991} found that this inefficiency can be avoided by detecting right recursion. The idea is that before starting the `completion` step, check whether the current item has a _deterministic reduction path_. If such a path exists, add a copy of the topmost element of the _deteministic reduction path_ to the current column, and return. If not, perform the original `completion` step.\n\n\n**Definition 2.1**: An item is said to be on the deterministic reduction path above $[A \\rightarrow \\gamma., i]$ if it is $[B \\rightarrow \\alpha A ., k]$ with $[B \\rightarrow \\alpha . A, k]$ being the only item in $ I_i $ with the dot in front of A, or if it is on the deterministic reduction path above $[B \\rightarrow \\alpha A ., k]$. An item on such a path is called *topmost* one if there is no item on the deterministic reduction path above it\\cite{Leo1991}.",
"_____no_output_____"
],
[
"Finding a _deterministic reduction path_ is as follows:\n\nGiven a complete state, represented by `<A> : seq_1 ● (s, e)` where `s` is the starting column for this rule, and `e` the current column, there is a _deterministic reduction path_ **above** it if two constraints are satisfied.\n\n1. There exist a *single* item in the form `<B> : seq_2 ● <A> (k, s)` in column `s`.\n2. That should be the *single* item in s with dot in front of `<A>`\n\nThe resulting item is of the form `<B> : seq_2 <A> ● (k, e)`, which is simply item from (1) advanced, and is considered above `<A>:.. (s, e)` in the deterministic reduction path.\nThe `seq_1` and `seq_2` are arbitrary symbol sequences.\n\nThis forms the following chain of links, with `<A>:.. (s_1, e)` being the child of `<B>:.. (s_2, e)` etc.",
"_____no_output_____"
],
[
"Here is one way to visualize the chain:\n```\n<C> : seq_3 <B> ● (s_3, e) \n | constraints satisfied by <C> : seq_3 ● <B> (s_3, s_2)\n <B> : seq_2 <A> ● (s_2, e) \n | constraints satisfied by <B> : seq_2 ● <A> (s_2, s_1)\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"Essentially, what we want to do is to identify potential deterministic right recursion candidates, perform completion on them, and *throw away the result*. We do this until we reach the top. See Grune et al.~\\cite{grune2008parsing} for further information.",
"_____no_output_____"
],
[
"Note that the completions are in the same column (`e`), with each candidates with constraints satisfied \nin further and further earlier columns (as shown below):\n```\n<C> : seq_3 ● <B> (s_3, s_2) --> <C> : seq_3 <B> ● (s_3, e)\n |\n <B> : seq_2 ● <A> (s_2, s_1) --> <B> : seq_2 <A> ● (s_2, e) \n |\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"Following this chain, the topmost item is the item `<C>:.. (s_3, e)` that does not have a parent. The topmost item needs to be saved is called a *transitive* item by Leo, and it is associated with the non-terminal symbol that started the lookup. The transitive item needs to be added to each column we inspect.",
"_____no_output_____"
],
[
"Here is the skeleton for the parser `LeoParser`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(EarleyParser):\n def complete(self, col, state):\n return self.leo_complete(col, state)\n\n def leo_complete(self, col, state):\n detred = self.deterministic_reduction(state)\n if detred:\n col.add(detred.copy())\n else:\n self.earley_complete(col, state)\n\n def deterministic_reduction(self, state):\n raise NotImplementedError",
"_____no_output_____"
]
],
[
[
"Can you implement the `deterministic_reduction()` method to obtain the topmost element?",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution:",
"_____no_output_____"
],
[
"First, we update our `Column` class with the ability to add transitive items. Note that, while Leo asks the transitive to be added to the set $ I_k $ there is no actual requirement for the transitive states to be added to the `states` list. The transitive items are only intended for memoization and not for the `fill_chart()` method. Hence, we track them separately.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def __init__(self, index, letter):\n self.index, self.letter = index, letter\n self.states, self._unique, self.transitives = [], {}, {}\n\n def add_transitive(self, key, state):\n assert key not in self.transitives\n self.transitives[key] = state\n return self.transitives[key]",
"_____no_output_____"
]
],
[
[
"Remember the picture we drew of the deterministic path?\n```\n <C> : seq_3 <B> ● (s_3, e) \n | constraints satisfied by <C> : seq_3 ● <B> (s_3, s_2)\n <B> : seq_2 <A> ● (s_2, e) \n | constraints satisfied by <B> : seq_2 ● <A> (s_2, s_1)\n <A> : seq_1 ● (s_1, e)\n```",
"_____no_output_____"
],
[
"We define a function `uniq_postdot()` that given the item `<A> := seq_1 ● (s_1, e)`, returns a `<B> : seq_2 ● <A> (s_2, s_1)` that satisfies the constraints mentioned in the above picture.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def uniq_postdot(self, st_A):\n col_s1 = st_A.s_col\n parent_states = [\n s for s in col_s1.states if s.expr and s.at_dot() == st_A.name\n ]\n if len(parent_states) > 1:\n return None\n matching_st_B = [s for s in parent_states if s.dot == len(s.expr) - 1]\n return matching_st_B[0] if matching_st_B else None",
"_____no_output_____"
],
[
"lp = LeoParser(RR_GRAMMAR)\n[(str(s), str(lp.uniq_postdot(s))) for s in columns[-1].states]",
"_____no_output_____"
]
],
[
[
"We next define the function `get_top()` that is the core of deterministic reduction which gets the topmost state above the current state (`A`).",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def get_top(self, state_A):\n st_B_inc = self.uniq_postdot(state_A)\n if not st_B_inc:\n return None\n \n t_name = st_B_inc.name\n if t_name in st_B_inc.e_col.transitives:\n return st_B_inc.e_col.transitives[t_name]\n\n st_B = st_B_inc.advance()\n\n top = self.get_top(st_B) or st_B\n return st_B_inc.e_col.add_transitive(t_name, top)",
"_____no_output_____"
]
],
[
[
"Once we have the machinery in place, `deterministic_reduction()` itself is simply a wrapper to call `get_top()`",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def deterministic_reduction(self, state):\n return self.get_top(state)",
"_____no_output_____"
],
[
"lp = LeoParser(RR_GRAMMAR)\ncolumns = lp.chart_parse(mystring, lp.start_symbol())\n[(str(s), str(lp.get_top(s))) for s in columns[-1].states]",
"_____no_output_____"
]
],
[
[
"Now, both LR and RR grammars should work within $O(n)$ bounds.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(RR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(2,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(4,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= |(6,6)\n<A>:= a <A> |(5,6)\n<start>:= <A> |(0,6) \n\n"
]
],
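[
[
"As a rough sanity check (a sketch, not a rigorous measurement), we can also count the Earley items per column directly; with the Leo optimization, the per-column count for the right-recursive grammar no longer grows with the column index:",
"_____no_output_____"
],
[
"# Count Earley items per column for RR_GRAMMAR under the Leo optimization.\nlp_check = LeoParser(RR_GRAMMAR)\ncheck_columns = lp_check.chart_parse(mystring, lp_check.start_symbol())\n[len(col.states) for col in check_columns]",
"_____no_output_____"
]
],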
[
[
"We verify the Leo parser with a few more right recursive grammars.",
"_____no_output_____"
]
],
[
[
"RR_GRAMMAR2 = {\n '<start>': ['<A>'],\n '<A>': ['ab<A>', ''],\n}\nmystring2 = 'ababababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR2, log=True).parse(mystring2)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<A>:= a b <A> |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<A>:= a b <A> |(2,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<A>:= a b <A> |(4,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<A>:= a b <A> |(6,8)\n<start>:= <A> |(0,8) \n\na chart[9]\n \n\nb chart[10]\n<A>:= |(10,10)\n<A>:= a b <A> |(8,10)\n<start>:= <A> |(0,10) \n\n"
],
[
"RR_GRAMMAR3 = {\n '<start>': ['c<A>'],\n '<A>': ['ab<A>', ''],\n}\nmystring3 = 'cababababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR3, log=True).parse(mystring3)\nfor _ in result: pass",
"None chart[0]\n \n\nc chart[1]\n<A>:= |(1,1)\n<start>:= c <A> |(0,1) \n\na chart[2]\n \n\nb chart[3]\n<A>:= |(3,3)\n<A>:= a b <A> |(1,3)\n<start>:= c <A> |(0,3) \n\na chart[4]\n \n\nb chart[5]\n<A>:= |(5,5)\n<A>:= a b <A> |(3,5)\n<start>:= c <A> |(0,5) \n\na chart[6]\n \n\nb chart[7]\n<A>:= |(7,7)\n<A>:= a b <A> |(5,7)\n<start>:= c <A> |(0,7) \n\na chart[8]\n \n\nb chart[9]\n<A>:= |(9,9)\n<A>:= a b <A> |(7,9)\n<start>:= c <A> |(0,9) \n\na chart[10]\n \n\nb chart[11]\n<A>:= |(11,11)\n<A>:= a b <A> |(9,11)\n<start>:= c <A> |(0,11) \n\n"
],
[
"RR_GRAMMAR4 = {\n '<start>': ['<A>c'],\n '<A>': ['ab<A>', ''],\n}\nmystring4 = 'ababababc'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR4, log=True).parse(mystring4)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<A>:= a b <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<A>:= a b <A> |(2,4)\n<A>:= a b <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<A>:= a b <A> |(4,6)\n<A>:= a b <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<A>:= a b <A> |(6,8)\n<A>:= a b <A> |(0,8) \n\nc chart[9]\n<start>:= <A> c |(0,9) \n\n"
],
[
"RR_GRAMMAR5 = {\n '<start>': ['<A>'],\n '<A>': ['ab<B>', ''],\n '<B>': ['<A>'],\n}\nmystring5 = 'abababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR5, log=True).parse(mystring5)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= a b <B> |(0,2)\n<A>:= |(2,2)\n<B>:= <A> |(2,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= a b <B> |(2,4)\n<A>:= |(4,4)\n<B>:= <A> |(4,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= a b <B> |(4,6)\n<A>:= |(6,6)\n<B>:= <A> |(6,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= a b <B> |(6,8)\n<A>:= |(8,8)\n<B>:= <A> |(8,8)\n<start>:= <A> |(0,8) \n\n"
],
[
"RR_GRAMMAR6 = {\n '<start>': ['<A>'],\n '<A>': ['a<B>', ''],\n '<B>': ['b<A>'],\n}\nmystring6 = 'abababab'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR6, log=True).parse(mystring6)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n \n\nb chart[2]\n<A>:= |(2,2)\n<B>:= b <A> |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n \n\nb chart[4]\n<A>:= |(4,4)\n<B>:= b <A> |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n \n\nb chart[6]\n<A>:= |(6,6)\n<B>:= b <A> |(5,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n \n\nb chart[8]\n<A>:= |(8,8)\n<B>:= b <A> |(7,8)\n<start>:= <A> |(0,8) \n\n"
],
[
"RR_GRAMMAR7 = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', 'a'],\n}\nmystring7 = 'aaaaaaaa'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR7, log=True).parse(mystring7)\nfor _ in result: pass",
"None chart[0]\n \n\na chart[1]\n<A>:= a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= a |(1,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= a |(2,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= a |(3,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= a |(4,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= a |(5,6)\n<start>:= <A> |(0,6) \n\na chart[7]\n<A>:= a |(6,7)\n<start>:= <A> |(0,7) \n\na chart[8]\n<A>:= a |(7,8)\n<start>:= <A> |(0,8) \n\n"
]
],
[
[
"We verify that our parser works correctly on `LR_GRAMMAR` too.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(LR_GRAMMAR, log=True).parse(mystring)\nfor _ in result: pass",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0) \n\na chart[1]\n<A>:= <A> a |(0,1)\n<start>:= <A> |(0,1) \n\na chart[2]\n<A>:= <A> a |(0,2)\n<start>:= <A> |(0,2) \n\na chart[3]\n<A>:= <A> a |(0,3)\n<start>:= <A> |(0,3) \n\na chart[4]\n<A>:= <A> a |(0,4)\n<start>:= <A> |(0,4) \n\na chart[5]\n<A>:= <A> a |(0,5)\n<start>:= <A> |(0,5) \n\na chart[6]\n<A>:= <A> a |(0,6)\n<start>:= <A> |(0,6) \n\n"
]
],
[
[
"__Advanced:__ We have fixed the complexity bounds. However, because we are saving only the topmost item of a right recursion, we need to fix our parser to be aware of our fix while extracting parse trees. Can you fix it?\n\n__Hint:__ Leo suggests simply transforming the Leo item sets to normal Earley sets, with the results from deterministic reduction expanded to their originals. For that, keep in mind the picture of constraint chain we drew earlier.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
],
[
"We first change the definition of `add_transitive()` so that results of deterministic reduction can be identified later.",
"_____no_output_____"
]
],
[
[
"class Column(Column):\n def add_transitive(self, key, state):\n assert key not in self.transitives\n self.transitives[key] = TState(state.name, state.expr, state.dot,\n state.s_col, state.e_col)\n return self.transitives[key]",
"_____no_output_____"
]
],
[
[
"We also need a `back()` method to create the constraints.",
"_____no_output_____"
]
],
[
[
"class State(State):\n def back(self):\n return TState(self.name, self.expr, self.dot - 1, self.s_col, self.e_col)",
"_____no_output_____"
]
],
[
[
"We update `copy()` to make `TState` items instead.",
"_____no_output_____"
]
],
[
[
"class TState(State):\n def copy(self):\n return TState(self.name, self.expr, self.dot, self.s_col, self.e_col)",
"_____no_output_____"
]
],
[
[
"We now modify the `LeoParser` to keep track of the chain of constrains that we mentioned earlier.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def __init__(self, grammar, **kwargs):\n super().__init__(grammar, **kwargs)\n self._postdots = {}",
"_____no_output_____"
]
],
[
[
"Next, we update the `uniq_postdot()` so that it tracks the chain of links.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def uniq_postdot(self, st_A):\n col_s1 = st_A.s_col\n parent_states = [\n s for s in col_s1.states if s.expr and s.at_dot() == st_A.name\n ]\n if len(parent_states) > 1:\n return None\n matching_st_B = [s for s in parent_states if s.dot == len(s.expr) - 1]\n if matching_st_B:\n self._postdots[matching_st_B[0]._t()] = st_A\n return matching_st_B[0]\n return None\n ",
"_____no_output_____"
]
],
[
[
"We next define a method `expand_tstate()` that, when given a `TState`, generates all the intermediate links that we threw away earlier for a given end column.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def expand_tstate(self, state, e):\n if state._t() not in self._postdots:\n return\n c_C = self._postdots[state._t()]\n e.add(c_C.advance())\n self.expand_tstate(c_C.back(), e)",
"_____no_output_____"
]
],
[
[
"We define a `rearrange()` method to generate a reversed table where each column contains states that start at that column.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def rearrange(self, table):\n f_table = [Column(c.index, c.letter) for c in table]\n for col in table:\n for s in col.states:\n f_table[s.s_col.index].states.append(s)\n return f_table",
"_____no_output_____"
]
],
[
[
"Here is the rearranged table. (Can you explain why the Column 0 has a large number of `<start>` items?)",
"_____no_output_____"
]
],
[
[
"ep = LeoParser(RR_GRAMMAR)\ncolumns = ep.chart_parse(mystring, ep.start_symbol())\nr_table = ep.rearrange(columns)\nfor col in r_table:\n print(col, \"\\n\")",
"None chart[0]\n<A>:= |(0,0)\n<start>:= <A> |(0,0)\n<A>:= a <A> |(0,1)\n<start>:= <A> |(0,1)\n<start>:= <A> |(0,2)\n<start>:= <A> |(0,3)\n<start>:= <A> |(0,4)\n<start>:= <A> |(0,5)\n<start>:= <A> |(0,6) \n\na chart[1]\n<A>:= |(1,1)\n<A>:= a <A> |(1,2) \n\na chart[2]\n<A>:= |(2,2)\n<A>:= a <A> |(2,3) \n\na chart[3]\n<A>:= |(3,3)\n<A>:= a <A> |(3,4) \n\na chart[4]\n<A>:= |(4,4)\n<A>:= a <A> |(4,5) \n\na chart[5]\n<A>:= |(5,5)\n<A>:= a <A> |(5,6) \n\na chart[6]\n<A>:= |(6,6) \n\n"
]
],
[
[
"We save the result of rearrange before going into `parse_forest()`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def parse(self, text):\n cursor, states = self.parse_prefix(text)\n start = next((s for s in states if s.finished()), None)\n if cursor < len(text) or not start:\n raise SyntaxError(\"at \" + repr(text[cursor:]))\n\n self.r_table = self.rearrange(self.table)\n forest = self.extract_trees(self.parse_forest(self.table, start))\n for tree in forest:\n yield self.prune_tree(tree)",
"_____no_output_____"
]
],
[
[
"Finally, during `parse_forest()`, we first check to see if it is a transitive state, and if it is, expand it to the original sequence of states using `traverse_constraints()`.",
"_____no_output_____"
]
],
[
[
"class LeoParser(LeoParser):\n def parse_forest(self, chart, state):\n if isinstance(state, TState):\n self.expand_tstate(state.back(), state.e_col)\n \n return super().parse_forest(chart, state)",
"_____no_output_____"
]
],
[
[
"This completes our implementation of `LeoParser`.",
"_____no_output_____"
],
[
"We check whether the previously defined right recursive grammars parse and return the correct parse trees.",
"_____no_output_____"
]
],
[
[
"result = LeoParser(RR_GRAMMAR).parse(mystring)\nfor tree in result:\n assert mystring == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR2).parse(mystring2)\nfor tree in result:\n assert mystring2 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR3).parse(mystring3)\nfor tree in result:\n assert mystring3 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR4).parse(mystring4)\nfor tree in result:\n assert mystring4 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR5).parse(mystring5)\nfor tree in result:\n assert mystring5 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR6).parse(mystring6)\nfor tree in result:\n assert mystring6 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR7).parse(mystring7)\nfor tree in result:\n assert mystring7 == tree_to_string(tree)",
"_____no_output_____"
],
[
"result = LeoParser(LR_GRAMMAR).parse(mystring)\nfor tree in result:\n assert mystring == tree_to_string(tree)",
"_____no_output_____"
],
[
"RR_GRAMMAR8 = {\n '<start>': ['<A>'],\n '<A>': ['a<A>', 'a']\n}\nmystring8 = 'aa'",
"_____no_output_____"
],
[
"RR_GRAMMAR9 = {\n '<start>': ['<A>'],\n '<A>': ['<B><A>', '<B>'],\n '<B>': ['b']\n}\nmystring9 = 'bbbbbbb'",
"_____no_output_____"
],
[
"result = LeoParser(RR_GRAMMAR8).parse(mystring8)\nfor tree in result:\n print(repr(tree_to_string(tree)))\n assert mystring8 == tree_to_string(tree)",
"'aa'\n'aa'\n"
],
[
"result = LeoParser(RR_GRAMMAR9).parse(mystring9)\nfor tree in result:\n print(repr(tree_to_string(tree)))\n assert mystring9 == tree_to_string(tree)",
"'bbbbbbb'\n'bbbbbbb'\n"
]
],
[
[
"### Exercise 6: Filtered Earley Parser",
"_____no_output_____"
],
[
"One of the problems with our Earley and Leo Parsers is that it can get stuck in infinite loops when parsing with grammars that contain token repetitions in alternatives. For example, consider the grammar below.",
"_____no_output_____"
]
],
[
[
"RECURSION_GRAMMAR: Grammar = {\n \"<start>\": [\"<A>\"],\n \"<A>\": [\"<A>\", \"<A>aa\", \"AA\", \"<B>\"],\n \"<B>\": [\"<C>\", \"<C>cc\", \"CC\"],\n \"<C>\": [\"<B>\", \"<B>bb\", \"BB\"]\n}",
"_____no_output_____"
]
],
[
[
"With this grammar, one can produce an infinite chain of derivations of `<A>`, (direct recursion) or an infinite chain of derivations of `<B> -> <C> -> <B> ...` (indirect recursion). The problem is that, our implementation can get stuck trying to derive one of these infinite chains. One possibility is to use the `LazyExtractor`. Another, is to simply avoid generating such chains.",
"_____no_output_____"
]
],
[
[
"from ExpectError import ExpectTimeout",
"_____no_output_____"
],
[
"with ExpectTimeout(1, print_traceback=False):\n mystring = 'AA'\n parser = LeoParser(RECURSION_GRAMMAR)\n tree, *_ = parser.parse(mystring)\n assert tree_to_string(tree) == mystring\n display_tree(tree)",
"RecursionError: maximum recursion depth exceeded (expected)\n"
]
],
[
[
"Can you implement a solution such that any tree that contains such a chain is discarded?",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
]
],
[
[
"class FilteredLeoParser(LeoParser):\n def forest(self, s, kind, seen, chart):\n return self.parse_forest(chart, s, seen) if kind == 'n' else (s, [])\n\n def parse_forest(self, chart, state, seen=None):\n if isinstance(state, TState):\n self.expand_tstate(state.back(), state.e_col)\n\n def was_seen(chain, s):\n if isinstance(s, str):\n return False\n if len(s.expr) > 1:\n return False\n return s in chain\n\n if len(state.expr) > 1: # things get reset if we have a non loop\n seen = set()\n elif seen is None: # initialization\n seen = {state}\n\n pathexprs = self.parse_paths(state.expr, chart, state.s_col.index,\n state.e_col.index) if state.expr else []\n return state.name, [[(s, k, seen | {s}, chart)\n for s, k in reversed(pathexpr)\n if not was_seen(seen, s)] for pathexpr in pathexprs]",
"_____no_output_____"
]
],
[
[
"With the `FilteredLeoParser`, we should be able to recover minimal parse trees in reasonable time.",
"_____no_output_____"
]
],
[
[
"mystring = 'AA'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'AAaa'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'AAaaaa'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'CC'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BBcc'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BB'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
],
[
"mystring = 'BBccbb'\nparser = FilteredLeoParser(RECURSION_GRAMMAR)\ntree, *_ = parser.parse(mystring)\nassert tree_to_string(tree) == mystring\ndisplay_tree(tree)",
"_____no_output_____"
]
],
[
[
"As can be seen, we are able to recover minimal parse trees without hitting on infinite chains.",
"_____no_output_____"
],
[
"### Exercise 7: Iterative Earley Parser\n\nRecursive algorithms are quite handy in some cases but sometimes we might want to have iteration instead of recursion due to memory or speed problems. \n\nCan you implement an iterative version of the `EarleyParser`? \n\n__Hint:__ In general, you can use a stack to replace a recursive algorithm with an iterative one. An easy way to do this is pushing the parameters onto a stack instead of passing them to the recursive function.",
"_____no_output_____"
],
[
"**Solution.** Here is a possible solution.",
"_____no_output_____"
],
[
"First, we define `parse_paths()` that extract paths from a parsed expression, which is very similar to the original.",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(EarleyParser):\n def parse_paths(self, named_expr_, chart, frm, til_):\n return_paths = []\n path_build_stack = [(named_expr_, til_, [])]\n\n def iter_paths(path_prefix, path, start, k, e):\n x = path_prefix + [(path, k)]\n if not e:\n return_paths.extend([x] if start == frm else [])\n else:\n path_build_stack.append((e, start, x))\n\n while path_build_stack:\n named_expr, til, path_prefix = path_build_stack.pop()\n *expr, var = named_expr\n\n starts = None\n if var not in self.cgrammar:\n starts = ([(var, til - len(var), 't')]\n if til > 0 and chart[til].letter == var else [])\n else:\n starts = [(s, s.s_col.index, 'n') for s in chart[til].states\n if s.finished() and s.name == var]\n\n for s, start, k in starts:\n iter_paths(path_prefix, s, start, k, expr)\n\n return return_paths",
"_____no_output_____"
]
],
[
[
"Next we used these paths to recover the forest data structure using `parse_forest()`. Since `parse_forest()` does not recurse, we reuse the original definition. Next, we define `extract_a_tree()`",
"_____no_output_____"
],
[
"Now we are ready to extract trees from the forest using `extract_a_tree()`",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(IterativeEarleyParser):\n def choose_a_node_to_explore(self, node_paths, level_count):\n first, *rest = node_paths\n return first\n\n def extract_a_tree(self, forest_node_):\n start_node = (forest_node_[0], [])\n tree_build_stack = [(forest_node_, start_node[-1], 0)]\n\n while tree_build_stack:\n forest_node, tree, level_count = tree_build_stack.pop()\n name, paths = forest_node\n\n if not paths:\n tree.append((name, []))\n else:\n new_tree = []\n current_node = self.choose_a_node_to_explore(paths, level_count)\n for p in reversed(current_node):\n new_forest_node = self.forest(*p)\n tree_build_stack.append((new_forest_node, new_tree, level_count + 1))\n tree.append((name, new_tree))\n\n return start_node",
"_____no_output_____"
]
],
[
[
"For now, we simply extract the first tree found.",
"_____no_output_____"
]
],
[
[
"class IterativeEarleyParser(IterativeEarleyParser):\n def extract_trees(self, forest):\n yield self.extract_a_tree(forest)",
"_____no_output_____"
]
],
[
[
"Let's see if it works with some of the grammars we have seen so far.",
"_____no_output_____"
]
],
[
[
"test_cases: List[Tuple[Grammar, str]] = [\n (A1_GRAMMAR, '1-2-3+4-5'),\n (A2_GRAMMAR, '1+2'),\n (A3_GRAMMAR, '1+2+3-6=6-1-2-3'),\n (LR_GRAMMAR, 'aaaaa'),\n (RR_GRAMMAR, 'aa'),\n (DIRECTLY_SELF_REFERRING, 'select a from a'),\n (INDIRECTLY_SELF_REFERRING, 'select a from a'),\n (RECURSION_GRAMMAR, 'AA'),\n (RECURSION_GRAMMAR, 'AAaaaa'),\n (RECURSION_GRAMMAR, 'BBccbb')\n]\n\nfor i, (grammar, text) in enumerate(test_cases):\n print(i, text)\n tree, *_ = IterativeEarleyParser(grammar).parse(text)\n assert text == tree_to_string(tree)",
"0 1-2-3+4-5\n1 1+2\n2 1+2+3-6=6-1-2-3\n3 aaaaa\n4 aa\n5 select a from a\n6 select a from a\n7 AA\n8 AAaaaa\n9 BBccbb\n"
]
],
[
[
"As can be seen, our `IterativeEarleyParser` is able to handle recursive grammars. However, it can only extract the first tree found. What should one do to get all possible parses? What we can do, is to keep track of options to explore at each `choose_a_node_to_explore()`. Next, capture in the nodes explored in a tree data structure, adding new paths each time a new leaf is expanded. See the `TraceTree` datastructure in the [chapter on Concolic fuzzing](ConcolicFuzzer.ipynb) for an example.",
"_____no_output_____"
],
[
"### Exercise 8: First Set of a Nonterminal\n\nWe previously gave a way to extract a the `nullable` (epsilon) set, which is often used for parsing.\nAlong with `nullable`, parsing algorithms often use two other sets [`first` and `follow`](https://en.wikipedia.org/wiki/Canonical_LR_parser#FIRST_and_FOLLOW_sets).\nThe first set of a terminal symbol is itself, and the first set of a nonterminal is composed of terminal symbols that can come at the beginning of any derivation\nof that nonterminal. The first set of any nonterminal that can derive the empty string should contain `EPSILON`. For example, using our `A1_GRAMMAR`, the first set of both `<expr>` and `<start>` is `{0,1,2,3,4,5,6,7,8,9}`. The extraction first set for any self-recursive nonterminal is simple enough. One simply has to recursively compute the first set of the first element of its choice expressions. The computation of `first` set for a self-recursive nonterminal is tricky. One has to recursively compute the first set until one is sure that no more terminals can be added to the first set.\n\nCan you implement the `first` set using our `fixpoint()` decorator?",
"_____no_output_____"
],
[
"**Solution.** The first set of all terminals is the set containing just themselves. So we initialize that first. Then we update the first set with rules that derive empty strings.",
"_____no_output_____"
]
],
[
[
"def firstset(grammar, nullable):\n first = {i: {i} for i in terminals(grammar)}\n for k in grammar:\n first[k] = {EPSILON} if k in nullable else set()\n return firstset_((rules(grammar), first, nullable))[1]",
"_____no_output_____"
]
],
[
[
"Finally, we rely on the `fixpoint` to update the first set with the contents of the current first set until the first set stops changing.",
"_____no_output_____"
]
],
[
[
"def first_expr(expr, first, nullable):\n tokens = set()\n for token in expr:\n tokens |= first[token]\n if token not in nullable:\n break\n return tokens",
"_____no_output_____"
],
[
"@fixpoint\ndef firstset_(arg):\n (rules, first, epsilon) = arg\n for A, expression in rules:\n first[A] |= first_expr(expression, first, epsilon)\n return (rules, first, epsilon)",
"_____no_output_____"
],
[
"firstset(canonical(A1_GRAMMAR), EPSILON)",
"_____no_output_____"
]
],
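[
[
"As a quick check (a sketch, reusing `canonical()` from earlier in this chapter), we can confirm the claim above that the first set of both `<expr>` and `<start>` in `A1_GRAMMAR` is the set of digits:",
"_____no_output_____"
],
[
"# Verify the first sets claimed above for A1_GRAMMAR.\nfirst_a1 = firstset(canonical(A1_GRAMMAR), EPSILON)\nassert first_a1['<start>'] == set('0123456789')\nassert first_a1['<expr>'] == set('0123456789')\nfirst_a1['<expr>']",
"_____no_output_____"
]
],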
[
[
"### Exercise 9: Follow Set of a Nonterminal\n\nThe follow set definition is similar to the first set. The follow set of a nonterminal is the set of terminals that can occur just after that nonterminal is used in any derivation. The follow set of the start symbol is `EOF`, and the follow set of any nonterminal is the super set of first sets of all symbols that come after it in any choice expression.\n\nFor example, the follow set of `<expr>` in `A1_GRAMMAR` is the set `{EOF, +, -}`.\n\nAs in the previous exercise, implement the `followset()` using the `fixpoint()` decorator.",
"_____no_output_____"
],
[
"**Solution.** The implementation of `followset()` is similar to `firstset()`. We first initialize the follow set with `EOF`, get the epsilon and first sets, and use the `fixpoint()` decorator to iteratively compute the follow set until nothing changes.",
"_____no_output_____"
]
],
[
[
"EOF = '\\0'",
"_____no_output_____"
],
[
"def followset(grammar, start):\n follow = {i: set() for i in grammar}\n follow[start] = {EOF}\n\n epsilon = nullable(grammar)\n first = firstset(grammar, epsilon)\n return followset_((grammar, epsilon, first, follow))[-1]",
"_____no_output_____"
]
],
[
[
"Given the current follow set, one can update the follow set as follows:",
"_____no_output_____"
]
],
[
[
"@fixpoint\ndef followset_(arg):\n grammar, epsilon, first, follow = arg\n for A, expression in rules(grammar):\n f_B = follow[A]\n for t in reversed(expression):\n if t in grammar:\n follow[t] |= f_B\n f_B = f_B | first[t] if t in epsilon else (first[t] - {EPSILON})\n\n return (grammar, epsilon, first, follow)",
"_____no_output_____"
],
[
"followset(canonical(A1_GRAMMAR), START_SYMBOL)",
"_____no_output_____"
]
],
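[
[
"Again as a quick check (a sketch), we can confirm the claim above that the follow set of `<expr>` in `A1_GRAMMAR` is `{EOF, +, -}`:",
"_____no_output_____"
],
[
"# Verify the follow set claimed above for A1_GRAMMAR.\nfollow_a1 = followset(canonical(A1_GRAMMAR), START_SYMBOL)\nassert follow_a1['<expr>'] == {EOF, '+', '-'}\nfollow_a1['<expr>']",
"_____no_output_____"
]
],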
[
[
"### Exercise 10: A LL(1) Parser\n\nAs we mentioned previously, there exist other kinds of parsers that operate left-to-right with right most derivation (*LR(k)*) or left-to-right with left most derivation (*LL(k)*) with _k_ signifying the amount of lookahead the parser is permitted to use.\n\nWhat should one do with the lookahead? That lookahead can be used to determine which rule to apply. In the case of an *LL(1)* parser, the rule to apply is determined by looking at the _first_ set of the different rules. We previously implemented `first_expr()` that takes a an expression, the set of `nullables`, and computes the first set of that rule.\n\nIf a rule can derive an empty set, then that rule may also be applicable if of sees the `follow()` set of the corresponding nonterminal.",
"_____no_output_____"
],
[
"#### Part 1: A LL(1) Parsing Table\n\nThe first part of this exercise is to implement the _parse table_ that describes what action to take for an *LL(1)* parser on seeing a terminal symbol on lookahead. The table should be in the form of a _dictionary_ such that the keys represent the nonterminal symbol, and the value should contain another dictionary with keys as terminal symbols and the particular rule to continue parsing as the value.\n\nLet us illustrate this table with an example. The `parse_table()` method populates a `self.table` data structure that should conform to the following requirements:",
"_____no_output_____"
]
],
[
[
"class LL1Parser(Parser):\n def parse_table(self):\n self.my_rules = rules(self.cgrammar)\n self.table = ... # fill in here to produce\n\n def rules(self):\n for i, rule in enumerate(self.my_rules):\n print(i, rule)\n\n def show_table(self):\n ts = list(sorted(terminals(self.cgrammar)))\n print('Rule Name\\t| %s' % ' | '.join(t for t in ts))\n for k in self.table:\n pr = self.table[k]\n actions = list(str(pr[t]) if t in pr else ' ' for t in ts)\n print('%s \\t| %s' % (k, ' | '.join(actions)))",
"_____no_output_____"
]
],
[
[
"On invocation of `LL1Parser(A2_GRAMMAR).show_table()`\nIt should result in the following table:",
"_____no_output_____"
]
],
[
[
"for i, r in enumerate(rules(canonical(A2_GRAMMAR))):\n print(\"%d\\t %s := %s\" % (i, r[0], r[1]))",
"0\t <start> := ['<expr>']\n1\t <expr> := ['<integer>', '<expr_>']\n2\t <expr_> := ['+', '<expr>']\n3\t <expr_> := ['-', '<expr>']\n4\t <expr_> := []\n5\t <integer> := ['<digit>', '<integer_>']\n6\t <integer_> := ['<integer>']\n7\t <integer_> := []\n8\t <digit> := ['0']\n9\t <digit> := ['1']\n10\t <digit> := ['2']\n11\t <digit> := ['3']\n12\t <digit> := ['4']\n13\t <digit> := ['5']\n14\t <digit> := ['6']\n15\t <digit> := ['7']\n16\t <digit> := ['8']\n17\t <digit> := ['9']\n"
]
],
[
[
"|Rule Name || + | - | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9|\n|-----------||---|---|---|---|---|---|---|---|---|---|---|--|\n|start \t|| | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0|\n|expr \t|| | | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1|\n|expr_ \t|| 2 | 3 | | | | | | | | | | |\n|integer \t|| | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5|\n|integer_ \t|| 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6|\n|digit \t|| | | 8 | 9 |10 |11 |12 |13 |14 |15 |16 |17|",
"_____no_output_____"
],
[
"**Solution.** We define `predict()` as we explained before. Then we use the predicted rules to populate the parse table.",
"_____no_output_____"
]
],
[
[
"class LL1Parser(LL1Parser):\n def predict(self, rulepair, first, follow, epsilon):\n A, rule = rulepair\n rf = first_expr(rule, first, epsilon)\n if nullable_expr(rule, epsilon):\n rf |= follow[A]\n return rf\n\n def parse_table(self):\n self.my_rules = rules(self.cgrammar)\n epsilon = nullable(self.cgrammar)\n first = firstset(self.cgrammar, epsilon)\n # inefficient, can combine the three.\n follow = followset(self.cgrammar, self.start_symbol())\n\n ptable = [(i, self.predict(rule, first, follow, epsilon))\n for i, rule in enumerate(self.my_rules)]\n\n parse_tbl = {k: {} for k in self.cgrammar}\n\n for i, pvals in ptable:\n (k, expr) = self.my_rules[i]\n parse_tbl[k].update({v: i for v in pvals})\n\n self.table = parse_tbl",
"_____no_output_____"
],
[
"ll1parser = LL1Parser(A2_GRAMMAR)\nll1parser.parse_table()\nll1parser.show_table()",
"Rule Name\t| + | - | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n<start> \t| | | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0\n<expr> \t| | | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1\n<expr_> \t| 2 | 3 | | | | | | | | | | \n<integer> \t| | | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5\n<integer_> \t| 7 | 7 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6 | 6\n<digit> \t| | | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17\n"
]
],
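[
[
"As a small spot check (a sketch based on the table printed above), we can verify a few individual entries programmatically:",
"_____no_output_____"
],
[
"# Spot-check a few entries of the LL(1) parse table shown above.\nassert ll1parser.table['<expr_>']['+'] == 2\nassert ll1parser.table['<expr_>']['-'] == 3\nassert ll1parser.table['<digit>']['9'] == 17",
"_____no_output_____"
]
],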
[
[
"#### Part 2: The Parser\n\nOnce we have the parse table, implementing the parser is as follows: Consider the first item from the sequence of tokens to parse, and seed the stack with the start symbol.\n\nWhile the stack is not empty, extract the first symbol from the stack, and if the symbol is a terminal, verify that the symbol matches the item from the input stream. If the symbol is a nonterminal, use the symbol and input item to lookup the next rule from the parse table. Insert the rule thus found to the top of the stack. Keep track of the expressions being parsed to build up the parse table.\n\nUse the parse table defined previously to implement the complete LL(1) parser.",
"_____no_output_____"
],
[
"**Solution.** Here is the complete parser:",
"_____no_output_____"
]
],
[
[
"class LL1Parser(LL1Parser):\n def parse_helper(self, stack, inplst):\n inp, *inplst = inplst\n exprs = []\n while stack:\n val, *stack = stack\n if isinstance(val, tuple):\n exprs.append(val)\n elif val not in self.cgrammar: # terminal\n assert val == inp\n exprs.append(val)\n inp, *inplst = inplst or [None]\n else:\n if inp is not None:\n i = self.table[val][inp]\n _, rhs = self.my_rules[i]\n stack = rhs + [(val, len(rhs))] + stack\n return self.linear_to_tree(exprs)\n\n def parse(self, inp):\n self.parse_table()\n k, _ = self.my_rules[0]\n stack = [k]\n return self.parse_helper(stack, inp)\n\n def linear_to_tree(self, arr):\n stack = []\n while arr:\n elt = arr.pop(0)\n if not isinstance(elt, tuple):\n stack.append((elt, []))\n else:\n # get the last n\n sym, n = elt\n elts = stack[-n:] if n > 0 else []\n stack = stack[0:len(stack) - n]\n stack.append((sym, elts))\n assert len(stack) == 1\n return stack[0]",
"_____no_output_____"
],
[
"ll1parser = LL1Parser(A2_GRAMMAR)\ntree = ll1parser.parse('1+2')\ndisplay_tree(tree)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0be87d1e9455beeb20994152996b7dac7d0e264 | 2,560 | ipynb | Jupyter Notebook | Week_12_Assessment.ipynb | RYCMDNT/CPEN-21A-CPE-1-2 | 433762d3c6a7eb00677c430a3d0899aafc470958 | [
"Apache-2.0"
] | null | null | null | Week_12_Assessment.ipynb | RYCMDNT/CPEN-21A-CPE-1-2 | 433762d3c6a7eb00677c430a3d0899aafc470958 | [
"Apache-2.0"
] | null | null | null | Week_12_Assessment.ipynb | RYCMDNT/CPEN-21A-CPE-1-2 | 433762d3c6a7eb00677c430a3d0899aafc470958 | [
"Apache-2.0"
] | null | null | null | 24.615385 | 239 | 0.411328 | [
[
[
"<a href=\"https://colab.research.google.com/github/RYCMDNT/CPEN-21A-CPE-1-2/blob/main/Week_12_Assessment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#for loops\nvalue=[\"value 0\",\"value 1\",\"value 2\",\"value 3\",\"value 4\",\"value 5\",\"value 6\", \"value 7\", \"value 8\", \"value 9\", \"value 10\"]\nfor x in value:\n print(x)",
"value 0\nvalue 1\nvalue 2\nvalue 3\nvalue 4\nvalue 5\nvalue 6\nvalue 7\nvalue 8\nvalue 9\nvalue 10\n"
],
[
"#while loop\ni=0\nwhile i<11:\n print(\"Value\",i)\n i+=1",
"Value 0\nValue 1\nValue 2\nValue 3\nValue 4\nValue 5\nValue 6\nValue 7\nValue 8\nValue 9\nValue 10\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
d0be8a66ff20d70da0fdefa1c73342d383eac9c8 | 5,846 | ipynb | Jupyter Notebook | testpad/test04.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | 2 | 2021-11-22T09:21:25.000Z | 2021-12-18T13:12:06.000Z | testpad/test04.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | null | null | null | testpad/test04.ipynb | furyhawk/kaggle_practice | 04bf045ae179db6a849fd2c2e833acc2e869f0f8 | [
"MIT"
] | null | null | null | 19.952218 | 89 | 0.447828 | [
[
[
"# Python program to count the frequency of\n# elements in a list using a dictionary\n \ndef CountFrequency(my_list):\n \n # Creating an empty dictionary\n freq = {}\n for item in my_list:\n if (item in freq):\n freq[item] += 1\n else:\n freq[item] = 1\n \n # for key, value in freq.items():\n # print (\"% d : % d\"%(key, value))\n return freq\n \n# Driver function\n\nmy_list =[1, 1, 1, 5, 5, 3, 1, 3, 3, 1, 4, 4, 4, 2, 2, 2, 2]\n\nsample = CountFrequency(my_list)",
"_____no_output_____"
],
[
"sample",
"_____no_output_____"
],
[
"import numpy as np\nsampleMin = 3\nmy_list =[1, 1, 1, 5, 5, 3, 1, 3, 3, 1, 4, 4, 4, 2, 2, 2, 2]\nkeys_list, values_list = np.unique(my_list, return_counts=True)",
"_____no_output_____"
],
[
"def sample_min(values_list, sample_size=1):\n # sample = []\n return [min(x, sample_size) for x in values_list]\n \n # return sample",
"_____no_output_____"
],
[
"import numpy as np\ninput_a = 3\ninput_b = np.array([1,2,3,4,5])\n\ninput_b[input_b > input_a] = input_a\n\nprint(input_b)",
"[1 2 3 3 3]\n"
],
[
"values_list",
"_____no_output_____"
],
[
"values_list[values_list > sampleMin] = sampleMin\n",
"_____no_output_____"
],
[
"values_list",
"_____no_output_____"
],
[
"zip_iterator = zip(keys_list, values_list)\na_dictionary = dict(zip_iterator)",
"_____no_output_____"
],
[
"a_dictionary",
"_____no_output_____"
],
[
"sample_params = { 1: min(SAMPLE, 1468136),\n 2: min(SAMPLE, 2262087),\n 3: min(SAMPLE, 195712),\n 4: min(SAMPLE, 377),\n 5: min(SAMPLE, 1),\n 6: min(SAMPLE, 11426),\n 7: min(SAMPLE, 62261)}",
"_____no_output_____"
],
[
"import numpy as np\nfrom scipy import stats\na = np.array([[1, 3, 4, 2, 2, 7],\n [5, 2, 2, 1, 4, 1],\n [3, 3, 2, 2, 1, 1]])",
"_____no_output_____"
],
[
"a.shape",
"_____no_output_____"
],
[
"m = stats.mode(a)\nprint(m)",
"ModeResult(mode=array([[1, 3, 2, 2, 1, 1]]), count=array([[1, 2, 2, 2, 1, 2]]))\n"
],
[
"m[0][0]",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0be96421a2f20eb9373d1255139250d2a0dbf35 | 21,015 | ipynb | Jupyter Notebook | notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb | pfeiffea/landlab | a138dacc12766a7470b3a6a0165651f344e97de6 | [
"MIT"
] | 4 | 2019-04-05T16:41:40.000Z | 2021-06-11T20:33:14.000Z | notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb | pfeiffea/landlab | a138dacc12766a7470b3a6a0165651f344e97de6 | [
"MIT"
] | 4 | 2018-05-01T21:43:07.000Z | 2020-05-17T05:03:49.000Z | notebooks/tutorials/network_sediment_transporter/network_plotting_examples.ipynb | pfeiffea/landlab | a138dacc12766a7470b3a6a0165651f344e97de6 | [
"MIT"
] | 4 | 2018-05-21T17:40:58.000Z | 2020-08-21T05:44:41.000Z | 36.868421 | 287 | 0.554509 | [
[
[
"<a href=\"http://landlab.github.io\"><img style=\"float: left\" src=\"../../landlab_header.png\"></a>",
"_____no_output_____"
],
[
"# Using plotting tools associated with the Landlab NetworkSedimentTransporter component \n\n<hr>\n<small>For more Landlab tutorials, click here: <a href=\"https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html\">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>\n<hr>\n\nThis tutorial illustrates how to plot the results of the NetworkSedimentTransporter Landlab component using the `plot_network_and_parcels` tool. \n\nIn this example we will: \n- create a simple instance of the NetworkSedimentTransporter using a *synthetic river network\n- create a simple instance of the NetworkSedimentTransporter using an *input shapefile for the river network\n- show options for setting the color and line widths of network links\n- show options for setting the color of parcels (marked as dots on the network)\n- show options for setting the size of parcels\n- show options for plotting a subset of the parcels\n- demonstrate changing the timestep plotted\n- show an example combining many plotting controls\n\nFirst, import the necessary libraries:",
"_____no_output_____"
]
],
[
[
"import warnings\nwarnings.filterwarnings('ignore')\nimport os\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\n\nimport numpy as np\nfrom landlab import ExampleData\nfrom landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter\nfrom landlab.data_record import DataRecord\nfrom landlab.grid.network import NetworkModelGrid\nfrom landlab.plot import plot_network_and_parcels\nfrom landlab.io import read_shapefile\n\n\nfrom matplotlib.colors import Normalize",
"_____no_output_____"
]
],
[
[
"## 1. Create and run the synthetic example of NST\n\nFirst, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a synthetic grid, defining the location of each node and link. ",
"_____no_output_____"
]
],
[
[
"y_of_node = (0, 100, 200, 200, 300, 400, 400, 125)\nx_of_node = (0, 0, 100, -50, -100, 50, -150, -100)\n\nnodes_at_link = ((1, 0), (2, 1), (1, 7), (3, 1), (3, 4), (4, 5), (4, 6))\n\ngrid1 = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link)\ngrid1.at_node[\"bedrock__elevation\"] = [0.0, 0.05, 0.2, 0.1, 0.25, 0.4, 0.8, 0.8]\ngrid1.at_node[\"topographic__elevation\"] = [0.0, 0.05, 0.2, 0.1, 0.25, 0.4, 0.8, 0.8]\ngrid1.at_link[\"flow_depth\"] = 2.5 * np.ones(grid1.number_of_links) # m\ngrid1.at_link[\"reach_length\"] = 200*np.ones(grid1.number_of_links) # m\ngrid1.at_link[\"channel_width\"] = 1*np.ones(grid1.number_of_links) # m\n\n# element_id is the link on which the parcel begins. \nelement_id = np.repeat(np.arange(grid1.number_of_links),30)\nelement_id = np.expand_dims(element_id, axis=1)\n\nvolume = 0.1*np.ones(np.shape(element_id)) # (m3)\nactive_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive\ndensity = 2650 * np.ones(np.size(element_id)) # (kg/m3)\nabrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)\n\n# Lognormal GSD\nmedianD = 0.05 # m\nmu = np.log(medianD)\nsigma = np.log(2) #assume that D84 = sigma*D50\nnp.random.seed(0)\nD = np.random.lognormal(\n mu,\n sigma,\n np.shape(element_id)\n) # (m) the diameter of grains in each parcel\n\ntime_arrival_in_link = np.random.rand(np.size(element_id), 1) \nlocation_in_link = np.random.rand(np.size(element_id), 1) \n\nvariables = {\n \"abrasion_rate\": ([\"item_id\"], abrasion_rate),\n \"density\": ([\"item_id\"], density),\n \"time_arrival_in_link\": ([\"item_id\", \"time\"], time_arrival_in_link),\n \"active_layer\": ([\"item_id\", \"time\"], active_layer),\n \"location_in_link\": ([\"item_id\", \"time\"], location_in_link),\n \"D\": ([\"item_id\", \"time\"], D),\n \"volume\": ([\"item_id\", \"time\"], volume)\n}\n\nitems = {\"grid_element\": \"link\", \"element_id\": element_id}\n\nparcels1 = DataRecord(\n grid1,\n items=items,\n time=[0.0],\n data_vars=variables,\n dummy_elements={\"link\": [NetworkSedimentTransporter.OUT_OF_NETWORK]},\n)\n\nfd1 = FlowDirectorSteepest(grid1, \"topographic__elevation\")\nfd1.run_one_step()\n\nnst1 = NetworkSedimentTransporter( \n grid1,\n parcels1,\n fd1,\n bed_porosity=0.3,\n g=9.81,\n fluid_density=1000,\n transport_method=\"WilcockCrowe\",\n)\ntimesteps = 10 # total number of timesteps\ndt = 60 * 60 * 24 *1 # length of timestep (seconds) \nfor t in range(0, (timesteps * dt), dt):\n nst1.run_one_step(dt)",
"_____no_output_____"
]
],
[
[
"## 2. Create and run an example of NST using a shapefile to define the network\n\nFirst, we need to create an implementation of the Landlab NetworkModelGrid to plot. This example creates a grid based on a polyline shapefile. ",
"_____no_output_____"
]
],
[
[
"datadir = ExampleData(\"io/shapefile\", case=\"methow\").base\n\nshp_file = datadir / \"MethowSubBasin.shp\"\npoints_shapefile = datadir / \"MethowSubBasin_Nodes_4.shp\"\n\ngrid2 = read_shapefile(\n shp_file,\n points_shapefile=points_shapefile,\n node_fields=[\"usarea_km2\", \"Elev_m\"],\n link_fields=[\"usarea_km2\", \"Length_m\"],\n link_field_conversion={\"usarea_km2\": \"drainage_area\", \"Slope\":\"channel_slope\", \"Length_m\":\"reach_length\"},\n node_field_conversion={\n \"usarea_km2\": \"drainage_area\",\n \"Elev_m\": \"topographic__elevation\",\n },\n threshold=0.01,\n )\ngrid2.at_node[\"bedrock__elevation\"] = grid2.at_node[\"topographic__elevation\"].copy()\ngrid2.at_link[\"channel_width\"] = 1 * np.ones(grid2.number_of_links)\ngrid2.at_link[\"flow_depth\"] = 0.9 * np.ones(grid2.number_of_links)\n\n# element_id is the link on which the parcel begins. \nelement_id = np.repeat(np.arange(grid2.number_of_links), 50)\nelement_id = np.expand_dims(element_id, axis=1)\n\nvolume = 1*np.ones(np.shape(element_id)) # (m3)\nactive_layer = np.ones(np.shape(element_id)) # 1= active, 0 = inactive\ndensity = 2650 * np.ones(np.size(element_id)) # (kg/m3)\nabrasion_rate = 0 * np.ones(np.size(element_id)) # (mass loss /m)\n\n# Lognormal GSD\nmedianD = 0.15 # m\nmu = np.log(medianD)\nsigma = np.log(2) #assume that D84 = sigma*D50\nnp.random.seed(0)\nD = np.random.lognormal(\n mu,\n sigma,\n np.shape(element_id)\n) # (m) the diameter of grains in each parcel\n\ntime_arrival_in_link = np.random.rand(np.size(element_id), 1) \nlocation_in_link = np.random.rand(np.size(element_id), 1) \n\nvariables = {\n \"abrasion_rate\": ([\"item_id\"], abrasion_rate),\n \"density\": ([\"item_id\"], density),\n \"time_arrival_in_link\": ([\"item_id\", \"time\"], time_arrival_in_link),\n \"active_layer\": ([\"item_id\", \"time\"], active_layer),\n \"location_in_link\": ([\"item_id\", \"time\"], location_in_link),\n \"D\": ([\"item_id\", \"time\"], D),\n \"volume\": ([\"item_id\", \"time\"], volume)\n}\n\nitems = {\"grid_element\": \"link\", \"element_id\": element_id}\n\nparcels2 = DataRecord(\n grid2,\n items=items,\n time=[0.0],\n data_vars=variables,\n dummy_elements={\"link\": [NetworkSedimentTransporter.OUT_OF_NETWORK]},\n)\n\nfd2 = FlowDirectorSteepest(grid2, \"topographic__elevation\")\nfd2.run_one_step()\n\nnst2 = NetworkSedimentTransporter( \n grid2,\n parcels2,\n fd2,\n bed_porosity=0.3,\n g=9.81,\n fluid_density=1000,\n transport_method=\"WilcockCrowe\",\n)\n\nfor t in range(0, (timesteps * dt), dt):\n nst2.run_one_step(dt)",
"_____no_output_____"
]
],
[
[
"## 3. Options for link color and link line widths\n\nThe dictionary below (`link_color_options`) outlines 4 examples of link color and line width choices: \n1. The default output of `plot_network_and_parcels`\n2. Some simple modifications: the whole network is red, with a line width of 7, and no parcels.\n3. Coloring links by an existing grid link attribute, in this case the total volume of sediment on the link (`grid.at_link.[\"sediment_total_volume\"]`, which is created by the `NetworkSedimentTransporter`)\n4. Similar to #3 above, but taking advantange of additional flexiblity in plotting",
"_____no_output_____"
]
],
[
[
"network_norm = Normalize(-1, 6) # see matplotlib.colors.Normalize \n \nlink_color_options = [\n {# empty dictionary = defaults\n },\n {\n \"network_color\":'r', # specify some simple modifications. \n \"network_linewidth\":7,\n \"parcel_alpha\":0 # make parcels transparent (not visible)\n },\n {\n \"link_attribute\": \"sediment_total_volume\", # color links by an existing grid link attribute\n \"parcel_alpha\":0\n },\n {\n \"link_attribute\": \"sediment_total_volume\", \n \"network_norm\": network_norm, # and normalize color scheme\n \"link_attribute_title\": \"Total Sediment Volume\", # title on link color legend\n \"parcel_alpha\":0, \n \"network_linewidth\":3 \n\n }\n]",
"_____no_output_____"
]
],
[
[
"Below, we implement these 4 plotting options, first for the synthetic network, and then for the shapefile-delineated network:",
"_____no_output_____"
]
],
[
[
"for grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):\n for l_opts in link_color_options:\n fig = plot_network_and_parcels(\n grid, parcels, \n parcel_time_index=0, **l_opts)\n plt.show()\n",
"_____no_output_____"
]
],
[
[
"In addition to plotting link coloring using an existing link attribute, we can pass any array of size link. In this example, we color links using an array of random values. ",
"_____no_output_____"
]
],
[
[
"random_link = np.random.randn(grid2.size(\"link\"))\n\nl_opts = {\n \"link_attribute\": random_link, # use an array of size link\n \"network_cmap\": \"jet\", # change colormap\n \"network_norm\": network_norm, # and normalize\n \"link_attribute_title\": \"A random number\",\n \"parcel_alpha\":0,\n \"network_linewidth\":3\n }\nfig = plot_network_and_parcels(\n grid2, parcels2, \n parcel_time_index=0, **l_opts)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 4. Options for parcel color\n\nThe dictionary below (`parcel_color_options`) outlines 4 examples of link color and line width choices: \n1. The default output of `plot_network_and_parcels`\n2. Some simple modifications: all parcels are red, with a parcel size of 10\n3. Color parcels by an existing parcel attribute, in this case the sediment diameter of the parcel (`parcels1.dataset['D']`)\n4. Color parcels by an existing parcel attribute, but change the colormap. ",
"_____no_output_____"
]
],
[
[
"parcel_color_norm = Normalize(0, 1) # Linear normalization\nparcel_color_norm2=colors.LogNorm(vmin=0.01, vmax=1)\n\nparcel_color_options = [\n {# empty dictionary = defaults\n },\n {\n \"parcel_color\":'r', # specify some simple modifications. \n \"parcel_size\":10\n },\n {\n \"parcel_color_attribute\": \"D\", # existing parcel attribute. \n \"parcel_color_norm\": parcel_color_norm,\n \"parcel_color_attribute_title\":\"Diameter [m]\",\n \"parcel_alpha\":1.0,\n },\n {\n \"parcel_color_attribute\": \"abrasion_rate\", # silly example, does not vary in our example\n \"parcel_color_cmap\": \"bone\",\n },\n]\n\nfor grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):\n for pc_opts in parcel_color_options:\n fig = plot_network_and_parcels(\n grid, parcels, \n parcel_time_index=0, **pc_opts)\n plt.show()",
"_____no_output_____"
]
],
[
[
"## 5. Options for parcel size\nThe dictionary below (`parcel_size_options`) outlines 4 examples of link color and line width choices: \n1. The default output of `plot_network_and_parcels`\n2. Set a uniform parcel size and color\n3. Size parcels by an existing parcel attribute, in this case the sediment diameter (`parcels1.dataset['D']`), and making the parcel markers entirely opaque. \n4. Normalize parcel size on a logarithmic scale, and change the default maximum and minimum parcel sizes. ",
"_____no_output_____"
]
],
[
[
"parcel_size_norm = Normalize(0, 1)\nparcel_size_norm2=colors.LogNorm(vmin=0.01, vmax=1)\n\nparcel_size_options = [\n {# empty dictionary = defaults\n },\n {\n \"parcel_color\":'b', # specify some simple modifications. \n \"parcel_size\":10\n },\n {\n \"parcel_size_attribute\": \"D\", # use a parcel attribute. \n \"parcel_size_norm\": parcel_color_norm,\n \"parcel_size_attribute_title\":\"Diameter [m]\",\n \"parcel_alpha\":1.0, # default parcel_alpha = 0.5\n },\n {\n \"parcel_size_attribute\": \"D\", \n \"parcel_size_norm\": parcel_size_norm2,\n \"parcel_size_min\": 10, # default = 5\n \"parcel_size_max\": 100, # default = 40\n \"parcel_alpha\": 0.1\n },\n]\n\nfor grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):\n for ps_opts in parcel_size_options:\n fig = plot_network_and_parcels(\n grid, parcels, \n parcel_time_index=0, **ps_opts)\n plt.show()\n",
"_____no_output_____"
]
],
[
[
"## 6. Plotting a subset of the parcels\n\nIn some cases, we might want to plot only a subset of the parcels on the network. Below, we plot every 50th parcel in the `DataRecord`. ",
"_____no_output_____"
]
],
[
[
"parcel_filter = np.zeros((parcels2.dataset.dims[\"item_id\"]), dtype=bool)\nparcel_filter[::50] = True\npc_opts= {\n \"parcel_color_attribute\": \"D\", # a more complex normalization and a parcel filter. \n \"parcel_color_norm\": parcel_color_norm2,\n \"parcel_color_attribute_title\":\"Diameter [m]\",\n \"parcel_alpha\": 1.0,\n \"parcel_size\": 40,\n \"parcel_filter\": parcel_filter\n }\nfig = plot_network_and_parcels(\n grid2, parcels2, \n parcel_time_index=0, **pc_opts\n)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 7. Select the parcel timestep to be plotted\n\nAs a default, `plot_network_and_parcels` plots parcel positions for the last timestep of the model run. However, `NetworkSedimentTransporter` tracks the motion of parcels for all timesteps. We can plot the location of parcels on the link at any timestep using `parcel_time_index`. ",
"_____no_output_____"
]
],
[
[
"parcel_time_options = [0,4,7]\n\nfor grid, parcels in zip([grid1, grid2], [parcels1, parcels2]):\n for pt_opts in parcel_time_options:\n fig = plot_network_and_parcels(\n grid, parcels, \n parcel_size = 20,\n parcel_alpha = 0.1,\n parcel_time_index=pt_opts)\n plt.show()",
"_____no_output_____"
]
],
[
[
"## 7. Combining network and parcel plotting options\n\nNothing will stop us from making all of the choices at once. ",
"_____no_output_____"
]
],
[
[
"parcel_color_norm=colors.LogNorm(vmin=0.01, vmax=1)\n\nparcel_filter = np.zeros((parcels2.dataset.dims[\"item_id\"]), dtype=bool)\nparcel_filter[::30] = True\n\nfig = plot_network_and_parcels(grid2, \n parcels2, \n parcel_time_index=0, \n parcel_filter=parcel_filter,\n link_attribute=\"sediment_total_volume\", \n network_norm=network_norm,\n network_linewidth=4,\n network_cmap='bone_r',\n parcel_alpha=1.0, \n parcel_color_attribute=\"D\",\n parcel_color_norm=parcel_color_norm2, \n parcel_size_attribute=\"D\",\n parcel_size_min=5,\n parcel_size_max=150,\n parcel_size_norm=parcel_size_norm,\n parcel_size_attribute_title=\"D\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0be98990f225c4df9c6894bb466223a0432ac3b | 10,273 | ipynb | Jupyter Notebook | Integration/Double-Exponential/DoubleExp.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | null | null | null | Integration/Double-Exponential/DoubleExp.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | null | null | null | Integration/Double-Exponential/DoubleExp.ipynb | dkaramit/ASAP | afade2737b332e7dbf0ea06eb4f31564a478ee40 | [
"MIT"
] | 1 | 2021-12-15T02:03:01.000Z | 2021-12-15T02:03:01.000Z | 32.612698 | 334 | 0.445829 | [
[
[
"The tanh-sinh (or double exponential) method.\n\nWe calculate an integral in the following fashion:\n\n$$\nI=\\int_{-1}^{1} dx f(x) = \\int_{-\\infty}^{\\infty} dt \\; f(g(t)) \\;g^{\\prime}(t) \\approx h \\sum_{j=-N}^{N} w_j \\; f(x_j)\\; ,\n$$\n\nwith $x_j= g(h \\, t)$ and $w_j = g^{\\prime}(h \\, t) $. The functio $g(t)$ transorms the interval from $x \\in [-1,1]$ to $t \\in ({-\\infty} , {\\infty})$. The parameter $N$ is chosen so that $| w_j \\; f(x_j) |< \\epsilon$ (for $j>N$) with $\\epsilon \\equiv 10^{-p}$, with $p$ the precision leven (number of digits).\n\nThe method is called $\\tanh-\\sinh$ because we choose $g$ to be \n\n$$\ng(t)=\\tanh \\left(\\dfrac{\\pi}{2} \\sinh(t) \\right).\n$$\n\nThis means that \n\n\n$$\nx_j = \\tanh \\left(\\dfrac{\\pi}{2} \\sinh(h \\; j ) \\right)\\\\\nw_j =\\dfrac{ \\dfrac{\\pi}{2} \\cosh(h \\; j ) }{\\cosh^2 \\left(\\dfrac{\\pi}{2} \\sinh(h \\; j ) \\right) }.\n$$\n\nIt is worth mentioning that $x_j$ and $w_j$ can be computed once, and then just applied in a lot of integrals.\n\nThe error of the estimate is \n$$\nErr \\approx h \\left(\\dfrac{h}{2 \\pi}\\right)^2 \\sum_{j=-N}^{N} \n\\left[ \\dfrac{d^2 \\; g^{\\prime}(t) f( g(t) ) }{dt} \\right]_{t=h \\, j}\n$$\n\n\n\nSo, we start by choosing some N such that $|w_{N+1} f(\\pm x_{N+1})| < \\epsilon$.\nThen, we calculate the integral and the error. if the error is acceptable (according to some tolerances \ndefined by the user) then the integral is returned. If the eror is large, then we update $h$ and $N$ as\n$$\nh \\to h/2 \\\\\nN \\to 2N \\; .\n$$\n\nNote, that once we have found $N$ suche that $|w_{N+1} f(\\pm x_{N+1})| < \\epsilon$, then by changing \n$h \\to h/2$, we need $N \\to 2N$, so that $N \\, h$ to be such that $|w_{N+1} f(\\pm x_{N+1})| < \\epsilon$\nholds for the updated value of $h$.\n\n\nEverything is based on [Wikipedia](https://en.wikipedia.org/wiki/Tanh-sinh_quadrature#Implementations) and [Bailey's paper](https://www.davidhbailey.com//dhbpapers/dhb-tanh-sinh.pdf).",
"_____no_output_____"
]
],
[
[
"import numpy as np \nfrom numpy import tanh,sinh,cosh,pi,abs\n\n\n#just for testing\nfrom scipy.integrate import quad",
"_____no_output_____"
],
[
"class DoubleExp:\n def g(self,t):\n return tanh( pi/2. * sinh(t) )\n def dgdt(self,t):\n return pi/2. *cosh(t)/cosh( pi/2. * sinh(t) )**2.\n \n \n def F(self,t):\n #this will be used to determine the error\n return self.func( self.g(t) )*self.dgdt(t)\n\n def d2Fdt(self,t,_h=1e-8):\n '''\n This will give the second derivatives we need for the error estimation.\n For the moment take derivatives numerically. \n Later I will do the derivatives of g analytically, but for the moment should be fine.\n '''\n return (self.F(t+_h )- 2 * self.F(t ) + self.F(t -_h ))/(_h**2.)\n \n \n \n def __init__(self,func,_exp=1,_exp_max=15,rtol=1e-5,atol=1e-5,p=10,Nmax=1000):\n '''\n func: function to be integrated in the interval [-1,1].\n exp: initial value of h=2^-exp\n exp_min: the minimum exp, with hmin= 2^{-exp_max} \n p: precision.\n \n Nmax=maximum number of evaluations\n \n Note that x_{-j}=-x_j and w_{-j}=-w_j .\n '''\n self.func=func\n \n self._exp=_exp\n self._exp_max=_exp_max\n \n self.h=2**-_exp\n self.hmin=2**-_exp_max\n \n self.rtol=rtol\n self.atol=atol\n self.eps=10**(-p)\n \n #initialize N\n self.N=0\n self.N_init=False\n \n #eval will tell us if we have already evaluated the integral for given N and h (no need to sum thingswe already have)\n self.eval=True\n self.h_stop=False\n \n \n #initialize the integral and the error. \n #As you update h and N, you need to add to the sum only new values produced\n #Also, since h changes, multipy by h at the end of the evaluation.\n self.integral=self.func( self.g(0) ) *self.dgdt(0)\n self.err=self.d2Fdt(0)\n \n\n \n def N_start(self):\n '''\n Find an appropriate N to start.\n As you update h, just update N->N*2 (later we may use something better) \n '''\n \n #start from this. \n tmp_N=self.N+1\n while True:\n #remember that x_j=-x_{-j}, w_j = w_{-j}\n _x=self.g(self.h*tmp_N)\n _w=self.dgdt(self.h*tmp_N)\n _f1=_w*self.func(_x) \n _f2= _w*self.func(-_x)\n \n \n #Note that we want N to start as N>0. This way we make sure that N gets updated correctly \n #(if N starts at 0, it's not going to be updated).\n if abs(_f1)<self.eps and abs(_f2 )<self.eps and self.N>1:\n self.eval=False\n break\n else:\n \n self.integral+=_f1+_f2\n self.err+=self.d2Fdt( tmp_N*self.h)\n \n self.N=tmp_N\n tmp_N+=1\n \n\n \n def evaluate(self):\n '''\n Evaluate the integral for given h and N.\n Also evaluate the error.\n \n Note for later: since we update h->h/2, we just need to update the sum including only the new\n addition we make. That is, you only calculate for odd j! \n '''\n j=1\n while self.eval:\n _x=self.g(self.h*j)\n _w=self.dgdt(self.h*j)\n\n self.integral+=_w*(self.func(_x) + self.func(-_x))\n self.err+=self.d2Fdt( j*self.h)+self.d2Fdt( -j*self.h)\n j+=2 \n if j>self.N-2:\n self.eval=False\n break\n\n \n \n def h_control(self):\n '''\n Determines if the error is acceptable. If not, decrese h until it is (or hmin is found).\n '''\n abs_err=abs(self.err*self.h*(self.h/(2*pi))**2.)\n \n _sc=self.atol + self.rtol*abs(self.integral)\n \n if abs_err/_sc <1 :\n self.h_stop=True\n else:\n if self.h<self.hmin:\n self.h_stop=True\n else:\n self.h=self.h/2\n self.N=self.N*2\n self.eval=True\n\n\n\n def integrate(self):\n if self.N_init==False:\n self.N_start()\n \n while self.h_stop==False:\n self.h_control()\n self.evaluate()\n \n self.eval=False\n return (self.integral*self.h , abs(self.err*self.h*(self.h/(2*pi))**2.) )\n \n\n\n ",
"_____no_output_____"
],
[
"def F(x):\n# return (x**2-1)/(x**2+1)*1/(x**2+5)**0.5\n# return 1/((1+x)**0.5 +(1-x)**0.5 +2 )\n# return x**4*5*np.exp(-x**2/5.)\n return np.exp(-x**2./1e-15)",
"_____no_output_____"
],
[
"DE=DoubleExp(func=F,_exp=10,_exp_max=50,p=20,rtol=1e-10,atol=1e-10)",
"_____no_output_____"
],
[
"DE.integrate()",
"_____no_output_____"
],
[
"quad(F,-1,1)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0be9c872e043d835ee2bc0a816528da4e7f4e74 | 563,969 | ipynb | Jupyter Notebook | notebooks/02b-custom-tutorial.ipynb | b-biswas/BlendingToolKit | 2e85da4df99a84bcdff1d85ed66eb8589eb72499 | [
"MIT"
] | null | null | null | notebooks/02b-custom-tutorial.ipynb | b-biswas/BlendingToolKit | 2e85da4df99a84bcdff1d85ed66eb8589eb72499 | [
"MIT"
] | null | null | null | notebooks/02b-custom-tutorial.ipynb | b-biswas/BlendingToolKit | 2e85da4df99a84bcdff1d85ed66eb8589eb72499 | [
"MIT"
] | null | null | null | 628.02784 | 57,476 | 0.934415 | [
[
[
"# Autoreload packages in case they change.\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport btk\nimport galsim\nimport warnings",
"_____no_output_____"
]
],
[
[
"# \"Custom\" tutorial\n\nThis tutorial is intended to showcase how to customize some elements of BTK, namely the sampling function, the surveys or the measure function. We encourage you to follow the intro tutorial first if you have not already done so.\n\n## Table of contents\n\n- [Custom sampling function](#custom_sampling_function)\n- [Custom survey](#custom_survey)\n- [Custom measure function](#custom_measure_function)\n- [Custom target measure](#custom_target_measure)",
"_____no_output_____"
],
[
"## Custom sampling function\n<a id='custom_sampling_function'></a>\n\nThe sampling function defines how galaxies are selected in the catalog and their positions in the blends. This is done by defining a custom class based on the `SamplingFunction` class, which will be called (like a function) when the blends are generated. The `__call__` method should normally only take as an argument the catalog (as an astropy table), and return a smaller astropy table containing the entries from the catalog corresponding to the galaxies, along with the shifts (in arcseconds) of the galaxies compared to the center of the image, in the columns \"ra\" and \"dec\".\nHere is an example with the default sampling function.",
"_____no_output_____"
]
],
[
[
"class DefaultSampling(btk.sampling_functions.SamplingFunction):\n \"\"\"Default sampling function used for producing blend tables.\"\"\"\n\n def __init__(self, max_number=2, stamp_size=24.0, maxshift=None):\n \"\"\"\n Args:\n max_number (int): Defined in parent class\n stamp_size (float): Size of the desired stamp.\n maxshift (float): Magnitude of maximum value of shift. If None then it\n is set as one-tenth the stamp size. (in arcseconds)\n \"\"\"\n super().__init__(max_number)\n self.stamp_size = stamp_size\n self.maxshift = maxshift if maxshift else self.stamp_size / 10.0\n\n @property\n def compatible_catalogs(self):\n return \"CatsimCatalog\", \"CosmosCatalog\"\n\n def __call__(self, table):\n \"\"\"Applies default sampling to the input CatSim-like catalog and returns an\n astropy table with entries corresponding to a blend centered close to postage\n stamp center.\n\n Function selects entries from input table that are brighter than 25.3 mag\n in the i band. Number of objects per blend is set at a random integer\n between 1 and Args.max_number. The blend table is then randomly sampled\n entries from the table after selection cuts. The centers are randomly\n distributed within 1/10th of the stamp size. Here even though the galaxies\n are sampled from a CatSim catalog, their spatial location are not\n representative of real blends.\n\n Args:\n table (astropy.table): Table containing entries corresponding to galaxies\n from which to sample.\n\n Returns:\n Astropy.table with entries corresponding to one blend.\n \"\"\"\n number_of_objects = np.random.randint(1, self.max_number + 1)\n (q,) = np.where(table[\"ref_mag\"] <= 25.3)\n\n blend_table = table[np.random.choice(q, size=number_of_objects)]\n blend_table[\"ra\"] = 0.0\n blend_table[\"dec\"] = 0.0\n x_peak, y_peak = _get_random_center_shift(number_of_objects, self.maxshift)\n blend_table[\"ra\"] += x_peak\n blend_table[\"dec\"] += y_peak\n\n if np.any(blend_table[\"ra\"] > self.stamp_size / 2.0) or np.any(\n blend_table[\"dec\"] > self.stamp_size / 2.0\n ):\n warnings.warn(\"Object center lies outside the stamp\")\n return blend_table\n \n\ndef _get_random_center_shift(num_objects, maxshift):\n \"\"\"Returns random shifts in x and y coordinates between + and - max-shift in arcseconds.\n\n Args:\n num_objects (int): Number of x and y shifts to return.\n\n Returns:\n x_peak (float): random shift along the x axis\n y_peak (float): random shift along the x axis\n \"\"\"\n x_peak = np.random.uniform(-maxshift, maxshift, size=num_objects)\n y_peak = np.random.uniform(-maxshift, maxshift, size=num_objects)\n return x_peak, y_peak",
"_____no_output_____"
]
],
[
[
"As you can see, this sampling function does 3 things: applying a magnitude cut to the catalog, selecting random galaxies uniformly (with a random number of galaxies, the maximum being specified at the initialization), and assigning them random uniform shifts.\n\nHere is how we would write a sampling function for generating two galaxies, one bright and centered, the other faint and randomly shifted.",
"_____no_output_____"
]
],
[
[
"class PairSampling(btk.sampling_functions.SamplingFunction):\n \n def __init__(self, stamp_size=24.0, maxshift=None):\n super().__init__(2)\n self.stamp_size = stamp_size\n self.maxshift = maxshift if maxshift else self.stamp_size / 10.0\n\n @property\n def compatible_catalogs(self):\n return \"CatsimCatalog\", \"CosmosCatalog\"\n\n def __call__(self,table):\n (q_bright,) = np.where(table[\"ref_mag\"] <= 25.3)\n (q_dim,) = np.where((table[\"ref_mag\"] > 25.3) & (table[\"ref_mag\"] <= 28))\n \n indexes = [np.random.choice(q_bright),np.random.choice(q_dim)]\n blend_table = table[indexes]\n \n blend_table[\"ra\"] = 0.0\n blend_table[\"dec\"] = 0.0\n \n x_peak, y_peak = _get_random_center_shift(1, self.maxshift)\n \n blend_table[\"ra\"][1] += x_peak\n blend_table[\"dec\"][1] += y_peak\n\n if np.any(blend_table[\"ra\"] > self.stamp_size / 2.0) or np.any(\n blend_table[\"dec\"] > self.stamp_size / 2.0\n ):\n warnings.warn(\"Object center lies outside the stamp\")\n return blend_table",
"_____no_output_____"
]
],
[
[
"You can try to write your own sampling function here if you wish.",
"_____no_output_____"
],
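If you do try it, the sketch below shows the minimal contract such a class has to satisfy. `SingleBrightSampling` is a hypothetical name of ours (not part of BTK); it returns a single bright galaxy placed at the stamp center and only reuses the `SamplingFunction` base class and the `ref_mag` magnitude cut already used above.

```python
import numpy as np
import btk


class SingleBrightSampling(btk.sampling_functions.SamplingFunction):
    """Hypothetical minimal example: one bright galaxy at the stamp center."""

    def __init__(self, stamp_size=24.0):
        super().__init__(1)  # max_number = 1
        self.stamp_size = stamp_size

    @property
    def compatible_catalogs(self):
        return "CatsimCatalog", "CosmosCatalog"

    def __call__(self, table):
        # Pick one random entry brighter than 25.3 mag and place it at the center.
        (q_bright,) = np.where(table["ref_mag"] <= 25.3)
        blend_table = table[[np.random.choice(q_bright)]]
        blend_table["ra"] = 0.0  # shifts are in arcseconds from the stamp center
        blend_table["dec"] = 0.0
        return blend_table
```

Such a class can be passed to `btk.draw_blends.CatsimGenerator` exactly like `PairSampling` is in the next cell.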
[
"Here is some code to test our new sampling function (please replace the first line if you wrote your own sampling function).",
"_____no_output_____"
]
],
[
[
"sampling_function = PairSampling()\ncatalog_name = \"../data/sample_input_catalog.fits\"\nstamp_size = 24\nsurvey = btk.survey.get_surveys(\"Rubin\")\ncatalog = btk.catalog.CatsimCatalog.from_file(catalog_name)\ndraw_blend_generator = btk.draw_blends.CatsimGenerator(\n catalog,\n sampling_function,\n survey,\n stamp_size=stamp_size,\n batch_size=5\n)",
"_____no_output_____"
],
[
"batch = next(draw_blend_generator)\nblend_images = batch['blend_images']\nblend_list = batch['blend_list']\nbtk.plot_utils.plot_blends(blend_images, blend_list, limits=(30,90))",
"_____no_output_____"
]
],
[
[
"<a id='custom_survey'></a>",
"_____no_output_____"
],
[
"## Custom survey\n<a id='custom_survey'></a>\nThe survey defines the observational parameters relative to the instrument and telescope making the observation; in particular, it serves to define the pixel scale, the number of bands, the noise level, the flux, and the PSF.\nA number of surveys is provided with BTK, so most users will not need to define a new one; however you may want to add one, or to modify one (for example to use a custom PSF). Here we will detail how to do so.\n\nA Survey is defined as a named tuple, that is a tuple where each slot has a name. Here are all the fields that a survey contains:\n- name: Name of the survey\n- pixel_scale: Pixel scale in arcseconds\n- effective_area: Light-collecting area of the telescope; depending on the optics of the telescope this can be different from $\\pi*r^2$, in the case of a Schmidt–Cassegrain telescope for instance.\n- mirror_diameter: Diameter of the primary telescope, in meters (without accounting for an eventual missing area)\n- airmass: Length of the optical path through atmosphere, relative to the zenith path length. An airmass of 1.2 means that light would travel the equivalent of 1.2 atmosphere when observing. \n- zeropoint_airmass: airmass which was used when computing the zeropoints. If in doubt, set it to the same value as the airmass.\n- filters: List of Filter objects, more on that below\n\nThe Filter object is, again, a named tuple, containing the informations relative to each filter; a single survey can contain multiple filters. Each filter contains:\n- name: Name of the filter\n- sky_brightness: brightness of the sky background, in mags/sq.arcsec\n- exp_time: total exposure time, in seconds\n- zeropoint: Magnitude of an object giving a measured flux of 1 electron per second\n- extinction: exponential coefficient describing the absorption of light by the atmosphere.\n- psf: PSF for the filter. This can be provided in two ways:\n - Providing a Galsim PSF model, e.g. `galsim.Kolmogorov(fwhm)` or any convolution of such models.\n - Providing a function which returns a Galsim model when called (with no arguments). This can be used when you \n you want to randomize the PSF.\n In the case of the default surveys, we only use the first possibility, computing the model using the get_psf function beforehand; those models have an atmospheric and an optical component.\n\nSurveys are usually imported using `btk.survey.get_surveys(survey_names)`, which will create the Survey object(s) from a config file (currently, the implemented surveys are Rubin, HSC, HST, Euclid, DES and CFHT); it is also possible to create them directly in Python.\nHere is the definition of the Rubin survey as an example. You may try changing the parameters if you wish to see the effects on the blends.",
"_____no_output_____"
]
],
[
[
"from btk.survey import Survey, Filter, get_psf\n\n_central_wavelength = {\n \"u\": 3592.13,\n \"g\": 4789.98,\n \"r\": 6199.52,\n \"i\": 7528.51,\n \"z\": 8689.83,\n \"y\": 9674.05,\n}\nRubin = btk.survey.Survey(\n \"Rubin\",\n pixel_scale=0.2,\n effective_area=32.4,\n mirror_diameter=8.36,\n airmass=1.2,\n zeropoint_airmass=1.2,\n filters=[\n btk.survey.Filter(\n name=\"y\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"y\"],\n fwhm=0.703,\n ),\n sky_brightness=18.6,\n exp_time=4800,\n zeropoint=26.56,\n extinction=0.138,\n ),\n btk.survey.Filter(\n name=\"z\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"z\"],\n fwhm=0.725,\n ),\n sky_brightness=19.6,\n exp_time=4800,\n zeropoint=27.39,\n extinction=0.043,\n ),\n btk.survey.Filter(\n name=\"i\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"i\"],\n fwhm=0.748,\n ),\n sky_brightness=20.5,\n exp_time=5520,\n zeropoint=27.78,\n extinction=0.07,\n ),\n btk.survey.Filter(\n name=\"r\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"r\"],\n fwhm=0.781,\n ),\n sky_brightness=21.2,\n exp_time=5520,\n zeropoint=28.10,\n extinction=0.10,\n ),\n btk.survey.Filter(\n name=\"g\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"g\"],\n fwhm=0.814,\n ),\n sky_brightness=22.3,\n exp_time=2400,\n zeropoint=28.26,\n extinction=0.163,\n ),\n btk.survey.Filter(\n name=\"u\",\n psf=get_psf(\n mirror_diameter=8.36,\n effective_area=32.4,\n filt_wavelength=_central_wavelength[\"u\"],\n fwhm=0.859,\n ),\n sky_brightness=22.9,\n exp_time=1680,\n zeropoint=26.40,\n extinction=0.451,\n ),\n ],\n)",
"_____no_output_____"
],
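The Rubin definition above uses fixed PSF models built with `get_psf`. As a minimal sketch of the second option mentioned earlier (passing a callable that returns a fresh Galsim model every time it is asked for, for instance to vary the seeing), one could write something like the function below; the FWHM range is purely illustrative and not a Rubin specification.

```python
import numpy as np
import galsim


def random_kolmogorov_psf():
    """Return a new Galsim PSF model each time the function is called."""
    fwhm = np.random.uniform(0.7, 0.9)  # arcseconds; assumed range, for illustration only
    return galsim.Kolmogorov(fwhm=fwhm)


# It would replace a fixed model in a filter definition, e.g.
# btk.survey.Filter(name="i", psf=random_kolmogorov_psf, sky_brightness=20.5, ...)
```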
[
"sampling_function = btk.sampling_functions.DefaultSampling()\ncatalog_name = \"../data/sample_input_catalog.fits\"\nstamp_size = 24\nsurvey = Rubin\ncatalog = btk.catalog.CatsimCatalog.from_file(catalog_name)\ndraw_blend_generator = btk.draw_blends.CatsimGenerator(\n catalog,\n sampling_function,\n survey,\n stamp_size=stamp_size,\n batch_size=5\n)",
"_____no_output_____"
],
[
"batch = next(draw_blend_generator)\nblend_images = batch['blend_images']\nblend_list = batch['blend_list']\nbtk.plot_utils.plot_blends(blend_images, blend_list, limits=(30,90))",
"_____no_output_____"
]
],
[
[
"## Custom measure function\n<a id='custom_measure_function'></a>\nUsers who wish to test their own algorithm using BTK should consider writing a measure function. Morally, a measure function takes in blends and return measurements, ie detections, segmentation and deblended images. It is then fed to a MeasureGenerator, which will apply the function for every blend in the batch.\nMore precisely, the measure function takes in two main arguments, named `batch` and `idx`; the first one contains the whole results from the DrawBlendsGenerator, while the second contains the id of the blend on which the measurement should be carried. This is done so that the user access to every relevant information, including the PSF and WCS which are defined per batch and not per blend.\nThe results should be returned as a dictionary, with entries:\n- \"catalog\" containing the detections, as an astropy Table object with columns \"x_peak\" and \"y_peak\" containing the coordinates of the detection. The user may also include other measurements in it, even though they will not be covered by the metrics.\n- \"segmentation\" containing the measured segmentation. It should be a boolean array with shape (n_objects,stamp_size,stamp_size) where n_objects is the number of detected objects (must be coherent with the \"catalog\" object). The i-th channel should have pixels corresponding to the i-th object set to True.\n- \"deblended_images\" containing the deblended images. It should be an array with shape (n_objects, n_bands, stamp_size, stamp_size) where n_objects is the number of detected objects and n_bands the number of bands. If you set the channels_last option to True, it should instead be of shape (n_objects, stamp_size, stamp_size, n_bands).\n\nHere is an example with the sep measure function:",
"_____no_output_____"
]
],
[
[
"import sep\ndef sep_measure(batch, idx, channels_last=False, surveys=None, sigma_noise=1.5, **kwargs):\n \"\"\"Return detection, segmentation and deblending information with SEP.\n\n NOTE: If this function is used with the multiresolution feature,\n measurements will be carried on the first survey, and deblended images\n or segmentations will not be returned.\n\n Args:\n batch (dict): Output of DrawBlendsGenerator object's `__next__` method.\n idx (int): Index number of blend scene in the batch to preform\n measurement on.\n sigma_noise (float): Sigma threshold for detection against noise.\n\n Returns:\n dict with the centers of sources detected by SEP detection algorithm.\n \"\"\"\n channel_indx = 0 if not channels_last else -1\n\n # multiresolution\n if isinstance(batch[\"blend_images\"], dict):\n if surveys is None:\n raise ValueError(\"surveys are required in order to use the MR feature.\")\n survey_name = surveys[0].name\n image = batch[\"blend_images\"][survey_name][idx]\n avg_image = np.mean(image, axis=channel_indx)\n wcs = batch[\"wcs\"][survey_name]\n\n # single-survey\n else:\n image = batch[\"blend_images\"][idx]\n avg_image = np.mean(image, axis=channel_indx)\n wcs = batch[\"wcs\"]\n\n stamp_size = avg_image.shape[0]\n bkg = sep.Background(avg_image)\n catalog, segmentation = sep.extract(\n avg_image, sigma_noise, err=bkg.globalrms, segmentation_map=True\n )\n\n n_objects = len(catalog)\n segmentation_exp = np.zeros((n_objects, stamp_size, stamp_size), dtype=bool)\n deblended_images = np.zeros((n_objects, *image.shape), dtype=image.dtype)\n for i in range(n_objects):\n seg_i = segmentation == i + 1\n segmentation_exp[i] = seg_i\n seg_i_reshaped = np.zeros((np.min(image.shape), stamp_size, stamp_size))\n for j in range(np.min(image.shape)):\n seg_i_reshaped[j] = seg_i\n seg_i_reshaped = np.moveaxis(seg_i_reshaped, 0, np.argmin(image.shape))\n deblended_images[i] = image * seg_i_reshaped\n\n t = astropy.table.Table()\n t[\"ra\"], t[\"dec\"] = wcs.pixel_to_world_values(catalog[\"x\"], catalog[\"y\"])\n t[\"ra\"] *= 3600 #Converting to arcseconds\n t[\"dec\"] *= 3600\n\n # If multiresolution, return only the catalog\n if isinstance(batch[\"blend_images\"], dict):\n return {\"catalog\": t}\n else:\n return {\n \"catalog\": t,\n \"segmentation\": segmentation_exp,\n \"deblended_images\": deblended_images,\n }",
"_____no_output_____"
]
],
[
[
"You can see that the function takes `batch` and `idx` as arguments, but also has a `channels_last` argument and some kwargs. You can either specify arguments or catch them with `kwargs.get()`, as is done with `sigma_noise` there. To pass those arguments to the function, you can use the measure_kwargs, as detailed later in this tutorial. The `channels_last` (specifying if the channels are the first or last dimension of the image) and the `survey` (BTK survey object) are always passed to the function (you can choose if you want to catch them with `kwargs.get()` or not).",
"_____no_output_____"
],
[
"In the multiresolution case, the segmentation and the deblended images should be dictionaries indexed by the surveys, each entry containing the results as for the single resolution case. The catalog field does not change, as the ra and dec are independent from the resolution; it will be automatically split in the MeasureGenerator to get several catalogs containing the pixel coordinates.",
"_____no_output_____"
],
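As a rough sketch of that multiresolution return format (the survey names, band counts and stamp sizes below are placeholders, not BTK requirements), the dictionary returned by a measure function would be shaped like this:

```python
import numpy as np
import astropy.table

n_objects = 2
stamps = {"Rubin": 120, "HSC": 144}  # illustrative stamp sizes in pixels
bands = {"Rubin": 6, "HSC": 5}       # illustrative band counts

catalog = astropy.table.Table()
catalog["ra"] = np.zeros(n_objects)   # arcseconds; shared across resolutions
catalog["dec"] = np.zeros(n_objects)

measurement = {
    "catalog": catalog,
    "segmentation": {
        name: np.zeros((n_objects, size, size), dtype=bool) for name, size in stamps.items()
    },
    "deblended_images": {
        name: np.zeros((n_objects, bands[name], size, size)) for name, size in stamps.items()
    },
}
```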
[
"## Custom target measure\n<a id='custom_target_measure'></a>\n\nIn order to evaluate the quality of reconstructed galaxy images, one may want to take a look at the actual measurements that will be carried on those images. The \"target measures\" refer to this kind of measurement, such as the shape or the photometric redshift, which will be done in a weak lensing pipeline. They are used in the metrics part to evaluate the deblended image, by making the measurements both on the deblended image and on the associated true isolated galaxy image and comparing the two. Be careful not to make the confusion with the measure functions, which correspond to making detections, segmentations and deblended images.\n\nThis can be achieved with the `target_meas` argument of the MetricsGenerator. To create a new target measure, you need to create a function with two arguments : the image on which the measurements will be done, and a second one corresponding to additional data that may be needed, including the PSF, the pixel scale, a band number on which the measurement should be done (if applicable), and a boolean for verbosity. The function should then return the measurement, either as a number for a single measurement or as a list if you measure several at the same time (e.g. the two components of ellipticity) ; in case there is an error it should return NaN or a list of NaNs. To pass the function to the MetricsGenerator, you need to put it in a dictionary indexed by the name you want to give to the target measure (e.g. \"ellipticity\" or \"redshift\").\n\nThe function will be ran on all the deblended images, and the results can be found in `metrics_results[\"reconstruction\"][<measure function>][<name of the target measure>]`, or directly in the galaxy summary in a column with the name of the target measure. For each target measure, you will also have the results for the true galaxies under the key `<name of the target measure>_true`. Also, if your target measure has several outputs, they will be denoted `name0`, `name1`, ... and `name0_true`,... instead.\n\nLet us see how it works through an example. First we instantiate a MeasureGenerator as usual.",
"_____no_output_____"
]
],
[
[
"catalog_name = \"../data/sample_input_catalog.fits\"\nstamp_size = 24\nsurvey = btk.survey.get_surveys(\"Rubin\")\ncatalog = btk.catalog.CatsimCatalog.from_file(catalog_name)\ndraw_blend_generator = btk.draw_blends.CatsimGenerator(\n catalog,\n btk.sampling_functions.DefaultSampling(),\n survey,\n stamp_size=stamp_size,\n batch_size=100\n)\nmeas_generator = btk.measure.MeasureGenerator(btk.measure.sep_measure,draw_blend_generator)",
"_____no_output_____"
]
],
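Before looking at the built-in KSB example below, here is a toy target measure that relies only on the contract just described: it takes the image and the `additional_params` dictionary, returns a single number, and falls back to NaN on failure. The name `meas_total_flux` and the use of a plain sum as the "measurement" are ours, purely for illustration.

```python
import numpy as np


def meas_total_flux(image, additional_params):
    """Toy target measure: total flux in the measurement band.

    The image is expected with shape (n_bands, H, W), as in the KSB example below.
    """
    band = additional_params["meas_band_num"]
    try:
        return float(np.sum(image[band]))
    except Exception:
        return np.nan  # convention: return NaN when the measurement fails
```

It would then be passed to the MetricsGenerator as `target_meas={"total_flux": meas_total_flux}`, in the same way as the ellipticity measure a few cells below.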
[
[
"Then we can define a target measure function. This one is builtin with BTK and uses the Galsim implementation of the KSB method to measure the ellipticity of the galaxy.",
"_____no_output_____"
]
],
[
[
"def meas_ksb_ellipticity(image, additional_params):\n \"\"\"Utility function to measure ellipticity using the `galsim.hsm` package, with the KSB method.\n\n Args:\n image (np.array): Image of a single, isolated galaxy with shape (H, W).\n additional_params (dict): Containing keys 'psf', 'pixel_scale' and 'meas_band_num'.\n The psf should be a Galsim PSF model, and meas_band_num\n an integer indicating the band in which the measurement\n is done.\n \"\"\"\n meas_band_num = additional_params[\"meas_band_num\"]\n psf_image = galsim.Image(image.shape[1], image.shape[2])\n psf_image = additional_params[\"psf\"][meas_band_num].drawImage(psf_image)\n pixel_scale = additional_params[\"pixel_scale\"]\n verbose = additional_params[\"verbose\"]\n gal_image = galsim.Image(image[meas_band_num, :, :])\n gal_image.scale = pixel_scale\n shear_est = \"KSB\"\n\n res = galsim.hsm.EstimateShear(gal_image, psf_image, shear_est=shear_est, strict=False)\n result = [res.corrected_g1, res.corrected_g2, res.observed_shape.e]\n if res.error_message != \"\" and verbose:\n print(\n f\"Shear measurement error: '{res.error_message }'. \\\n This error may happen for faint galaxies or inaccurate detections.\"\n )\n result = [np.nan, np.nan, np.nan]\n return result\n",
"_____no_output_____"
],
[
"metrics_generator = btk.metrics.MetricsGenerator(meas_generator,\n target_meas={\"ellipticity\":meas_ksb_ellipticity},\n meas_band_num=2) # Note : the ellipticity will be computed in this band !\nblend_results,meas_results,metrics_results = next(metrics_generator)",
"_____no_output_____"
]
],
[
[
"We can now see the results :",
"_____no_output_____"
]
],
[
[
"print(\"Raw metrics results for deblended images : \",metrics_results[\"reconstruction\"][\"sep_measure\"][\"ellipticity0\"][:5])\nprint(\"Raw metrics results for true images : \",metrics_results[\"reconstruction\"][\"sep_measure\"][\"ellipticity0_true\"][:5])",
"Raw metrics results for deblended images : [[-0.02696719765663147, nan], [-10.0], [0.17775359749794006, -0.1585843712091446], [0.1856272965669632, -0.11457457393407822], [-0.026024628430604935]]\nRaw metrics results for true images : [[-0.0013318936107680202, nan], [-0.00601611565798521], [0.1264297366142273, -0.14031007885932922], [0.20856165885925293, -0.09992313385009766], [0.02358676679432392]]\n"
],
[
"print(metrics_results[\"galaxy_summary\"][\"sep_measure\"][\"ellipticity0\",\"ellipticity0_true\"][:5])",
" ellipticity0 ellipticity0_true \n-------------------- ----------------------\n-0.02696719765663147 -0.0013318936107680202\n nan nan\n -10.0 -0.00601611565798521\n 0.17775359749794006 0.1264297366142273\n -0.1585843712091446 -0.14031007885932922\n"
]
],
[
[
"As usual, we can use the interactive function to plot the results. In the case of target measures, you need to provide two additional arguments to have it work : a list of the names of the measures, and the interval in which they are comprised.",
"_____no_output_____"
]
],
[
[
"btk.plot_utils.plot_metrics_summary(metrics_results,interactive=True,\n target_meas_keys=['ellipticity0','ellipticity1'],\n target_meas_limits=[(-1, 1),(-1,1)])",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bea03b08f2b070f406416da243874d1a279f75 | 120,654 | ipynb | Jupyter Notebook | 2018_06_11_Preview_SciPy_Bern.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | 4 | 2018-05-30T10:39:47.000Z | 2018-11-10T15:39:53.000Z | 2018_06_11_Preview_SciPy_Bern.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | null | null | null | 2018_06_11_Preview_SciPy_Bern.ipynb | jaykim-asset/datascience_review | c55782f5d4226e179088346da399e299433c6ca6 | [
"MIT"
] | null | null | null | 105.099303 | 11,572 | 0.812936 | [
[
[
"- Scipy의 stats 서브 패키지에 있는 binom 클래스는 이항 분포 클래스이다. n 인수와 p 인수를 사용하여 모수를 설정한다",
"_____no_output_____"
]
],
[
[
"N = 10\ntheta = 0.6\nrv = sp.stats.binom(N, theta)\nrv",
"_____no_output_____"
]
],
[
[
"- pmf 메서드를 사용하면, 확률 질량 함수 (pmf: probability mass function)를 계산할 수 있다. ",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nxx = np.arange(N + 1)\nplt.bar(xx, rv.pmf(xx), align='center')\nplt.ylabel('p(x)')\nplt.title('binomial pmf')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
],
[
[
"- 시뮬레이션을 하려면 rvs 메서드를 사용한다.",
"_____no_output_____"
]
],
[
[
"np.random.seed(0)\nx = rv.rvs(100)\nx",
"_____no_output_____"
],
[
"sns.countplot(x)\nplt.title(\"Binomial Distribution's Simulation\")\nplt.xlabel('Sample')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
],
[
[
"- 이론적인 확률 분포와 샘플의 확률 분포를 동시에 나타내려면 다음과 같은 코드를 사용한다.",
"_____no_output_____"
]
],
[
[
"y = np.bincount(x, minlength=N+1)/float(len(x))\ndf = pd.DataFrame({'Theory': rv.pmf(xx), 'simulation': y}).stack()\ndf = df.reset_index()\ndf.columns = ['values', 'type', 'ratio']\ndf.pivot('values', 'type', 'ratio')\ndf",
"_____no_output_____"
],
[
"sns.barplot(x='values', y='ratio', hue='type', data=df)\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
],
[
[
"#### 연습 문제 1\n- 이항 확률 분포의 모수가 다음과 같을 경우에 각각 샘플을 생성한 후, 기댓값과 분산을 구하고 앞의 예제와 같이 확률 밀도 함수와 비교한 카운트 플롯을 그린다.\n- 샘풀의 갯수가 10개인 경우와 1000개인 경우에 대해 각각 위의 계산을 한다.\n- 1. Theta = 0.5, N = 5\n- 2. Theta = 0.9, N = 20",
"_____no_output_____"
]
],
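[
[
"A minimal sketch for the expectation-and-variance part of the exercise (the variable names are illustrative, and the imports are the ones already used above): the theoretical moments of the frozen distribution can be compared directly with the sample moments.",
"_____no_output_____"
]
],
[
[
"# theoretical vs. sample mean and variance (sketch for case 1 with 10 samples)\nrv_ex = sp.stats.binom(5, 0.5)\nx_ex = rv_ex.rvs(10)\nprint('theory     : mean =', rv_ex.mean(), ', var =', rv_ex.var())\nprint('simulation : mean =', x_ex.mean(), ', var =', x_ex.var())",
"_____no_output_____"
]
],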
[
[
"# 연습문제 1 - 1\nN = 5\ntheta = 0.5\nrv = sp.stats.binom(N, theta)\n\nxx10 = np.arange(N + 1)\nplt.bar(xx, rv.pmf(xx10), align='center')\nplt.ylabel('P(x)')\nplt.title('Binomail Distribution pmdf')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"# sample 갯수 10개 일 때\nnp.random.seed(0)\nx10 = rv.rvs(10)\nsns.countplot(x10)\nplt.title('binomail distribution Simulation 10')\nplt.xlabel('values')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"# sample 갯수가 1000개 일 때\nx1000 = rv.rvs(1000)\nsns.countplot(x1000)\nplt.title('binomail distribution Simulation 10')\nplt.xlabel('values')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"y10 = np.bincount(x10, minlength = N + 1)/float(len(x10))\ndf = pd.DataFrame({'Theory': rv.pmf(xx10), 'Simulation': y10}).stack()\ndf = df.reset_index()\ndf.columns = ['values', 'type', 'ratio']\ndf.pivot('values', 'type', 'ratio')\nsns.barplot(x='values', y='ratio', hue='type', data=df)\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"df",
"_____no_output_____"
]
],
[
[
"#### 샘플 갯수가 1000개일 경우에 theta = 0.9, N = 20",
"_____no_output_____"
]
],
[
[
"N = 20\ntheta = 0.9\nrv = sp.stats.binom(N, theta)\n\nxx = np.arange(N + 1)\nplt.bar(xx, rv.pmf(xx), align = 'center')\nplt.ylabel('P(x)')\nplt.title('binomial pmf when N=20')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"x1000 = rv.rvs(1000) # sample 1000개 생성\nsns.countplot(x1000)\nplt.title(\"Binomial Distribution's Simulation\")\nplt.xlabel('values')\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
],
[
"y1000 = np.bincount(x1000, minlength = N + 1)/float(len(x1000))\ndf = pd.DataFrame({'Theory':rv.pmf(xx), 'Simulation': y1000}).stack()\ndf = df.reset_index()\ndf.columns = ['values', 'type', 'ratio']\ndf.pivot('values', 'type', 'ratio')\ndf",
"_____no_output_____"
],
[
"sns.barplot(x='values', y='ratio', hue='type', data=df)\nplt.show()",
"C:\\ProgramData\\Anaconda3\\lib\\site-packages\\matplotlib\\font_manager.py:1320: UserWarning: findfont: Font family ['nanumgothic'] not found. Falling back to DejaVu Sans\n (prop.get_family(), self.defaultFamily[fontext]))\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0bea99eebc912ffaba045d477e6fbe0baa6b0c2 | 444,801 | ipynb | Jupyter Notebook | tutorial/10 - Exporting and Embedding.ipynb | bingyao/bokeh-notebooks | 4487fb6e37b0ee04e39f0758e6e67597c6b65b3a | [
"BSD-3-Clause"
] | 1 | 2019-04-28T20:38:23.000Z | 2019-04-28T20:38:23.000Z | tutorial/10 - Exporting and Embedding.ipynb | bingyao/bokeh-notebooks | 4487fb6e37b0ee04e39f0758e6e67597c6b65b3a | [
"BSD-3-Clause"
] | null | null | null | tutorial/10 - Exporting and Embedding.ipynb | bingyao/bokeh-notebooks | 4487fb6e37b0ee04e39f0758e6e67597c6b65b3a | [
"BSD-3-Clause"
] | null | null | null | 475.214744 | 143,850 | 0.880767 | [
[
[
"<table style=\"float:left; border:none\">\n <tr style=\"border:none; background-color: #ffffff\">\n <td style=\"border:none\">\n <a href=\"http://bokeh.pydata.org/\"> \n <img \n src=\"assets/bokeh-transparent.png\" \n style=\"width:50px\"\n >\n </a> \n </td>\n <td style=\"border:none\">\n <h1>Bokeh Tutorial</h1>\n </td>\n </tr>\n</table>\n\n<div style=\"float:right;\"><h2>07. Exporting and Embedding</h2></div>",
"_____no_output_____"
],
[
"So far we have seen how to generate interactive Bokeh output directly inline in Jupyter notbeooks. It also possible to embed interactive Bokeh plots and layouts in other contexts, such as standalone HTML files, or Jinja templates. Additionally, Bokeh can export plots to static (non-interactive) PNG and SVG formats. \n\nWe will look at all of these possibilities in this chapter. First we make the usual imports.",
"_____no_output_____"
]
],
[
[
"from bokeh.io import output_notebook, show\noutput_notebook()",
"_____no_output_____"
]
],
[
[
"And also load some data that will be used throughout this chapter",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nfrom bokeh.plotting import figure\nfrom bokeh.sampledata.stocks import AAPL\n\ndf = pd.DataFrame(AAPL)\ndf['date'] = pd.to_datetime(df['date'])",
"_____no_output_____"
]
],
[
[
"# Embedding Interactive Content\n\nTo start we will look differnet ways of embedding live interactive Bokeh output in various situations. ",
"_____no_output_____"
],
[
"## Displaying in the Notebook\n\nThe first way to embed Bokeh output is in the Jupyter Notebooks, as we have already, seen. As a reminder, the cell below will generate a plot inline as output, because we executed `output_notebook` above.",
"_____no_output_____"
]
],
[
[
"p = figure(plot_width=800, plot_height=250, x_axis_type=\"datetime\")\np.line(df['date'], df['close'], color='navy', alpha=0.5)\n\nshow(p)",
"_____no_output_____"
]
],
[
[
"## Saving to an HTML File\n\nIt is also often useful to generate a standalone HTML script containing Bokeh content. This is accomplished by calling the `output_file(...)` function. It is especially common to do this from standard Python scripts, but here we see that it works in the notebook as well. ",
"_____no_output_____"
]
],
[
[
"from bokeh.io import output_file, show",
"_____no_output_____"
],
[
"output_file(\"plot.html\")",
"_____no_output_____"
],
[
"show(p) # save(p) will save without opening a new browser tab",
"_____no_output_____"
]
],
[
[
"In addition the inline plot above, you should also have seen a new browser tab open with the contents of the newly saved \"plot.html\" file. It is important to note that `output_file` initiates a *persistent mode of operation*. That is, all subsequent calls to show will generate output to the specified file. We can \"reset\" where output will go by calling `reset_output`:",
"_____no_output_____"
]
],
[
[
"from bokeh.io import reset_output\nreset_output()",
"_____no_output_____"
]
],
[
[
"## Templating in HTML Documents\n\nAnother use case is to embed Bokeh content in a Jinja HTML template. We will look at a simple explicit case first, and then see how this technique might be used in a web app framework such as Flask. \n\nThe simplest way to embed standalone (i.e. not Bokeh server) content is to use the `components` function. This function takes a Bokeh object, and returns a `<script>` tag and `<div>` tag that can be put in any HTML tempate. The script will eecute and load the Bokeh content into the associated div. \n\nThe cells below show a complete example, including loading BokehJS JS and CSS resources in the temlpate.",
"_____no_output_____"
]
],
[
[
"import jinja2\nfrom bokeh.embed import components\n\n# IMPORTANT NOTE!! The version of BokehJS loaded in the template should match \n# the version of Bokeh installed locally.\n\ntemplate = jinja2.Template(\"\"\"\n<!DOCTYPE html>\n<html lang=\"en-US\">\n\n<link\n href=\"http://cdn.pydata.org/bokeh/dev/bokeh-0.13.0.min.css\"\n rel=\"stylesheet\" type=\"text/css\"\n>\n<script \n src=\"http://cdn.pydata.org/bokeh/dev/bokeh-0.13.0.min.js\"\n></script>\n\n<body>\n\n <h1>Hello Bokeh!</h1>\n \n <p> Below is a simple plot of stock closing prices </p>\n \n {{ script }}\n \n {{ div }}\n\n</body>\n\n</html>\n\"\"\")",
"_____no_output_____"
],
[
"p = figure(plot_width=800, plot_height=250, x_axis_type=\"datetime\")\np.line(df['date'], df['close'], color='navy', alpha=0.5)\n\nscript, div = components(p)",
"_____no_output_____"
],
[
"from IPython.display import HTML\nHTML(template.render(script=script, div=div))",
"_____no_output_____"
]
],
[
[
"Note that it is possible to pass multiple objects to a single call to `components`, in order to template multiple Bokeh objects at once. See the [User's Guide for components](https://bokeh.pydata.org/en/latest/docs/user_guide/embed.html#components) for more information.\n\n\nOnce we have the script and div from `components`, it is straighforward to serve a rendered page containing Bokeh content in a web application, e.g. a Flask app as shown below.",
"_____no_output_____"
]
],
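[
[
"As a sketch of the multi-object case (the variable names `p1`, `p2`, `div1` and `div2` below are illustrative), passing a tuple of figures to `components` returns one script plus one div per figure:",
"_____no_output_____"
]
],
[
[
"from bokeh.embed import components\n\n# Sketch: templating two plots with a single call to components().\n# The figures below simply reuse the df loaded earlier in this notebook.\np1 = figure(plot_width=300, plot_height=200, x_axis_type=\"datetime\")\np1.line(df['date'], df['close'], color='navy')\n\np2 = figure(plot_width=300, plot_height=200, x_axis_type=\"datetime\")\np2.line(df['date'], df['open'], color='firebrick')\n\nscript2, (div1, div2) = components((p1, p2))\n# The single script loads both plots; div1 and div2 are placed wherever\n# the corresponding plots should appear in the template.",
"_____no_output_____"
]
],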
[
[
"from flask import Flask\napp = Flask(__name__)\n\[email protected]('/')\ndef hello_bokeh():\n return template.render(script=script, div=div)",
"_____no_output_____"
],
[
"# Uncomment to run the Flask Server. Use Kernel -> Interrupt from Notebook menubar to stop \n#app.run(port=5050)",
"_____no_output_____"
],
[
"# EXERCISE: Create your own template (or modify the one above) \n",
"_____no_output_____"
]
],
[
[
"# Exporting Static Images\n\nSometimes it is desirable to produce static images of plots or other Bokeh output, without any interactive capabilities. Bokeh supports exports to PNG and SVG formats. ",
"_____no_output_____"
],
[
"## PNG Export\n\nBokeh supports exporting a plot or layout to PNG image format with the `export_png` function. This function is alled with a Bokeh object to export, and a filename to write the PNG output to. Often the Bokeh object passed to `export_png` is a single plot, but it need not be. If a layout is exported, the entire lahyout is saved to one PNG image. \n\n***Important Note:*** *the PNG export capability requires installing some additional optional dependencies. The simplest way to obtain them is via conda:*\n\n conda install selenium phantomjs pillow\n",
"_____no_output_____"
]
],
[
[
"from bokeh.io import export_png\n\np = figure(plot_width=800, plot_height=250, x_axis_type=\"datetime\")\np.line(df['date'], df['close'], color='navy', alpha=0.5)\n\nexport_png(p, filename=\"plot.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage('plot.png')",
"_____no_output_____"
],
[
"# EXERCISE: Save a layout of plots (e.g. row or column) as SVG and see what happens \n",
"_____no_output_____"
]
],
[
[
"## SVG Export\n\nBokeh can also generate SVG output in the browser, instead of rendering to HTML canvas. This is accomplished by setting `output_backend='svg'` on a figure. This can be be used to generate SVGs in `output_file` HTML files, or in content emebdded with `components`. It can also be used with the `export_svgs` function to save `.svg` files. Note that an SVG is created for *each canvas*. It is not possible to capture entire layouts or widgets in SVG output. \n\n***Important Note:*** *There a currently some known issue with SVG output, it may not work for all use-cases*",
"_____no_output_____"
]
],
[
[
"from bokeh.io import export_svgs\n\np = figure(plot_width=800, plot_height=250, x_axis_type=\"datetime\", output_backend='svg')\np.line(df['date'], df['close'], color='navy', alpha=0.5)\n\nexport_svgs(p, filename=\"plot.svg\")",
"_____no_output_____"
],
[
"from IPython.display import SVG\nSVG('plot.svg')",
"_____no_output_____"
],
[
"# EXERCISE: Save a layout of plots (e.g. row or column) as SVG and see what happens \n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0beace51bdfce194b2a4d079187a48163e9f7d6 | 18,566 | ipynb | Jupyter Notebook | data/label_behavior.ipynb | intelligent-control-lab/Auto_Vehicle_Simulator | 091c166cfb09653a0327016dcc603a3409e5d6c5 | [
"MIT"
] | 10 | 2019-10-07T06:14:15.000Z | 2021-08-28T03:41:52.000Z | data/label_behavior.ipynb | intelligent-control-lab/Auto_Vehicle_Simulator | 091c166cfb09653a0327016dcc603a3409e5d6c5 | [
"MIT"
] | null | null | null | data/label_behavior.ipynb | intelligent-control-lab/Auto_Vehicle_Simulator | 091c166cfb09653a0327016dcc603a3409e5d6c5 | [
"MIT"
] | 16 | 2019-02-22T01:50:34.000Z | 2022-03-04T07:26:52.000Z | 32.976909 | 124 | 0.34881 | [
[
[
"import pandas as pd\nimport numpy as np\nimport pickle as pk",
"_____no_output_____"
],
[
"file_name = '1_min'\ndf = pd.read_csv(file_name + '.csv')\ndf['behavior'] = np.zeros(len(df)).astype(np.int)",
"_____no_output_____"
],
[
"intention_2_action_delay = 3000\n\nacc_threshold = 1\n\n# 0 for changing to left\n# 1 for changing to right\n# 2 for following\n\nnext_lane_change_time = dict()\nnext_lane_change_direct = dict()\nnext_vel_change_time = dict()\nnext_vel_change_direct = dict()\n\ndef classify_behavior(v_id, cur_time):\n if next_lane_change_time[v_id] > -1 and next_lane_change_time[v_id] - cur_time < intention_2_action_delay:\n return next_lane_change_direct[v_id]\n return 2\n# if next_vel_change_time[v_id] > -1 and next_vel_change_time[v_id] - r.Global_Time < intention_2_action_delay:\n# return next_vel_change_direct[v_id] + 2\n# return 4\n\nlane_id = dict()\nbehavior_seq = dict()\nbehavior_seq_id = dict()\nchange_point = list()\ncnt = np.zeros((5, 5))\nfor i in reversed(range(len(df))):\n \n r = df.iloc[i]\n v_id = r.Vehicle_ID\n \n if v_id not in lane_id.keys():\n lane_id[v_id] = r.Lane_ID\n next_lane_change_time[v_id] = -1\n next_vel_change_time[v_id] = -1\n behavior_seq[v_id] = list()\n \n if r.Lane_ID != lane_id[v_id]:\n next_lane_change_time[v_id] = r.Global_Time\n next_lane_change_direct[v_id] = int(r.Lane_ID < lane_id[v_id])\n lane_id[v_id] = r.Lane_ID\n \n# if abs(r.v_Acc) > acc_threshold:\n# next_vel_change_time[v_id] = r.Global_Time\n# next_vel_change_direct[v_id] = int(r.v_Acc > 0)\n \n bhv = classify_behavior(v_id, r.Global_Time)\n \n if len(behavior_seq[v_id])>0 and behavior_seq[v_id][-1] < 2 and bhv != behavior_seq[v_id][-1]:\n change_point.append((v_id, behavior_seq_id[v_id]))\n\n behavior_seq[v_id].append(bhv)\n \n behavior_seq_id[v_id] = i\n \n df.at[i,'behavior']= bhv ",
"_____no_output_____"
],
[
"dT = 0.1\nx = list()\ny = list()\nvehicles = dict()\nshow_up = set()\nvel_sum = 0\nfor i in range(len(df)):\n r = df.iloc[i]\n v_id = r.Vehicle_ID\n \n show_up.add(v_id)\n \n if v_id not in vehicles.keys():\n df.at[i,'lateral_acc'] = 0\n df.at[i,'lateral_vel'] = 0\n vehicles[v_id] = df.iloc[i].copy()\n vel_sum += vehicles[v_id].v_Vel\n else:\n lateral_V = (r.Local_X - vehicles[v_id].Local_X) / dT\n vel_sum -= vehicles[v_id].v_Vel\n df.at[i,'lateral_acc'] = (lateral_V - vehicles[v_id]['lateral_vel']) /dT\n df.at[i,'lateral_vel'] = lateral_V\n vehicles[v_id] = df.iloc[i].copy()\n vel_sum += vehicles[v_id].v_Vel\n \n \n \n v_mean = vel_sum / len(vehicles)\n \n df.at[i,'mean_vel'] = v_mean\n\n #remove exited car after every moment\n if i == len(df)-1 or r.Global_Time != df.iloc[i+1].Global_Time:\n v_ids = list(vehicles.keys())\n for v_id in v_ids:\n if v_id not in show_up:\n vel_sum -= vehicles[v_id].v_Vel\n vehicles.pop(v_id)\n show_up = set()\n",
"_____no_output_____"
],
[
"df[:10]",
"_____no_output_____"
],
[
"df.to_csv(file_name + '_labeled.csv')",
"_____no_output_____"
],
[
"for i in range((len(change_point)-1) // 10):\n v_id, idx = change_point[i*10]\n df[max(0,idx-5000):min(idx+15000,len(df))].to_csv('lane_changing_data/'+str(v_id)+'_'+str(idx)+'.csv')",
"_____no_output_____"
],
[
"len(change_point)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0beb495c248b07700433a54ae2f0ba8a83b73df | 5,736 | ipynb | Jupyter Notebook | Validate_code.ipynb | qyu6/kbs | 9c75f57ba3a006b889c8cfc0390427ea450ab350 | [
"Apache-2.0"
] | null | null | null | Validate_code.ipynb | qyu6/kbs | 9c75f57ba3a006b889c8cfc0390427ea450ab350 | [
"Apache-2.0"
] | null | null | null | Validate_code.ipynb | qyu6/kbs | 9c75f57ba3a006b889c8cfc0390427ea450ab350 | [
"Apache-2.0"
] | null | null | null | 31.516484 | 161 | 0.466702 | [
[
[
"\n\n\nfrom os.path import exists\nimport openpyxl\nimport os\nimport pandas as pd\nimport re\nfrom collections import Counter\nimport streamlit as st\n\n\npd.set_option('display.max_colwidth',None)\n\nresult = 'searchoutput.csv'\nif exists(result):\n os.remove(result)\n# 创建结果文件\nwbResult = openpyxl.Workbook()\nwsResult = wbResult.worksheets[0]\nwsResult.append(['result'])\n# 读取原表两次,一次用来进行建表输入,一次用来做对应的输入\nwb = openpyxl.load_workbook('SourceDB.xlsx')\ninput_excel = 'SourceDB.xlsx'\ndata = pd.read_excel(input_excel)\nws = wb.worksheets[0]\n# 原表空白部分用*填充\nfor k in range(1,ws.max_column+1):\n for i in range(1,ws.max_row+1):\n if ws.cell(row=i,column=k).value is None:\n ws.cell(i,k,'****')\n\n\ninput_word = input(\"请输入搜索内容:\").strip().lower()\n# st.subheader('🐼[T.Q Knowledge Base]')\ninput_word1 = st.text_input('©TAILab|Last release:2022/3/3','')\ninput_word = input_word1.strip().lower()\ninput_word_exist = re.sub(u\"([u4e00-\\u9fa5\\u0030-\\u0039\\u0041-\\u005a\\u0061-\\u007a])\",\"\",input_word)\ninput_word = input_word.split()\n\n\n\nresult_list = []\nfor index,row in enumerate(ws.rows):\n\n if index == 0:\n continue\n rs_list = list(map(lambda cell: cell.value, row))\n list_str = \"\".join('%s' %id for id in rs_list).replace(\"\\n\",\" \").replace(\"\\n\",\" \").replace(\"\\t\",\" \").replace(\"\\r\",\" \").lower()\n result_list.append([index, list_str])\n\n\n\ndef search_onebyone(input_word_exist, input_word_list, result_list):\n new_list = []\n dict_list = []\n new_list_count = []\n # 精确匹配\n for i in range(len(result_list)):\n for m in input_word_list:\n pattern = m\n regex = re.compile(pattern)\n nz = regen.search(result_list[i][1])\n if nz:\n new_list.append([len(nz.group()),nz.stat(),result_list[i][0]-1])\n new_list_count.append(result_list[i][0]-1)\n\n new_list = sorted(new_list)\n new_index = [x for _,_,x in new_list]\n new_index = sorted(set(new_index),key=new_index.index)\n\n # 计数,只有当输入的全部单词全部出现以后,才取出\n dict_list.append([k for k,v in Counter(new_list_count).items() if v == len(input_word_list)])\n for m in dict_list:\n result_index = m\n temp = [j for j in new_index if j in result_index]\n return temp\nresult = search_onebyone(input_word_exist, input_word, result_list)\n\n\n\ndef display_highlighted_words(df, keywords):\n head = \"\"\"\n <talbe>\n <thead>\n \"\"\" + \\\n \"\".join([\"<th> %s </th>\" % c for c in df.columns])\\\n + \"\"\"\n </thead>\n </table>\"\"\"\n\n head = \"\"\"\n <table>\n <thead>\n <th> Keywords </th><th> Content </th>\n </thead>\n </table>\n \"\"\"\n\n for i,r in df.iterrows():\n row = \"<tr>\"\n for c in df.columns:\n matches = []\n for k in keywords:\n for match in re.finditer(k, str(r[c])):\n matches.append(match)\n\n # reverse sorting\n matches = sorted(matches, key = lambda x: x.start(), reverse=True)\n\n # building HTML row\n cell = str(r[c])\n\n # print(cell)\n for match in matches:\n cell = cell[:match.start()] +\\\n \"<span style='color:red;background-color:yellow'> %s </span>\" % cell[match.start():match.end()] +\\\n cell[match.end():]\n\n row += \"<td> %s </td>\" % cell\n\n row += \"</tr>\"\n\n head += row\n\n\n head += \"</tbody></table>\"\n\n return head\n\n# htmlcode1 = display_highlighted_words(dftest, input_word)\n# st.markdown(htmlcode1, unsafe_allow_html=True)\n\n\nif len(input_word)>0:\n display(data.loc[(x for x in result)])\n\n\ndata.loc[(x for x in result)].to_csv('searchoutput.csv', encoding= 'utf_8_sig')",
"请输入搜索内容:天局\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0bebd8f950855a4b764df0127ffeed4557ed648 | 61,809 | ipynb | Jupyter Notebook | examples/rlab_table_example.ipynb | lsternlicht/tia | fe74d1876260a946e52bd733bc32da0698749f2c | [
"BSD-3-Clause"
] | 366 | 2015-01-21T21:57:23.000Z | 2022-03-29T09:11:24.000Z | examples/rlab_table_example.ipynb | lsternlicht/tia | fe74d1876260a946e52bd733bc32da0698749f2c | [
"BSD-3-Clause"
] | 51 | 2015-03-01T14:20:44.000Z | 2021-08-19T15:46:51.000Z | examples/rlab_table_example.ipynb | lsternlicht/tia | fe74d1876260a946e52bd733bc32da0698749f2c | [
"BSD-3-Clause"
] | 160 | 2015-02-22T07:16:17.000Z | 2022-03-29T13:41:15.000Z | 128.23444 | 45,919 | 0.806679 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0bec9b708f65c4a2b9dc6d460254b4c04952ee3 | 132,608 | ipynb | Jupyter Notebook | supp_ntbks_arxiv.2106.08688/gp_ga_reconstruction.ipynb | reggiebernardo/notebooks | b54efe619e600679a5c84de689461e26cf1f82af | [
"MIT"
] | null | null | null | supp_ntbks_arxiv.2106.08688/gp_ga_reconstruction.ipynb | reggiebernardo/notebooks | b54efe619e600679a5c84de689461e26cf1f82af | [
"MIT"
] | null | null | null | supp_ntbks_arxiv.2106.08688/gp_ga_reconstruction.ipynb | reggiebernardo/notebooks | b54efe619e600679a5c84de689461e26cf1f82af | [
"MIT"
] | null | null | null | 117.769094 | 28,216 | 0.829415 | [
[
[
"## Gaussian processes with genetic algorithm for the reconstruction of late-time Hubble data",
"_____no_output_____"
],
[
"This notebook uses Gaussian processes (GP) with the genetic algorithm (GA) to reconstruct the cosmic chronometers and supernovae data sets ([2106.08688](https://arxiv.org/abs/2106.08688)). We shall construct our own GP class and use it with the python package ``pygad`` (https://pygad.readthedocs.io/) for the GA.\n\nReferences to the data can be found at the end of the notebook.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy as np\nfrom numpy.random import uniform as unif\nimport matplotlib.pyplot as plt\nimport pygad",
"_____no_output_____"
]
],
[
[
"### 0. My GP class",
"_____no_output_____"
],
[
"Here is the GP class (written from scratch) that we shall use in this notebook.",
"_____no_output_____"
]
],
[
[
"class GP:\n '''Class for making GP predictions.\n \n rbf: k(r) = A^2 \\exp(-r^2/(2l^2))\n rq : k(r) = A^2 (1 + (r^2/(2 \\alpha l^2)))^{-\\alpha}\n m52: k(r) = A^2 \\exp(-\\sqrt{5}r/l)\n (1 + \\sqrt{5}r/l + 5r^2/(3l^2))\n mix: rbf + chy + m52\n \n Input:\n chromosome: list of kernel hyperparameters \n '''\n \n def __init__(self, chromosome):\n self.C_rbf = chromosome[0] # rbf genes\n self.l_rbf = chromosome[1]\n self.n_rbf = chromosome[2]\n self.C_rq = chromosome[3] # rq genes\n self.l_rq = chromosome[4]\n self.a_rq = chromosome[5]\n self.n_rq = chromosome[6]\n self.C_m52 = chromosome[7] # m52 genes\n self.l_m52 = chromosome[8]\n self.n_m52 = chromosome[9]\n \n def kernel(self, x, y):\n r = x - y\n # rbf term\n k_rbf = np.exp(-(r**2)/(2*(self.l_rbf**2)))\n rbf_term = (self.C_rbf**2)*(k_rbf**self.n_rbf)\n # rq term\n r = x - y\n R_sq = (r**2)/(2*(self.l_rq**2))\n k_rq = 1/((1 + R_sq/self.a_rq)**self.a_rq)\n rq_term = (self.C_rq**2)*(k_rq**self.n_rq)\n # m52 term\n X = np.sqrt(5)*np.abs(r)/self.l_m52\n B = 1 + X + ((X**2)/3)\n k_m52 = B*np.exp(-X)\n m52_term = (self.C_m52**2)*(k_m52**self.n_m52)\n return rbf_term + rq_term + m52_term\n \n def k_plus_c_inv(self, Z, C):\n k_ZZ = np.array([[self.kernel(z_i, z_j) \\\n for z_i in Z]\n for z_j in Z])\n return np.linalg.inv(k_ZZ + C)\n \n def cov(self, Z, C, Zs):\n '''Returns the covariance matrix at Zs.\n \n Note: Zs must be an array.'''\n kpc_inv = self.k_plus_c_inv(Z, C)\n return np.array([[self.kernel(z_i, z_j) \\\n -(self.kernel(z_i, Z) @ \\\n kpc_inv @ \\\n self.kernel(Z, z_j)) \\\n for z_i in Zs] \\\n for z_j in Zs])\n \n def var(self, Z, C, Zs):\n '''Returns the variance at Zs.\n \n Note: Zs must be an array.'''\n kpc_inv = self.k_plus_c_inv(Z, C)\n return np.array([self.kernel(zs, zs) \\\n -(self.kernel(zs, Z) @ \\\n kpc_inv @ \\\n self.kernel(Z, zs)) \\\n for zs in Zs])\n \n def get_logmlike(self, Z, Y, C):\n '''Returns the log-marginal likelihood.'''\n kpc_inv = self.k_plus_c_inv(Z, C)\n kpc = np.linalg.inv(kpc_inv)\n kpc_det = np.linalg.det(kpc)\n Ys = np.array([(self.kernel(zs, Z) @ kpc_inv \\\n @ Y) for zs in Z])\n delta_y = Y\n return -0.5*(delta_y @ kpc_inv @ delta_y) \\\n -0.5*np.log(kpc_det) \\\n -0.5*len(Z)*np.log(2*np.pi)\n \n def predict(self, Z, Y, C, Zs, with_cov = False, \\\n k_as_cov = False):\n kpc_inv = self.k_plus_c_inv(Z, C)\n mean = np.array([(self.kernel(zs, Z) @ kpc_inv \\\n @ Y) for zs in Zs])\n if with_cov == False:\n var_zz = self.var(Z, C, Zs)\n return {'z': Zs, 'Y': mean, \\\n 'varY': var_zz}\n elif (with_cov == True) and (k_as_cov == False):\n cov_zz = self.cov(Z, C, Zs)\n return {'z': Zs, 'Y': mean, \\\n 'covY': cov_zz}\n elif (with_cov == True) and (k_as_cov == True):\n cov_zz = np.array([[self.kernel(z_i, z_j) \\\n for z_i in Zs] \\\n for z_j in Zs])\n return {'z': Zs, 'Y': mean, \\\n 'covY': cov_zz}",
"_____no_output_____"
]
],
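[
[
"A quick, self-contained sanity check of the `GP` class on synthetic data. The chromosome and data values below are illustrative only (not fitted); the RQ and Matern-5/2 amplitudes are set to zero so that only the RBF term contributes.",
"_____no_output_____"
]
],
[
[
"# Illustrative chromosome: [C_rbf, l_rbf, n_rbf, C_rq, l_rq, a_rq, n_rq, C_m52, l_m52, n_m52];\n# the RQ and M52 amplitudes are set to zero so only the RBF term is active.\ntoy_chromosome = [100, 2, 1, 0, 1, 1, 1, 0, 1, 1]\ngp_toy = GP(toy_chromosome)\n\n# synthetic data points (purely illustrative values)\nz_toy = np.array([0.1, 0.5, 1.0, 1.5])\ny_toy = np.array([70.0, 85.0, 120.0, 160.0])\ncov_toy = np.diag(np.array([5.0, 6.0, 8.0, 10.0])**2)\n\nprint('log-marginal likelihood:', gp_toy.get_logmlike(z_toy, y_toy, cov_toy))\nprint('prediction at z = 0 and z = 2:', gp_toy.predict(z_toy, y_toy, cov_toy, np.array([0.0, 2.0]))['Y'])",
"_____no_output_____"
]
],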
[
[
"This will be used for both the cosmic chronometers (Section 1) and supernovae applications (Section 2).",
"_____no_output_____"
],
[
"### 1. Cosmic chronometers",
"_____no_output_____"
],
[
"Importing the cosmic chronometers data set.",
"_____no_output_____"
]
],
[
[
"cc_data = np.loadtxt('cc_data.txt')\n\nz_cc = cc_data[:, 0]\nHz_cc = cc_data[:, 1]\nsigHz_cc = cc_data[:, 2]\n\nfig, ax = plt.subplots()\nax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc,\n fmt = 'ro', ecolor = 'k',\n markersize = 7, capsize = 3)\nax.set_xlabel('$z$')\nax.set_ylabel('$H(z)$')\nplt.show()",
"_____no_output_____"
]
],
[
[
"To use the GA, we setup the log-marginal likelihood as a fitness function. In addition, we consider a Bayesian-information type penalty to fine complex kernels.",
"_____no_output_____"
]
],
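[
[
"In equation form, the penalty implemented in the next cell is of Bayesian-information-criterion type,\n\n$$\\mathrm{penalty} = \\frac{k}{2}\\ln n_{\\mathrm{data}},$$\n\nwhere $n_{\\mathrm{data}}$ is the number of data points and $k$ counts the hyperparameters of the kernel terms that are effectively switched on (3 for the RBF term, 4 for the RQ term and 3 for the Matern-5/2 term, a term counting only when its amplitude $A_X = C_X l_X$ exceeds a small threshold). The fitness is then the log-marginal likelihood minus this penalty.",
"_____no_output_____"
]
],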
[
[
"n_data = len(z_cc)\n\ndef penalty(chromosome):\n '''Identifies a penalty term to be factored in the fitness function\n so that longer/more complex kernels will be given a due weight.'''\n\n c_rbf = chromosome[0]\n l_rbf = chromosome[1]\n A_rbf = c_rbf*l_rbf\n\n c_rq = chromosome[3]\n l_rq = chromosome[4]\n A_rq = c_rq*l_rq\n \n c_m52 = chromosome[7]\n l_m52 = chromosome[8]\n A_m52 = c_m52*l_m52\n \n # set threshold to A_X = c_x*l_x\n A_th = 1e-3\n k = 0\n if A_rbf > A_th:\n k += 3\n if A_rq > A_th:\n k += 4\n if A_m52 > A_th:\n k += 3\n \n return k*np.log(n_data)/2\n\ndef get_fit(chromosome):\n '''Evaluates the fitness of the indivial with chromosome'''\n if all(hp > 0 for hp in chromosome) == True:\n pnl = penalty(chromosome)\n try:\n gp = GP(chromosome)\n lml = gp.get_logmlike(z_cc, Hz_cc,\n np.diag(sigHz_cc**2))\n return lml - pnl\n except:\n lml = -1000\n return lml\n else:\n lml = -1000\n return lml\n \ndef fitness_function(chromosome, chromosome_idx):\n return get_fit(chromosome)",
"_____no_output_____"
]
],
[
[
"In the next line, we setup an equally uniform population of pure-bred kernels and a diverse set of kernels. It is interesting to see the evolution of the uniform population compared to one which is a lot more diverse.",
"_____no_output_____"
]
],
[
[
"pop_size = 1000 # population size\n\ninit_uni = []\nfor i in range(0, pop_size):\n if i < int(pop_size/3):\n init_uni.append([unif(0, 300), unif(0, 10), unif(0, 5),\n 0, 0, 0, 0, 0, 0, 0])\n elif (i > int(pop_size/3)) and (i < int(2*pop_size/3)):\n init_uni.append([0, 0, 0,\n unif(0, 300), unif(0, 10), unif(0, 2), unif(0, 5),\n 0, 0, 0])\n else:\n init_uni.append([0, 0, 0, 0, 0, 0, 0,\n unif(0, 300), unif(0, 10), unif(0, 5)])\ninit_uni = np.array(init_uni)\n\ninit_div = []\nfor i in range(0, pop_size):\n init_div.append([unif(0, 300), unif(0, 10), unif(0, 5),\n unif(0, 300), unif(0, 10), unif(0, 2), unif(0, 5),\n unif(0, 300), unif(0, 10), unif(0, 5)])\ninit_div = np.array(init_div)",
"_____no_output_____"
]
],
[
[
"Given this, we prepare the parameters of the GA.",
"_____no_output_____"
]
],
[
[
"gene_space = [{'low': 0, 'high': 300}, {'low': 0, 'high': 10}, {'low': 0, 'high': 5}, # rbf lims\n {'low': 0, 'high': 300}, {'low': 0, 'high': 10}, # chy lims\n {'low': 0, 'high': 2}, {'low': 0, 'high': 5}, \n {'low': 0, 'high': 300}, {'low': 0, 'high': 10}, {'low': 0, 'high': 5}] # m52 lims\n\nnum_genes = 10 # length of chromosome\n\nn_gen = 100 # number of generations\nsel_rate = 0.3 # selection rate\n \n# parent selection\nparent_selection_type = \"rws\" # roulette wheel selection\nkeep_parents = int(sel_rate*pop_size)\nnum_parents_mating = int(sel_rate*pop_size)\n\n# crossover\n#crossover_type = \"single_point\"\n#crossover_type = \"two_points\"\n#crossover_type = \"uniform\"\ncrossover_type = \"scattered\"\ncrossover_prob = 1.0\n\n# mutation type options: random, swap, inversion, scramble, adaptive\nmutation_type = \"random\"\n#mutation_type = \"swap\"\n#mutation_type = \"inversion\"\n#mutation_type = \"scramble\"\n#mutation_type = \"adaptive\"\nmutation_prob = 0.5\n\ndef callback_generation(ga_instance):\n i_gen = ga_instance.generations_completed\n if i_gen in [i for i in range(0, n_gen, int(n_gen*0.1))]:\n last_best = ga_instance.best_solutions[-1]\n print(\"generation = {generation}\".format(generation = i_gen))\n print(\"fitness = {fitness}\".format(fitness = get_fit(last_best)))",
"_____no_output_____"
]
],
[
[
"The ``GA run`` is performed in the next line.\n\n*The next two code runs may be skipped if output have already been saved. In this case, proceed to the loading lines.",
"_____no_output_____"
]
],
[
[
"# setup GA instance, for random initial pop.\nga_inst_uni_cc = pygad.GA(initial_population = init_uni,\n num_genes = num_genes,\n num_generations = n_gen,\n num_parents_mating = num_parents_mating,\n fitness_func = fitness_function,\n parent_selection_type = parent_selection_type,\n keep_parents = keep_parents,\n crossover_type = crossover_type,\n crossover_probability = crossover_prob,\n mutation_type = mutation_type,\n mutation_probability = mutation_prob,\n mutation_by_replacement = True,\n on_generation = callback_generation,\n gene_space = gene_space,\n save_best_solutions = True)\n\n# perform GA run\nga_inst_uni_cc.run()\n\n# save results\nga_inst_uni_cc.save('gp_ga_cc_uniform_init')\n\n# best solution\nsolution = ga_inst_uni_cc.best_solutions[-1]\nprint(\"best chromosome: {solution}\".format(solution = solution))\nprint(\"best fitness = {solution_fitness}\".format(solution_fitness = \\\n get_fit(solution)))",
"_____no_output_____"
]
],
[
[
"Next run creates a GA instance with the same parameters as with the previous run, but a a diversified initial population.",
"_____no_output_____"
]
],
[
[
"ga_inst_div_cc = pygad.GA(initial_population = init_div,\n num_genes = num_genes,\n num_generations = n_gen,\n num_parents_mating = num_parents_mating,\n fitness_func = fitness_function,\n parent_selection_type = parent_selection_type,\n keep_parents = keep_parents,\n crossover_type = crossover_type,\n crossover_probability = crossover_prob,\n mutation_type = mutation_type,\n mutation_probability = mutation_prob,\n mutation_by_replacement = True,\n on_generation = callback_generation,\n gene_space = gene_space,\n save_best_solutions = True)\n\n# perform GA run\nga_inst_div_cc.run()\n\n# save results\nga_inst_div_cc.save('gp_ga_cc_diverse_init')\n\n# best solution\nsolution = ga_inst_div_cc.best_solutions[-1]\nprint(\"best chromosome: {solution}\".format(solution = solution))\nprint(\"best fitness = {solution_fitness}\".format(solution_fitness = \\\n get_fit(solution)))",
"_____no_output_____"
]
],
[
[
"``Loading lines``\n\nWe can load the ``pygad`` results should they have been saved already in previous runs.",
"_____no_output_____"
]
],
[
[
"load_ga_uniform = pygad.load('gp_ga_cc_uniform_init')\nload_ga_diverse = pygad.load('gp_ga_cc_diverse_init')",
"_____no_output_____"
]
],
[
[
"We can view the prediction based on this superior individual below.",
"_____no_output_____"
]
],
[
[
"# champion chromosomes\nchr_1 = load_ga_uniform.best_solutions[-1]\nchr_2 = load_ga_diverse.best_solutions[-1]\n\nz_min = 0\nz_max = 3\nn_div = 1000\nz_rec = np.linspace(z_min, z_max, n_div)\n\nchamps = {}\nchamps['uniform'] = {'chromosome': chr_1}\nchamps['diverse'] = {'chromosome': chr_2}\n\nfor champ in champs:\n chromosome = champs[champ]['chromosome']\n gp = GP(chromosome)\n rec = gp.predict(z_cc, Hz_cc, np.diag(sigHz_cc**2),\n z_rec)\n Hz_rec, sigHz_rec = rec['Y'], np.sqrt(rec['varY'])\n H0 = Hz_rec[0]\n sigH0 = sigHz_rec[0]\n \n # compute chi2\n Hz = gp.predict(z_cc, Hz_cc, np.diag(sigHz_cc**2),\n z_cc)['Y']\n chi2 = np.sum(((Hz - Hz_cc)/sigHz_cc)**2)\n \n # print GA measures\n print(champ)\n print('H0 =', np.round(H0, 1), '+/-', np.round(sigH0, 1))\n print('log-marginal likelihood',\n gp.get_logmlike(z_cc, Hz_cc, np.diag(sigHz_cc**2)))\n print('penalty', penalty(chromosome))\n print('fitness function', get_fit(chromosome))\n print('chi^2', chi2)\n print()\n \n champs[champ]['z'] = z_rec\n champs[champ]['Hz'] = Hz_rec\n champs[champ]['sigHz'] = sigHz_rec\n\n# plot champs' predictions\n \nfig, ax = plt.subplots()\nax.errorbar(z_cc, Hz_cc, yerr = sigHz_cc,\n fmt = 'kx', ecolor = 'k',\n elinewidth = 1, capsize = 2, label = 'CC')\n# color, line style, and hatch list\nclst = ['b', 'r']\nllst = ['-', '--']\nhlst = ['|', '-']\nfor champ in champs:\n i = list(champs.keys()).index(champ)\n Hz_rec = champs[champ]['Hz']\n sigHz_rec = champs[champ]['sigHz']\n ax.plot(z_rec, Hz_rec, clst[i] + llst[i],\n label = champ)\n ax.fill_between(z_rec,\n Hz_rec - 2*sigHz_rec,\n Hz_rec + 2*sigHz_rec,\n facecolor = clst[i], alpha = 0.2,\n edgecolor = clst[i], hatch = hlst[i])\nax.set_xlabel('$z$')\nax.set_xlim(z_min, z_max)\nax.set_ylim(1, 370)\nax.set_ylabel('$H(z)$')\nax.legend(loc = 'upper left', prop = {'size': 9.5})\nplt.show()",
"uniform\nH0 = 69.7 +/- 6.3\nlog-marginal likelihood -123.7946112476086\npenalty 16.83647914993237\nfitness function -140.63109039754096\nchi^2 12.241263910472332\n\ndiverse\nH0 = 67.3 +/- 5.9\nlog-marginal likelihood -123.76904376987132\npenalty 16.83647914993237\nfitness function -140.6055229198037\nchi^2 12.30756281659826\n\n"
]
],
[
[
"A plot of the generation vs fitness can also be shown.",
"_____no_output_____"
]
],
[
[
"fit_uni = [get_fit(c) for c in load_ga_uniform.best_solutions]\nfit_div = [get_fit(c) for c in load_ga_diverse.best_solutions]\n\nfig, ax = plt.subplots()\nax.plot(fit_uni, 'b-', label = 'uniform')\nax.plot(fit_div, 'r--', label = 'diverse')\nax.set_xlabel('generation')\nax.set_ylabel('best fitness')\nax.set_xlim(1, n_gen)\nax.set_ylim(-141.0, -140.5)\nax.legend(loc = 'lower right', prop = {'size': 9.5})\nplt.show()",
"_____no_output_____"
]
],
[
[
"### 2. Supernovae Type Ia",
"_____no_output_____"
],
[
"In this section, we perform the GP reconstruction with the compressed Pantheon data set.",
"_____no_output_____"
]
],
[
[
"# load pantheon compressed m(z) data\nloc_lcparam = 'lcparam_DS17f.txt'\nloc_lcparam_sys = 'sys_DS17f.txt'\nlcparam = np.loadtxt(loc_lcparam, usecols = (1, 4, 5))\nlcparam_sys = np.loadtxt(loc_lcparam_sys, skiprows = 1)\n\n# setup pantheon samples\nz_ps = lcparam[:, 0]\nlogz_ps = np.log(z_ps)\nmz_ps = lcparam[:, 1]\nsigmz_ps = lcparam[:, 2]\n\n# pantheon samples systematics\ncovmz_ps_sys = lcparam_sys.reshape(40, 40)\ncovmz_ps_tot = covmz_ps_sys + np.diag(sigmz_ps**2)\n\n# plot data set\nplt.errorbar(logz_ps, mz_ps,\n yerr = np.sqrt(np.diag(covmz_ps_tot)),\n fmt = 'kx', markersize = 4,\n ecolor = 'red', elinewidth = 2, capsize = 2)\nplt.xlabel('$\\ln(z)$')\nplt.ylabel('$m(z)$')\nplt.show()",
"_____no_output_____"
]
],
[
[
"The fitness function, now taking in the SNe data set, is prepared below for the GA.",
"_____no_output_____"
]
],
[
[
"n_data = len(z_ps)\n\ndef get_fit(chromosome):\n '''Evaluates the fitness of the indivial with chromosome'''\n if all(hp > 0 for hp in chromosome) == True:\n pnl = penalty(chromosome)\n try:\n gp = GP(chromosome)\n lml = gp.get_logmlike(logz_ps, mz_ps,\n covmz_ps_tot)\n if np.isnan(lml) == False:\n return lml - pnl\n else:\n return -1000\n except:\n lml = -1000\n return lml\n else:\n lml = -1000\n return lml\n \ndef fitness_function(chromosome, chromosome_idx):\n return get_fit(chromosome)",
"_____no_output_____"
]
],
[
[
"Then, we setup the initial uniform and diverse kernel populations.",
"_____no_output_____"
]
],
[
[
"pop_size = 1000 # population size\n\ninit_uni = []\nfor i in range(0, pop_size):\n if i < int(pop_size/3):\n init_uni.append([unif(0, 200), unif(0, 100), unif(0, 5),\n 0, 0, 0, 0, 0, 0, 0])\n elif (i > int(pop_size/3)) and (i < int(2*pop_size/3)):\n init_uni.append([0, 0, 0,\n unif(0, 200), unif(0, 100), unif(0, 2), unif(0, 5),\n 0, 0, 0])\n else:\n init_uni.append([0, 0, 0, 0, 0, 0, 0,\n unif(0, 200), unif(0, 100), unif(0, 5)])\ninit_uni = np.array(init_uni)\n\ninit_div = []\nfor i in range(0, pop_size):\n init_div.append([unif(0, 200), unif(0, 100), unif(0, 5),\n unif(0, 200), unif(0, 100), unif(0, 2), unif(0, 5),\n unif(0, 200), unif(0, 100), unif(0, 5)])\ninit_div = np.array(init_div)",
"_____no_output_____"
]
],
[
[
"The GA parameters can now be set for the SNe fitting.",
"_____no_output_____"
]
],
[
[
"gene_space = [{'low': 0, 'high': 200}, {'low': 0, 'high': 100}, {'low': 0, 'high': 5}, # rbf lims\n {'low': 0, 'high': 200}, {'low': 0, 'high': 100}, # chy lims\n {'low': 0, 'high': 2}, {'low': 0, 'high': 5}, \n {'low': 0, 'high': 200}, {'low': 0, 'high': 100}, {'low': 0, 'high': 5}] # m52 lims\n\nnum_genes = 10 # length of chromosome\n\nn_gen = 100 # number of generations\nsel_rate = 0.3 # selection rate\n \n# parent selection\nparent_selection_type = \"rws\" # roulette wheel selection\nkeep_parents = int(sel_rate*pop_size)\nnum_parents_mating = int(sel_rate*pop_size)\n\n# crossover\n#crossover_type = \"single_point\"\n#crossover_type = \"two_points\"\n#crossover_type = \"uniform\"\ncrossover_type = \"scattered\"\ncrossover_prob = 1.0\n\n# mutation type options: random, swap, inversion, scramble, adaptive\nmutation_type = \"random\"\n#mutation_type = \"swap\"\n#mutation_type = \"inversion\"\n#mutation_type = \"scramble\"\n#mutation_type = \"adaptive\"\nmutation_prob = 0.5",
"_____no_output_____"
]
],
[
[
"Here are the ``GA runs``. We start with the uniform population.\n\n*Skip the runs and jump ahead to loading lines if results have already been prepared.",
"_____no_output_____"
]
],
[
[
"ga_inst_uni_sn = pygad.GA(initial_population = init_uni,\n num_genes = num_genes,\n num_generations = n_gen,\n num_parents_mating = num_parents_mating,\n fitness_func = fitness_function,\n parent_selection_type = parent_selection_type,\n keep_parents = keep_parents,\n crossover_type = crossover_type,\n crossover_probability = crossover_prob,\n mutation_type = mutation_type,\n mutation_probability = mutation_prob,\n mutation_by_replacement = True,\n on_generation = callback_generation,\n gene_space = gene_space,\n save_best_solutions = True)\n\n# perform GA run\nga_inst_uni_sn.run()\n\n# save results\nga_inst_uni_sn.save('gp_ga_sn_uniform_init')\n\n# best solution\nsolution = ga_inst_uni_sn.best_solutions[-1]\nprint(\"best chromosome: {solution}\".format(solution = solution))\nprint(\"best fitness = {solution_fitness}\".format(solution_fitness = \\\n get_fit(solution)))",
"_____no_output_____"
]
],
[
[
"Here is the GA run for a diversified initial population.",
"_____no_output_____"
]
],
[
[
"ga_inst_div_sn = pygad.GA(initial_population = init_div,\n num_genes = num_genes,\n num_generations = n_gen,\n num_parents_mating = num_parents_mating,\n fitness_func = fitness_function,\n parent_selection_type = parent_selection_type,\n keep_parents = keep_parents,\n crossover_type = crossover_type,\n crossover_probability = crossover_prob,\n mutation_type = mutation_type,\n mutation_probability = mutation_prob,\n mutation_by_replacement = True,\n on_generation = callback_generation,\n gene_space = gene_space,\n save_best_solutions = True)\n\n# perform GA run\nga_inst_div_sn.run()\n\n# save results\nga_inst_div_sn.save('gp_ga_sn_diverse_init')\n\n# best solution\nsolution = ga_inst_div_sn.best_solutions[-1]\nprint(\"best chromosome: {solution}\".format(solution = solution))\nprint(\"best fitness = {solution_fitness}\".format(solution_fitness = \\\n get_fit(solution)))",
"_____no_output_____"
]
],
[
[
"``Load GA runs``\n\nSaved ``pygad`` output can be accessed. This is done for the SNe runs below.",
"_____no_output_____"
]
],
[
[
"load_ga_uniform = pygad.load('gp_ga_sn_uniform_init')\nload_ga_diverse = pygad.load('gp_ga_sn_diverse_init')",
"_____no_output_____"
]
],
[
[
"The GP reconstructions are shown below.",
"_____no_output_____"
]
],
[
[
"# champion chromosomes\nchr_1 = load_ga_uniform.best_solutions[-1]\nchr_2 = load_ga_diverse.best_solutions[-1]\n\nz_min = 1e-5\nz_max = 3\nn_div = 1000\nz_rec = np.logspace(np.log(z_min), np.log(z_max), n_div)\nlogz_rec = np.log(z_rec)\n\nchamps = {}\nchamps['uniform'] = {'chromosome': chr_1}\nchamps['diverse'] = {'chromosome': chr_2}\n\nfor champ in champs:\n chromosome = champs[champ]['chromosome']\n gp = GP(chromosome)\n rec = gp.predict(logz_ps, mz_ps, covmz_ps_tot,\n logz_rec)\n mz_rec, sigmz_rec = rec['Y'], np.sqrt(rec['varY'])\n \n # compute chi2\n mz = gp.predict(logz_ps, mz_ps, covmz_ps_tot,\n logz_ps)['Y']\n cov_inv = np.linalg.inv(covmz_ps_tot)\n delta_H = mz - mz_ps\n chi2 = ( delta_H @ cov_inv @ delta_H )\n \n # print GA measures\n print(champ)\n print('log-marginal likelihood',\n gp.get_logmlike(logz_ps, mz_ps, covmz_ps_tot))\n print('penalty', penalty(chromosome))\n print('fitness function', get_fit(chromosome))\n print('chi^2', chi2)\n print()\n \n champs[champ]['logz'] = logz_rec\n champs[champ]['mz'] = mz_rec\n champs[champ]['sigmz'] = sigmz_rec\n\n# plot champs' predictions\n \nfig, ax = plt.subplots()\nax.errorbar(logz_ps, mz_ps,\n yerr = np.sqrt(np.diag(covmz_ps_tot)),\n fmt = 'kx', ecolor = 'k',\n elinewidth = 1, capsize = 2, label = 'SNe')\n# color, line style, and hatch list\nclst = ['b', 'r']\nllst = ['-', '--']\nhlst = ['|', '-']\nfor champ in champs:\n i = list(champs.keys()).index(champ)\n mz_rec = champs[champ]['mz']\n sigmz_rec = champs[champ]['sigmz']\n ax.plot(logz_rec, mz_rec, clst[i] + llst[i],\n label = champ)\n ax.fill_between(logz_rec,\n mz_rec - 2*sigmz_rec,\n mz_rec + 2*sigmz_rec,\n facecolor = clst[i], alpha = 0.2,\n edgecolor = clst[i], hatch = hlst[i])\nax.set_xlabel('$\\ln(z)$')\nax.set_ylabel('$m(z)$')\nax.set_xlim(np.log(z_min), np.log(z_max))\nax.set_ylim(-10, 30)\nax.legend(loc = 'upper left', prop = {'size': 9.5})\nplt.show()",
"uniform\nlog-marginal likelihood 62.18631886438317\npenalty 18.44439727056968\nfitness function 43.74192159381349\nchi^2 35.429040731552476\n\ndiverse\nlog-marginal likelihood 62.55037313828511\npenalty 18.44439727056968\nfitness function 44.10597586771543\nchi^2 34.62680636352342\n\n"
]
],
[
[
"Here is the fitness per generation for the GPs above. ",
"_____no_output_____"
]
],
[
[
"fit_uni = [get_fit(c) for c in load_ga_uniform.best_solutions]\nfit_div = [get_fit(c) for c in load_ga_diverse.best_solutions]\n\nfig, ax = plt.subplots()\nax.plot(fit_uni, 'b-', label = 'uniform')\nax.plot(fit_div, 'r--', label = 'diverse')\nax.set_xlabel('generation')\nax.set_ylabel('best fitness')\nax.set_xlim(1, n_gen)\nax.set_ylim(41, 45)\nax.legend(loc = 'lower right', prop = {'size': 9.5})\nplt.show()",
"_____no_output_____"
]
],
[
[
"### References",
"_____no_output_____"
],
[
"***Pantheon***: D. M. Scolnic et al., The Complete Light-curve Sample of Spectroscopically Confirmed SNe Ia\nfrom Pan-STARRS1 and Cosmological Constraints from the Combined Pantheon Sample,\nAstrophys. J. 859 (2018) 101 [[1710.00845](https://arxiv.org/abs/1710.00845)].\n\n***Cosmic Chronometers***, from *various sources*:\n\n(1) M. Moresco, L. Pozzetti, A. Cimatti, R. Jimenez, C. Maraston, L. Verde et al., A 6%\nmeasurement of the Hubble parameter at z ∼ 0.45: direct evidence of the epoch of cosmic\nre-acceleration, JCAP 05 (2016) 014 [[1601.01701](https://arxiv.org/abs/1601.01701)].\n\n(2) M. Moresco, Raising the bar: new constraints on the Hubble parameter with cosmic\nchronometers at z ∼ 2, Mon. Not. Roy. Astron. Soc. 450 (2015) L16 [[1503.01116](https://arxiv.org/abs/1503.01116)].\n\n(3) C. Zhang, H. Zhang, S. Yuan, S. Liu, T.-J. Zhang and Y.-C. Sun, Four new observational H(z)\ndata from luminous red galaxies in the Sloan Digital Sky Survey data release seven, Research in\nAstronomy and Astrophysics 14 (2014) 1221 [[1207.4541](https://arxiv.org/abs/1207.4541)].\n\n(4) D. Stern, R. Jimenez, L. Verde, M. Kamionkowski and S. A. Stanford, Cosmic chronometers:\nconstraining the equation of state of dark energy. I: H(z) measurements, JCAP 2010 (2010)\n008 [[0907.3149](https://arxiv.org/abs/0907.3149)].\n\n(5) M. Moresco et al., Improved constraints on the expansion rate of the Universe up to z ˜1.1 from\nthe spectroscopic evolution of cosmic chronometers, JCAP 2012 (2012) 006 [[1201.3609](https://arxiv.org/abs/1201.3609)].\n\n(6) Ratsimbazafy et al. Age-dating Luminous Red Galaxies observed with the Southern African\nLarge Telescope, Mon. Not. Roy. Astron. Soc. 467 (2017) 3239 [[1702.00418](https://arxiv.org/abs/1702.00418)].",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0bed49c614b03571f60005dafcb30131bd4581e | 5,371 | ipynb | Jupyter Notebook | 11_declaraciones_y_bloques_de_codigo.ipynb | OscarLpzL/CursoPython | acff7d2cb45798353faf6c5661c888df240ba625 | [
"MIT"
] | null | null | null | 11_declaraciones_y_bloques_de_codigo.ipynb | OscarLpzL/CursoPython | acff7d2cb45798353faf6c5661c888df240ba625 | [
"MIT"
] | null | null | null | 11_declaraciones_y_bloques_de_codigo.ipynb | OscarLpzL/CursoPython | acff7d2cb45798353faf6c5661c888df240ba625 | [
"MIT"
] | null | null | null | 31.970238 | 405 | 0.59784 | [
[
[
"[](https://www.pythonista.io)",
"_____no_output_____"
],
[
"# Declaraciones y bloques de código.",
"_____no_output_____"
],
[
"## Flujo de ejecución del código.\n\nEl intérprete de Python es capaz de leer, evaluar y ejecutar una sucesión de instrucciones línea por línea de principio a fin. A esto se le conoce copmo flujo de ejecución de código.\n\nLos lenguajes de programacion modernos pueden ejecutar o no porciones de código dependiendo de ciertas condiciones. Estas prociondes de código también se conocen como \"bloques\" y deben de ser delimitados sintácticamente. \n\nAsí como algunos lenguajes de programación identifican el final de una expresión mendiante el uso del punto y coma ```;```, también suelen delimitar bloques de código encerrándolos entre llaves ```{``` ```}```.\n\n**Ejemplo:**\n\n* El siguiente código ejemplifca el uso de llaves en un código simple de Javascript.\n\n\n```javascript\n\nfor (let i = 1; i <= 10; i++) {\n console.log(i);\n}\nconsole.log(\"Esta es la línea final.\");\n```",
"_____no_output_____"
],
[
"## Declaraciones.\n\nLas declaraciones (statements) son expresiones capaces de contener a un bloque de código, las cuales se ejecutarán e incluso repetirán en caso de que se cumplan ciertas condiciones.\n\nLa inmensa mayoría de los lenguajes de programación utilizan a las declaraciones como parte fundamental de su estructura de código. \n\nEn el caso de Python, se colocan dos puntos ```:``` al final de la línea que define a una declaración y se indenta el código que pertence a dicha declaración.\n\n```\n<flujo principal>\n...\n...\n<declaración>:\n <bloque de código>\n<flujo principal>\n```",
"_____no_output_____"
],
[
"### Indentación.\n\nEs una buena práctica entre programadores usar la indentación (dejar espacios o tabualdores antes de ingresar el código en una línea) como una regla de estilo a fin de poder identificar visualmente los bloques de código.\n\nEn el caso de Python , la indentación no sólo es una regla de estilo, sino un elemento sintáctico, por lo que en vez de encerrar un bloque de código entre llaves, un bloque de código se define indentándolo. El PEP-8 indica que la indentación correcta es de cuatro espacios y no se usan tabuladores.",
"_____no_output_____"
],
[
"**Ejemplo:**",
"_____no_output_____"
],
[
"* La siguiente celda ejemplifica del uso de indentación en Python para delimitar bloques de código.",
"_____no_output_____"
]
],
[
[
"for i in range (1, 11):\n print(i)\nprint('Esta es la línea final.')",
"_____no_output_____"
]
],
[
[
"### Declaraciones anidadas.\n\nEs muy común que el código incluya declaraciones dentro de otras declaraciones, por lo que para delimitar el códifgo dentro de una declaración anidada se utiliza la misma regla de indentación dejando 4 espacios adicionales.",
"_____no_output_____"
],
[
"**Ejemplo:**",
"_____no_output_____"
]
],
[
[
"'''Esta celda realizará una iteración de números en el rango \nentre el ```1``` y ```10``` y mostrará un mensaje dependiendo\nsi el número es par o non.'''\n\nfor i in range (1, 11):\n if i % 2 == 0:\n print ('El número %d es par.' %i)\n else:\n print ('El número %d es non.' %i)\nprint('Esta es la línea final.')",
"_____no_output_____"
]
],
[
[
"<p style=\"text-align: center\"><a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Licencia Creative Commons\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/80x15.png\" /></a><br />Esta obra está bajo una <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Licencia Creative Commons Atribución 4.0 Internacional</a>.</p>\n<p style=\"text-align: center\">© José Luis Chiquete Valdivieso. 2021.</p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0beeb1fbcbb461671999346b2958ee383697dc4 | 10,289 | ipynb | Jupyter Notebook | examples/dates/SeriesDtMethodTransformer.ipynb | munichpavel/tubular | 53e277dea2cc869702f2ed49f2b495bf79b92355 | [
"BSD-3-Clause"
] | null | null | null | examples/dates/SeriesDtMethodTransformer.ipynb | munichpavel/tubular | 53e277dea2cc869702f2ed49f2b495bf79b92355 | [
"BSD-3-Clause"
] | null | null | null | examples/dates/SeriesDtMethodTransformer.ipynb | munichpavel/tubular | 53e277dea2cc869702f2ed49f2b495bf79b92355 | [
"BSD-3-Clause"
] | null | null | null | 26.048101 | 332 | 0.463602 | [
[
[
"# SeriesDtMethodTransformer\nThis notebook shows the functionality in the `SeriesDtMethodTransformer` class. This transformer applys a `pd.Series.dt` method to a specific column in the input `X`. <br>\nThis generic transformer means that many `pd.Series.dt` methods are available for use within the package without having to directly implement a transformer for each specific function. <br>\nMost of the `pd.Series.dt` methods simply access attributes e.g. [pd.Series.dt.year](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.year.html) with a few actually being callable e.g. [pd.Series.dt.to_period](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_period.html).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"import tubular\nfrom tubular.dates import SeriesDtMethodTransformer",
"_____no_output_____"
],
[
"tubular.__version__",
"_____no_output_____"
]
],
[
[
"## Load dummy dataset\nNote, the load_boston script modifies the original Boston dataset to include nulls values and pandas categorical dtypes.",
"_____no_output_____"
]
],
[
[
"df = tubular.testing.test_data.create_datediff_test_df()",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
]
],
[
[
"## Simple usage",
"_____no_output_____"
],
[
"### Initialising SeriesDtMethodTransformer",
"_____no_output_____"
],
[
"The user must specify the following; <br>\n- `new_column_name` the name of the column to assign the outputs of the `pd.Series.str` method to <br> \n- `pd_method_name` the name of the `pd.Series.dt` method to be called <br>\n- `column` the column in the `DataFrame` passed to the `transform` method to be transformed <br>\n- `pd_method_kwargs` a dictionary of keyword arguments that are passed to the `pd.Series.dt` method when called, only applicable if the method is `callable`, otherwise will be ignored <br>",
"_____no_output_____"
]
],
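  [
   [
    "For a callable method, `pd_method_kwargs` is passed through to the call. A hypothetical sketch (not run in this notebook) of configuring the transformer to call `pd.Series.dt.to_period` with a frequency keyword:\n\n```python\nperiod_transformer = SeriesDtMethodTransformer(\n    column = 'a', \n    pd_method_name = 'to_period',\n    new_column_name = 'a_period',\n    pd_method_kwargs = {'freq': 'M'}\n)\n```",
    "_____no_output_____"
   ]
  ],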
[
[
"month_transformer = SeriesDtMethodTransformer(\n column = 'a', \n pd_method_name = 'month',\n new_column_name = 'a_month'\n)",
"_____no_output_____"
]
],
[
[
"### SeriesDtMethodTransformer fit\nThere is no fit method for the `SeriesDtMethodTransformer` as the methods that it can run do not 'learn' anything from the data.",
"_____no_output_____"
],
[
"### SeriesDtMethodTransformer transform\nWhen running transform with this configuration a new column `a_month` is added to the input `X` which is the result or running `df['a'].dt.month`.",
"_____no_output_____"
]
],
[
[
"df_2 = month_transformer.transform(df)",
"_____no_output_____"
],
[
"df_2[['a', 'a_month']].head()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0befa10d80dac5a828e47c632fe820c909b39b9 | 170,106 | ipynb | Jupyter Notebook | LL_Phase_1.ipynb | Aficionado45/IM-Fault-Detection | 3527b8362fdce7129dd3fe8116f4d80c5c3af1d4 | [
"MIT"
] | 1 | 2022-02-07T05:03:57.000Z | 2022-02-07T05:03:57.000Z | LL_Phase_1.ipynb | Aficionado45/IM-Fault-Detection | 3527b8362fdce7129dd3fe8116f4d80c5c3af1d4 | [
"MIT"
] | null | null | null | LL_Phase_1.ipynb | Aficionado45/IM-Fault-Detection | 3527b8362fdce7129dd3fe8116f4d80c5c3af1d4 | [
"MIT"
] | null | null | null | 32.010915 | 216 | 0.507689 | [
[
[
"### Testing accuracy of RF classifier for lightly loaded, testing and training with all the rotational speeds",
"_____no_output_____"
]
],
[
[
"from jupyterthemes import get_themes\nimport jupyterthemes as jt\nfrom jupyterthemes.stylefx import set_nb_theme\n\nset_nb_theme('chesterish')",
"_____no_output_____"
],
[
"import pandas as pd",
"_____no_output_____"
],
[
"data_10=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train10hz.csv')\ndata_20=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train20hz.csv')\ndata_30=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train30Hz.csv')\ndata_15=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train15hz.csv')\ndata_25=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train25hz.csv')\ndata_35=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train35Hz.csv')\ndata_40=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Train40Hz.csv')",
"_____no_output_____"
],
[
"data_10=data_10.head(44990)\ndata_15=data_15.head(44990)\ndata_20=data_20.head(44990)\ndata_25=data_25.head(44990)\ndata_30=data_30.head(44990)\ndata_35=data_35.head(44990)\ndata_40=data_40.head(44990)",
"_____no_output_____"
],
[
"data_25.head",
"_____no_output_____"
],
[
"#shuffling\ndata_20=data_20.sample(frac=1)\ndata_30=data_30.sample(frac=1)\ndata_40=data_40.sample(frac=1)\ndata_10=data_10.sample(frac=1)\ndata_15=data_15.sample(frac=1)\ndata_25=data_25.sample(frac=1)\ndata_35=data_35.sample(frac=1)",
"_____no_output_____"
],
[
"data_25.head",
"_____no_output_____"
],
[
"import sklearn as sk",
"_____no_output_____"
]
],
[
[
"### Assigning X and y for training",
"_____no_output_____"
]
],
[
[
"dataset_10=data_10.values\nX_10= dataset_10[:,0:9]\nprint(X_10)\ny_10=dataset_10[:,9]\nprint(y_10)",
"[[ 5.085000e+00 -1.707228e+00 6.246600e-02 ... 1.432430e-01\n -3.060000e-04 5.167400e-02]\n [ 1.129900e+00 -2.264587e+00 -2.125112e+00 ... 9.209500e-02\n -5.230000e-04 7.830000e-03]\n [ 9.996950e+00 2.665160e-01 -6.387330e-01 ... -1.365330e-01\n -5.360000e-04 -2.994500e-02]\n ...\n [ 1.207200e+00 -9.250320e-01 -1.818652e+00 ... -1.519890e-01\n -1.390000e-04 6.438000e-03]\n [ 1.065600e+00 -2.914142e+00 -2.603219e+00 ... 2.164220e-01\n -3.250000e-04 5.515400e-02]\n [ 2.662200e+00 -3.081750e-01 -2.236965e+00 ... -3.668300e-02\n -4.300000e-04 -7.643100e-02]]\n[2. 0. 3. ... 2. 2. 1.]\n"
],
[
"dataset_15=data_15.values\nX_15= dataset_15[:,0:9]\nprint(X_15)\ny_15=dataset_15[:,9]\nprint(y_15)",
"[[ 1.206950e+00 5.416310e-01 7.948510e-01 ... -2.216420e-01\n -1.065000e-03 2.374000e-03]\n [ 9.973550e+00 -6.526800e-02 7.123430e-01 ... 1.843410e-01\n 2.543000e-03 -2.515200e-02]\n [ 1.241700e+00 6.011280e-01 -1.698450e+00 ... 2.108030e-01\n -5.020000e-04 -4.509600e-02]\n ...\n [ 1.139700e+00 3.278570e-01 2.858298e+00 ... -2.113930e-01\n 1.490000e-03 -1.438000e-03]\n [ 1.179350e+00 -1.784796e+00 1.856288e+00 ... 1.607360e-01\n -3.800000e-04 -7.120300e-02]\n [ 1.168300e+00 4.705228e+00 9.671000e-03 ... 5.926900e-02\n -3.060000e-04 -3.767500e-02]]\n[0. 0. 0. ... 3. 3. 1.]\n"
],
[
"dataset_20=data_20.values\nX_20= dataset_20[:,0:9]\nprint(X_20)\ny_20=dataset_20[:,9]\nprint(y_20)",
"[[ 1.2344500e+00 -3.4222600e-01 -4.5284300e-01 ... 7.0553000e-02\n -4.6000000e-04 1.0201200e-01]\n [ 1.2510950e+01 1.0227740e+00 1.0982410e+00 ... 1.2791800e-01\n -9.5900000e-04 1.2842100e-01]\n [ 1.0294500e+00 -1.2169200e-01 -1.3180258e+01 ... 2.0500300e-01\n -2.8700000e-04 3.9176000e-02]\n ...\n [ 1.2101500e+00 1.5878780e+00 2.4058520e+00 ... -6.1962000e-02\n 1.4540000e-03 7.1056000e-02]\n [ 1.0334500e+00 -3.0818170e+00 3.7764400e-01 ... 1.5896200e-01\n -1.1900000e-03 -1.7939000e-02]\n [ 2.6943500e+00 -3.0298180e+00 -3.0449830e+00 ... 1.2097300e-01\n 7.6700000e-04 1.9543100e-01]]\n[4. 3. 1. ... 4. 3. 1.]\n"
],
[
"dataset_25=data_25.values\nX_25= dataset_25[:,0:9]\nprint(X_25)\ny_25=dataset_25[:,9]\nprint(y_25)",
"[[ 2.2757400e+01 -2.6387820e+00 5.1920900e-01 ... -2.3064200e-01\n -1.1050000e-03 -4.2601000e-02]\n [ 1.1189000e+00 -3.4519550e+00 -1.1227927e+01 ... -1.8113800e-01\n -2.0800000e-04 1.5830300e-01]\n [ 1.1831500e+00 2.6126120e+00 -5.0281320e+00 ... 2.2829500e-01\n -6.1000000e-04 -1.2593200e-01]\n ...\n [ 1.2519100e+01 1.0532600e+00 7.3638480e+00 ... 2.3148800e-01\n 7.6600000e-04 4.1550000e-03]\n [ 2.2905900e+01 -2.7195460e+00 6.8334360e+00 ... -1.9495600e-01\n -5.7900000e-04 8.7372000e-02]\n [ 1.0548500e+00 4.6446240e+00 9.2954100e-01 ... -2.4461200e-01\n 2.2570000e-03 3.7807000e-02]]\n[3. 1. 4. ... 1. 0. 0.]\n"
],
[
"dataset_30=data_30.values\nX_30= dataset_30[:,0:9]\nprint(X_30)\ny_30=dataset_30[:,9]\nprint(y_30)",
"[[ 2.5115500e+00 -5.7290880e+00 -1.3638740e+00 ... 4.0533000e-02\n -4.6800000e-04 -6.3050000e-03]\n [ 1.0827000e+00 -2.6000600e+00 2.5565040e+00 ... -1.6953300e-01\n 2.7490000e-03 -2.3015700e-01]\n [ 2.6536000e+00 1.3286210e+00 7.9533000e-02 ... -2.3207200e-01\n -4.3400000e-04 -2.1344000e-02]\n ...\n [ 1.2654450e+01 -6.8439290e+00 2.7389400e-01 ... 1.3921700e-01\n -6.8300000e-04 -2.5010700e-01]\n [ 1.1869000e+00 1.4193420e+00 1.5044621e+01 ... -3.4181000e-02\n -2.7100000e-04 1.8717000e-02]\n [ 1.0338500e+00 -2.9479470e+00 1.9802960e+00 ... -1.3068600e-01\n -4.8800000e-04 7.1684000e-02]]\n[2. 0. 2. ... 0. 1. 4.]\n"
],
[
"dataset_35=data_35.values\nX_35= dataset_35[:,0:9]\nprint(X_35)\ny_35=dataset_35[:,9]\nprint(y_35)",
"[[ 1.182350e+00 4.193967e+00 4.230983e+00 ... -1.793500e-01\n -2.580000e-04 -2.314590e-01]\n [ 1.168200e+00 2.092992e+00 7.382878e+00 ... 1.138160e-01\n -3.820000e-04 -6.749700e-02]\n [ 1.243150e+00 6.022350e-01 -1.963901e+00 ... -2.342570e-01\n 9.590000e-04 1.766870e-01]\n ...\n [ 1.267945e+01 -2.644560e+00 -4.413020e-01 ... 2.186320e-01\n -7.810000e-04 2.516990e-01]\n [ 2.729550e+00 -4.012142e+00 -1.812881e+00 ... 4.241100e-02\n 8.560000e-04 -9.895400e-02]\n [ 1.060250e+00 -4.106551e+00 -5.288917e+00 ... -2.001160e-01\n -2.630000e-04 3.047600e-02]]\n[2. 1. 2. ... 0. 1. 1.]\n"
],
[
"dataset_40=data_40.values\nX_40= dataset_40[:,0:9]\nprint(X_40)\ny_40=dataset_40[:,9]\nprint(y_40)",
"[[ 1.027550e+00 4.547200e-01 7.451636e+00 ... -2.186170e-01\n 8.330000e-04 -1.628680e-01]\n [ 1.026400e+00 1.122346e+00 -1.123838e+00 ... 9.845900e-02\n -9.220000e-04 -1.297100e-02]\n [ 4.594950e+00 -3.482196e+00 -5.994413e+00 ... 3.765800e-02\n 3.358000e-03 -1.788950e-01]\n ...\n [ 1.244615e+01 4.545544e+00 -8.181009e+00 ... 6.341400e-02\n -2.480000e-04 -2.883800e-02]\n [ 1.153700e+00 -1.391793e+00 -1.089951e+00 ... 2.341350e-01\n -9.450000e-04 -4.838600e-02]\n [ 1.222200e+00 -6.719402e+00 -4.400356e+00 ... 2.135660e-01\n -5.040000e-04 6.016100e-02]]\n[1. 0. 3. ... 1. 0. 2.]\n"
]
],
[
[
"### Training Random Forest Classifier",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nrf_10 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_10.fit(X_10, y_10);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_15 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_15.fit(X_15, y_15);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_20 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_20.fit(X_20, y_20);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_25 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_25.fit(X_25, y_25);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_30 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_30.fit(X_30, y_30);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_35 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_35.fit(X_35, y_35);",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nrf_40 = RandomForestClassifier(n_estimators = 1000, random_state = 42)\nrf_40.fit(X_40, y_40);",
"_____no_output_____"
]
],
[
[
"### Importing Testing data",
"_____no_output_____"
]
],
[
[
"test_10=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test10hz.csv')\ntest_20=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test20hz.csv')\ntest_30=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test30Hz.csv')\ntest_15=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test15hz.csv')\ntest_25=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test25hz.csv')\ntest_35=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test35Hz.csv')\ntest_40=pd.read_csv(r'D:\\Acads\\BTP\\Lightly Loaded\\Test40Hz.csv')",
"_____no_output_____"
],
[
"test_10=test_10.head(99990)\ntest_15=test_15.head(99990)\ntest_20=test_20.head(99990)\ntest_25=test_25.head(99990)\ntest_30=test_30.head(99990)\ntest_35=test_35.head(99990)\ntest_40=test_40.head(99990)",
"_____no_output_____"
],
[
"#shuffling\ntest_20=test_20.sample(frac=1)\ntest_30=test_30.sample(frac=1)\ntest_40=test_40.sample(frac=1)\ntest_10=test_10.sample(frac=1)\ntest_15=test_15.sample(frac=1)\ntest_25=test_25.sample(frac=1)\ntest_35=test_35.sample(frac=1)",
"_____no_output_____"
]
],
[
[
"### Assigning X and y for testing",
"_____no_output_____"
]
],
[
[
"dataset_test_10 = test_10.values\nX_test_10 = dataset_test_10[:,0:9]\nprint(X_test_10)\n\ny_test_10= dataset_test_10[:,9]\nprint(y_test_10)",
"[[ 5.242000e-01 2.735230e-01 5.153800e-02 ... -3.137600e-02\n 2.073000e-03 7.139000e-03]\n [ 7.577000e-01 -8.405800e-01 -1.463571e+00 ... 2.037150e-01\n -5.350000e-04 7.330000e-04]\n [ 5.345000e-02 -3.146200e-02 -2.847552e+00 ... 2.079200e-02\n -5.130000e-04 -5.458500e-02]\n ...\n [ 7.888500e-01 -8.252140e-01 -1.401322e+00 ... 5.255300e-02\n 1.126000e-03 -3.020900e-02]\n [ 8.341000e-01 -1.718906e+00 -4.401970e-01 ... 2.091240e-01\n -3.000000e-04 -2.487700e-02]\n [ 7.106500e-01 1.508589e+00 -6.054590e-01 ... -1.643150e-01\n -2.580000e-04 1.386500e-02]]\n[0. 3. 1. ... 2. 1. 0.]\n"
],
[
"dataset_test_15 = test_15.values\nX_test_15 = dataset_test_15[:,0:9]\nprint(X_test_15)\n\ny_test_15= dataset_test_15[:,9]\nprint(y_test_15)",
"[[ 9.310000e-02 1.191432e+00 1.065951e+00 ... -6.482000e-02\n 1.253000e-03 -1.111800e-01]\n [ 5.108000e-01 -8.521350e-01 6.375690e-01 ... 2.166050e-01\n -8.960000e-04 -5.141000e-03]\n [ 8.508000e-01 2.587903e+00 -3.101339e+00 ... 2.075780e-01\n -1.220000e-03 4.961800e-02]\n ...\n [ 1.078500e-01 -5.073200e-01 -7.300810e-01 ... 2.120510e-01\n 9.840000e-04 -2.735700e-02]\n [ 9.747000e-01 -1.268126e+00 -2.169436e+00 ... 2.234050e-01\n -4.520000e-04 1.972200e-02]\n [ 7.856500e-01 7.555270e-01 6.087160e-01 ... 1.548390e-01\n -3.380000e-04 -4.104100e-02]]\n[2. 3. 0. ... 2. 2. 2.]\n"
],
[
"dataset_test_20 = test_20.values\nX_test_20 = dataset_test_20[:,0:9]\nprint(X_test_20)\n\ny_test_20= dataset_test_20[:,9]\nprint(y_test_20)",
"[[ 5.245500e-01 4.036741e+00 1.846343e+00 ... 2.204160e-01\n -4.600000e-04 8.449700e-02]\n [ 8.379500e-01 -6.125470e-01 -9.297220e-01 ... 7.865200e-02\n -8.400000e-04 9.840000e-03]\n [ 5.791500e-01 -3.755344e+00 -2.086191e+00 ... 2.168840e-01\n -4.910000e-04 -1.639200e-02]\n ...\n [ 7.150000e-01 -6.048020e-01 -4.252896e+00 ... -9.831000e-03\n -1.211000e-03 -9.982600e-02]\n [ 2.557500e-01 -2.465699e+00 -2.071457e+00 ... -2.322560e-01\n -9.590000e-04 -9.629000e-02]\n [ 4.954000e-01 -1.780002e+00 -2.858970e+00 ... -1.081930e-01\n 2.446000e-03 7.495700e-02]]\n[2. 3. 2. ... 3. 3. 3.]\n"
],
[
"dataset_test_25 = test_25.values\nX_test_25 = dataset_test_25[:,0:9]\nprint(X_test_25)\n\ny_test_25= dataset_test_25[:,9]\nprint(y_test_25)",
"[[ 9.400500e-01 -8.020111e+00 -5.245944e+00 ... 1.828050e-01\n -3.550000e-04 4.339500e-02]\n [ 3.325000e-02 -6.542999e+00 -2.229844e+00 ... -2.195170e-01\n -2.940000e-04 -1.188540e-01]\n [ 4.270500e-01 -9.489400e-02 -4.414844e+00 ... 1.941880e-01\n 1.389000e-03 -8.942800e-02]\n ...\n [ 3.215000e-02 -4.435878e+00 -6.415550e+00 ... -7.151500e-02\n -2.630000e-04 -4.566600e-02]\n [ 4.624000e-01 -3.592586e+00 -2.097241e+00 ... -6.214300e-02\n 2.769000e-03 -1.469430e-01]\n [ 1.494000e-01 2.939602e+00 4.161735e+00 ... 1.523630e-01\n -5.350000e-04 2.117000e-02]]\n[1. 2. 4. ... 1. 3. 4.]\n"
],
[
"dataset_test_30 = test_30.values\nX_test_30 = dataset_test_30[:,0:9]\nprint(X_test_30)\n\ny_test_30= dataset_test_30[:,9]\nprint(y_test_30)",
"[[ 4.211500e-01 3.280975e+00 1.756467e+00 ... -8.645000e-03\n -3.680000e-04 -2.126710e-01]\n [ 2.559500e-01 4.166800e+00 6.938782e+00 ... -8.604600e-02\n -3.140000e-04 2.275240e-01]\n [ 4.722500e-01 -6.410660e-01 1.192537e+00 ... 1.517400e-01\n 3.165000e-03 -1.712610e-01]\n ...\n [ 7.096500e-01 6.644675e+00 5.647009e+00 ... 1.723670e-01\n -5.360000e-04 -5.488900e-02]\n [ 3.110000e-02 -7.973399e+00 -1.703239e+00 ... 5.423400e-02\n -8.640000e-04 1.334950e-01]\n [ 9.180000e-01 -3.079290e-01 -5.772917e+00 ... 3.031000e-03\n 3.446000e-03 -1.058300e-01]]\n[4. 1. 3. ... 2. 0. 3.]\n"
],
[
"dataset_test_35 = test_35.values\nX_test_35 = dataset_test_35[:,0:9]\nprint(X_test_35)\n\ny_test_35= dataset_test_35[:,9]\nprint(y_test_35)",
"[[ 1.9210000e-01 9.9652270e+00 7.4246240e+00 ... -2.5363300e-01\n 1.1190000e-03 -1.6205800e-01]\n [ 5.1065000e-01 -6.3516600e-01 -7.4485020e+00 ... -1.2424600e-01\n -3.1800000e-04 -7.2538000e-02]\n [ 8.3700000e-02 -3.6407740e+00 3.1929980e+00 ... -1.5626000e-02\n -3.9100000e-04 8.4166000e-02]\n ...\n [ 3.3370000e-01 4.0451010e+00 1.4296260e+00 ... 1.7250900e-01\n -1.4720000e-03 -1.8749000e-01]\n [ 9.6190000e-01 2.1382300e+00 1.5793581e+01 ... 2.5179800e-01\n 8.9700000e-04 -1.1528000e-02]\n [ 1.2235000e-01 7.4616600e+00 3.8102150e+00 ... -4.5536000e-02\n -2.8300000e-04 3.7073000e-02]]\n[4. 4. 4. ... 3. 1. 2.]\n"
],
[
"dataset_test_40 = test_40.values\nX_test_40 = dataset_test_40[:,0:9]\nprint(X_test_40)\n\ny_test_40= dataset_test_40[:,9]\nprint(y_test_40)",
"[[ 5.9940000e-01 2.2963700e-01 -2.8239780e+00 ... -2.4300000e-02\n -3.6200000e-04 2.4489600e-01]\n [ 7.3290000e-01 -9.8103230e+00 -7.1901720e+00 ... 2.5711800e-01\n -6.0500000e-04 -7.1654000e-02]\n [ 2.1705000e-01 -3.7397320e+00 4.0531970e+00 ... 1.9575700e-01\n -1.3330000e-03 -8.8163000e-02]\n ...\n [ 2.5000000e-01 1.0901083e+01 7.0995010e+00 ... 1.2544300e-01\n -4.1500000e-04 2.2458200e-01]\n [ 8.2825000e-01 -5.1418580e+00 1.8279100e-01 ... 1.5684400e-01\n -7.5200000e-04 2.0937300e-01]\n [ 3.8300000e-02 -1.1228797e+01 1.1330514e+01 ... 3.3639000e-02\n -3.0800000e-04 8.7114000e-02]]\n[1. 0. 3. ... 2. 0. 1.]\n"
]
],
[
[
"### Predictions with 10Hz Trained Model",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"predictions_10 = rf_10.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 10Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 10Hz: 1.697 degrees.\nAccuracy: 98.303 %.\n"
],
[
"predictions_15 = rf_10.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 10Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 10Hz: 1.536 degrees.\nAccuracy: 98.464 %.\n"
],
[
"predictions_20 = rf_10.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 10Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 10Hz: 1.515 degrees.\nAccuracy: 98.485 %.\n"
],
[
"predictions_25 = rf_10.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 10Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 10Hz: 1.507 degrees.\nAccuracy: 98.493 %.\n"
],
[
"predictions_30 = rf_10.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 10Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 10Hz: 1.528 degrees.\nAccuracy: 98.472 %.\n"
],
[
"predictions_35 = rf_10.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 10Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 10Hz: 1.55 degrees.\nAccuracy: 98.45 %.\n"
],
[
"predictions_40 = rf_10.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 10Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 10Hz: 1.541 degrees.\nAccuracy: 98.459 %.\n"
]
],
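  [
   [
    "Since the labels here are integer-encoded fault classes rather than continuous values, plain classification accuracy is a more direct check than the mean absolute error printed above. A minimal sketch, assuming the same fitted `rf_10` model and the 10Hz test arrays:\n\n```python\nfrom sklearn.metrics import accuracy_score\n\nacc = accuracy_score(y_test_10, rf_10.predict(X_test_10))\nprint('Classification accuracy 10Hz with 10Hz:', round(100 * acc, 3), '%')\n```",
    "_____no_output_____"
   ]
  ],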
[
[
"### Predictions with 15Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_15.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 15Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 15Hz: 1.385 degrees.\nAccuracy: 98.615 %.\n"
],
[
"predictions_15 = rf_15.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 15Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 15Hz: 1.241 degrees.\nAccuracy: 98.759 %.\n"
],
[
"predictions_20 = rf_15.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 15Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 15Hz: 1.231 degrees.\nAccuracy: 98.769 %.\n"
],
[
"predictions_25 = rf_15.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 15Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 15Hz: 1.225 degrees.\nAccuracy: 98.775 %.\n"
],
[
"predictions_30 = rf_15.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 15Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 15Hz: 1.205 degrees.\nAccuracy: 98.795 %.\n"
],
[
"predictions_35 = rf_15.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 15Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 15Hz: 1.299 degrees.\nAccuracy: 98.701 %.\n"
],
[
"predictions_40 = rf_15.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 15Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 15Hz: 1.296 degrees.\nAccuracy: 98.704 %.\n"
]
],
[
[
"### Predictions with 20Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_20.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 20Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 20Hz: 1.348 degrees.\nAccuracy: 98.652 %.\n"
],
[
"predictions_15 = rf_20.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 20Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 20Hz: 1.29 degrees.\nAccuracy: 98.71 %.\n"
],
[
"predictions_20 = rf_20.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 20Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 20Hz: 1.124 degrees.\nAccuracy: 98.876 %.\n"
],
[
"predictions_25 = rf_20.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 20Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 20Hz: 1.123 degrees.\nAccuracy: 98.877 %.\n"
],
[
"predictions_30 = rf_20.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 20Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 20Hz: 1.2 degrees.\nAccuracy: 98.8 %.\n"
],
[
"predictions_35 = rf_20.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 20Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 20Hz: 1.143 degrees.\nAccuracy: 98.857 %.\n"
],
[
"predictions_40 = rf_20.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 20Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 20Hz: 1.19 degrees.\nAccuracy: 98.81 %.\n"
]
],
[
[
"### Predictions with 25Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_25.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 25Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 25Hz: 1.428 degrees.\nAccuracy: 98.572 %.\n"
],
[
"predictions_15 = rf_25.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 25Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 25Hz: 1.312 degrees.\nAccuracy: 98.688 %.\n"
],
[
"predictions_20 = rf_25.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 25Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 25Hz: 1.211 degrees.\nAccuracy: 98.789 %.\n"
],
[
"predictions_25 = rf_25.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 25Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 25Hz: 1.025 degrees.\nAccuracy: 98.975 %.\n"
],
[
"predictions_30 = rf_25.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 25Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 25Hz: 1.037 degrees.\nAccuracy: 98.963 %.\n"
],
[
"predictions_35 = rf_25.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 25Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 25Hz: 0.976 degrees.\nAccuracy: 99.024 %.\n"
],
[
"predictions_40 = rf_25.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 25Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 25Hz: 0.995 degrees.\nAccuracy: 99.005 %.\n"
]
],
[
[
"### Predictions with 30Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_30.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 30Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 30Hz: 1.463 degrees.\nAccuracy: 98.537 %.\n"
],
[
"predictions_15 = rf_30.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 30Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 30Hz: 1.354 degrees.\nAccuracy: 98.646 %.\n"
],
[
"predictions_20 = rf_30.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 30Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 30Hz: 1.153 degrees.\nAccuracy: 98.847 %.\n"
],
[
"predictions_25 = rf_30.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 30Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 30Hz: 0.932 degrees.\nAccuracy: 99.068 %.\n"
],
[
"predictions_30 = rf_30.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 30Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 30Hz: 0.905 degrees.\nAccuracy: 99.095 %.\n"
],
[
"predictions_35 = rf_30.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 30Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 30Hz: 0.853 degrees.\nAccuracy: 99.147 %.\n"
],
[
"predictions_40 = rf_30.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 30Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 30Hz: 0.775 degrees.\nAccuracy: 99.225 %.\n"
]
],
[
[
"### Testing with 35Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_35.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 35Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 35Hz: 1.41 degrees.\nAccuracy: 98.59 %.\n"
],
[
"predictions_15 = rf_35.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 35Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 35Hz: 1.345 degrees.\nAccuracy: 98.655 %.\n"
],
[
"predictions_20 = rf_35.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 35Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 35Hz: 1.202 degrees.\nAccuracy: 98.798 %.\n"
],
[
"predictions_25 = rf_35.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 35Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 35Hz: 0.996 degrees.\nAccuracy: 99.004 %.\n"
],
[
"predictions_30 = rf_35.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 35Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 35Hz: 1.02 degrees.\nAccuracy: 98.98 %.\n"
],
[
"predictions_35 = rf_35.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 35Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 35Hz: 0.843 degrees.\nAccuracy: 99.157 %.\n"
],
[
"predictions_40 = rf_35.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 35Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 35Hz: 0.933 degrees.\nAccuracy: 99.067 %.\n"
]
],
[
[
"### Training with 40Hz model",
"_____no_output_____"
]
],
[
[
"predictions_10 = rf_35.predict(X_test_10)\nerrors_10 = abs(predictions_10 - y_test_10)\nprint('Mean Absolute Error 10Hz with 35Hz:', round(np.mean(errors_10), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_10)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 10Hz with 35Hz: 1.41 degrees.\nAccuracy: 98.59 %.\n"
],
[
"predictions_15 = rf_35.predict(X_test_15)\nerrors_15 = abs(predictions_15 - y_test_15)\nprint('Mean Absolute Error 15Hz with 35Hz:', round(np.mean(errors_15), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_15)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 15Hz with 35Hz: 1.345 degrees.\nAccuracy: 98.655 %.\n"
],
[
"predictions_20 = rf_35.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 35Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 35Hz: 1.202 degrees.\nAccuracy: 98.798 %.\n"
],
[
"predictions_20 = rf_35.predict(X_test_20)\nerrors_20 = abs(predictions_20 - y_test_20)\nprint('Mean Absolute Error 20Hz with 35Hz:', round(np.mean(errors_20), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_20)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 20Hz with 35Hz: 1.202 degrees.\nAccuracy: 98.798 %.\n"
],
[
"predictions_25 = rf_35.predict(X_test_25)\nerrors_25 = abs(predictions_25 - y_test_25)\nprint('Mean Absolute Error 25Hz with 35Hz:', round(np.mean(errors_25), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_25)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 25Hz with 35Hz: 0.996 degrees.\nAccuracy: 99.004 %.\n"
],
[
"predictions_30 = rf_35.predict(X_test_30)\nerrors_30 = abs(predictions_30 - y_test_30)\nprint('Mean Absolute Error 30Hz with 35Hz:', round(np.mean(errors_30), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_30)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 30Hz with 35Hz: 1.02 degrees.\nAccuracy: 98.98 %.\n"
],
[
"predictions_35 = rf_35.predict(X_test_35)\nerrors_35 = abs(predictions_35 - y_test_35)\nprint('Mean Absolute Error 35Hz with 35Hz:', round(np.mean(errors_35), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_35)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 35Hz with 35Hz: 0.843 degrees.\nAccuracy: 99.157 %.\n"
],
[
"predictions_40 = rf_35.predict(X_test_40)\nerrors_40 = abs(predictions_40 - y_test_40)\nprint('Mean Absolute Error 40Hz with 35Hz:', round(np.mean(errors_40), 3), 'degrees.')\n\naccuracy = 100 - np.mean(errors_40)\nprint('Accuracy:', round(accuracy, 3), '%.')",
"Mean Absolute Error 40Hz with 35Hz: 0.933 degrees.\nAccuracy: 99.067 %.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0befeee26976a1e5d73f25c64751f25fc57d213 | 2,601 | ipynb | Jupyter Notebook | 2_GeneID_Compiler.ipynb | avihani/Deorphanization | ddf80189769e1c17bd64cf85e4959a1ec87b1432 | [
"MIT"
] | null | null | null | 2_GeneID_Compiler.ipynb | avihani/Deorphanization | ddf80189769e1c17bd64cf85e4959a1ec87b1432 | [
"MIT"
] | null | null | null | 2_GeneID_Compiler.ipynb | avihani/Deorphanization | ddf80189769e1c17bd64cf85e4959a1ec87b1432 | [
"MIT"
] | null | null | null | 24.771429 | 107 | 0.524414 | [
[
[
"import pandas as pd\nimport numpy as np\nimport os\n\nprint(pd.__version__)\nprint(np.__version__)\n\nos.chdir(\"./seq_preprocess\")",
"1.3.3\n1.21.2\n"
],
[
"counts_data = pd.read_csv(\"./compiledExpectedCounts_allGene.csv\", index_col = 0)\nbiomart = pd.read_csv(\"./biomart_eID_symbol.csv\", index_col = 0)\nmm_name = pd.read_csv(\"./mm_symbol_name.csv\", index_col = 0)",
"_____no_output_____"
],
[
"eID = counts_data.index.values.tolist()\neID = [i.split('.', 1)[0] for i in eID]\neID_compile = pd.DataFrame(eID, columns = ['eID'])",
"_____no_output_____"
],
[
"eID_sym_name = pd.DataFrame()\n\nfor i in range(eID_compile.shape[0]):\n eID = eID_compile.iloc[i,:].values[0]\n if len(biomart[biomart.loc[:,'ensembl_gene_id'] == eID]['mgi_symbol'].values) > 0:\n sym = biomart[biomart.loc[:,'ensembl_gene_id'] == eID]['mgi_symbol'].values[0]\n else:\n sym = 'no_symbol'\n if len(mm_name[mm_name['symbol'] == sym]['gene_name'].values) > 0:\n name = mm_name[mm_name['symbol'] == sym]['gene_name'].values[0]\n else:\n name = 'no_name'\n eID_sym_name = pd.concat([eID_sym_name, pd.DataFrame([eID, sym, name]).transpose()], axis = 0)\n ",
"_____no_output_____"
],
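  [
   "# A vectorized sketch of the same eID -> symbol -> name mapping, using merges instead of the\n# row-by-row loop above. Column names are the ones already used in this notebook; unlike the\n# loop, missing matches are filled afterwards and duplicate matches are not dropped here.\neID_sym_name_alt = (\n    eID_compile\n    .merge(biomart[['ensembl_gene_id', 'mgi_symbol']],\n           left_on='eID', right_on='ensembl_gene_id', how='left')\n    .merge(mm_name[['symbol', 'gene_name']],\n           left_on='mgi_symbol', right_on='symbol', how='left')\n)\neID_sym_name_alt = eID_sym_name_alt.fillna({'mgi_symbol': 'no_symbol', 'gene_name': 'no_name'})",
   "_____no_output_____"
  ],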
[
"#eID_sym_name.to_csv(\"./eID_sym_name.csv\")\neID_sym_name = pd.read_csv(\"./eID_sym_name.csv\", index_col = 0)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf0258567146cb1e0e486e7cde569f4c239cf0 | 578,904 | ipynb | Jupyter Notebook | notebooks/08_Neural_Networks.ipynb | Zanah-Tech/MadeWithML | c2901b0a7438d7971430edb9790d20c2d48cb4b8 | [
"MIT"
] | null | null | null | notebooks/08_Neural_Networks.ipynb | Zanah-Tech/MadeWithML | c2901b0a7438d7971430edb9790d20c2d48cb4b8 | [
"MIT"
] | null | null | null | notebooks/08_Neural_Networks.ipynb | Zanah-Tech/MadeWithML | c2901b0a7438d7971430edb9790d20c2d48cb4b8 | [
"MIT"
] | null | null | null | 578,904 | 578,904 | 0.928917 | [
[
[
"<div align=\"center\">\n<h1><img width=\"30\" src=\"https://madewithml.com/static/images/rounded_logo.png\"> <a href=\"https://madewithml.com/\">Made With ML</a></h1>\nApplied ML · MLOps · Production\n<br>\nJoin 20K+ developers in learning how to responsibly <a href=\"https://madewithml.com/about/\">deliver value</a> with ML.\n <br>\n</div>\n\n<br>\n\n<div align=\"center\">\n <a target=\"_blank\" href=\"https://newsletter.madewithml.com\"><img src=\"https://img.shields.io/badge/Subscribe-20K-brightgreen\"></a> \n <a target=\"_blank\" href=\"https://github.com/GokuMohandas/MadeWithML\"><img src=\"https://img.shields.io/github/stars/GokuMohandas/MadeWithML.svg?style=social&label=Star\"></a> \n <a target=\"_blank\" href=\"https://www.linkedin.com/in/goku\"><img src=\"https://img.shields.io/badge/style--5eba00.svg?label=LinkedIn&logo=linkedin&style=social\"></a> \n <a target=\"_blank\" href=\"https://twitter.com/GokuMohandas\"><img src=\"https://img.shields.io/twitter/follow/GokuMohandas.svg?label=Follow&style=social\"></a>\n <br>\n 🔥 Among the <a href=\"https://github.com/topics/deep-learning\" target=\"_blank\">top ML</a> repositories on GitHub\n</div>\n\n<br>\n<hr>",
"_____no_output_____"
],
[
"# Neural Networks\n\nIn this lesson, we will explore multilayer perceptrons (MLPs) which are a basic type of neural network. We'll first motivate non-linear activation functions by trying to fit a linear model (logistic regression) on our non-linear spiral data. Then we'll implement an MLP using just NumPy and then with PyTorch.",
"_____no_output_____"
],
[
"<div align=\"left\">\n<a target=\"_blank\" href=\"https://madewithml.com/courses/ml-foundations/neural-networks/\"><img src=\"https://img.shields.io/badge/📖 Read-blog post-9cf\"></a> \n<a href=\"https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/08_Neural_Networks.ipynb\" role=\"button\"><img src=\"https://img.shields.io/static/v1?label=&message=View%20On%20GitHub&color=586069&logo=github&labelColor=2f363d\"></a> \n<a href=\"https://colab.research.google.com/github/GokuMohandas/MadeWithML/blob/main/notebooks/08_Neural_Networks.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"></a>\n</div>",
"_____no_output_____"
],
[
"# Overview",
"_____no_output_____"
],
[
"Our goal is to learn a model $\\hat{y}$ that models $y$ given $X$ . You'll notice that neural networks are just extensions of the generalized linear methods we've seen so far but with non-linear activation functions since our data will be highly non-linear.\n\n<div align=\"left\">\n<img src=\"https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/basics/neural-networks/mlp.png\" width=\"500\">\n</div>\n\n$z_1 = XW_1$\n\n$a_1 = f(z_1)$\n\n$z_2 = a_1W_2$\n\n$\\hat{y} = softmax(z_2)$ # classification\n\n* $X$ = inputs | $\\in \\mathbb{R}^{NXD}$ ($D$ is the number of features)\n* $W_1$ = 1st layer weights | $\\in \\mathbb{R}^{DXH}$ ($H$ is the number of hidden units in layer 1)\n* $z_1$ = outputs from first layer $\\in \\mathbb{R}^{NXH}$\n* $f$ = non-linear activation function\n* $a_1$ = activation applied first layer's outputs | $\\in \\mathbb{R}^{NXH}$\n* $W_2$ = 2nd layer weights | $\\in \\mathbb{R}^{HXC}$ ($C$ is the number of classes)\n* $z_2$ = outputs from second layer $\\in \\mathbb{R}^{NXH}$\n* $\\hat{y}$ = prediction | $\\in \\mathbb{R}^{NXC}$ ($N$ is the number of samples)",
"_____no_output_____"
],
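  [
   "Read together, these equations are just two linear layers with a non-linearity in between. A minimal PyTorch sketch of that forward pass (illustrative only; the class and variable names here are just for this sketch):\n\n```python\nfrom torch import nn\nimport torch.nn.functional as F\n\nclass TwoLayerMLP(nn.Module):\n    def __init__(self, D, H, C):\n        super().__init__()\n        self.fc1 = nn.Linear(D, H)   # z1 = XW1\n        self.fc2 = nn.Linear(H, C)   # z2 = a1W2\n\n    def forward(self, x_in):\n        a_1 = F.relu(self.fc1(x_in)) # a1 = f(z1)\n        return self.fc2(a_1)         # logits; softmax turns these into probabilities\n```",
   "_____no_output_____"
  ],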
[
"* **Objective:** Predict the probability of class $y$ given the inputs $X$. Non-linearity is introduced to model the complex, non-linear data.\n* **Advantages:**\n * Can model non-linear patterns in the data really well.\n* **Disadvantages:**\n * Overfits easily.\n * Computationally intensive as network increases in size.\n * Not easily interpretable.\n* **Miscellaneous:** Future neural network architectures that we'll see use the MLP as a modular unit for feed forward operations (affine transformation (XW) followed by a non-linear operation).",
"_____no_output_____"
],
[
"> We're going to leave out the bias terms $\\beta$ to avoid further crowding the backpropagation calculations.",
"_____no_output_____"
],
[
"# Set up",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport random",
"_____no_output_____"
],
[
"SEED = 1234",
"_____no_output_____"
],
[
"# Set seed for reproducibility\nnp.random.seed(SEED)\nrandom.seed(SEED)",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
],
[
"I created some non-linearly separable spiral data so let's go ahead and download it for our classification task.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd",
"_____no_output_____"
],
[
"# Load data\nurl = \"https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/datasets/spiral.csv\"\ndf = pd.read_csv(url, header=0) # load\ndf = df.sample(frac=1).reset_index(drop=True) # shuffle\ndf.head()",
"_____no_output_____"
],
[
"# Data shapes\nX = df[['X1', 'X2']].values\ny = df['color'].values\nprint (\"X: \", np.shape(X))\nprint (\"y: \", np.shape(y))",
"X: (1500, 2)\ny: (1500,)\n"
],
[
"# Visualize data\nplt.title(\"Generated non-linear data\")\ncolors = {'c1': 'red', 'c2': 'yellow', 'c3': 'blue'}\nplt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], edgecolors='k', s=25)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Split data",
"_____no_output_____"
],
[
"We'll shuffle our dataset (since it's ordered by class) and then create our data splits (stratified on class).",
"_____no_output_____"
]
],
[
[
"import collections\nfrom sklearn.model_selection import train_test_split",
"_____no_output_____"
],
[
"TRAIN_SIZE = 0.7\nVAL_SIZE = 0.15\nTEST_SIZE = 0.15",
"_____no_output_____"
],
[
"def train_val_test_split(X, y, train_size):\n \"\"\"Split dataset into data splits.\"\"\"\n X_train, X_, y_train, y_ = train_test_split(X, y, train_size=TRAIN_SIZE, stratify=y)\n X_val, X_test, y_val, y_test = train_test_split(X_, y_, train_size=0.5, stratify=y_)\n return X_train, X_val, X_test, y_train, y_val, y_test",
"_____no_output_____"
],
[
"# Create data splits\nX_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(\n X=X, y=y, train_size=TRAIN_SIZE)\nprint (f\"X_train: {X_train.shape}, y_train: {y_train.shape}\")\nprint (f\"X_val: {X_val.shape}, y_val: {y_val.shape}\")\nprint (f\"X_test: {X_test.shape}, y_test: {y_test.shape}\")\nprint (f\"Sample point: {X_train[0]} → {y_train[0]}\")",
"X_train: (1050, 2), y_train: (1050,)\nX_val: (225, 2), y_val: (225,)\nX_test: (225, 2), y_test: (225,)\nSample point: [-0.63919105 -0.69724176] → c1\n"
]
],
[
[
"## Label encoding",
"_____no_output_____"
],
[
"In the previous lesson we wrote our own label encoder class to see the inner functions but this time we'll use scikit-learn [`LabelEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) class which does the same operations as ours.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import LabelEncoder",
"_____no_output_____"
],
[
"# Output vectorizer\nlabel_encoder = LabelEncoder()",
"_____no_output_____"
],
[
"# Fit on train data\nlabel_encoder = label_encoder.fit(y_train)\nclasses = list(label_encoder.classes_)\nprint (f\"classes: {classes}\")",
"classes: ['c1', 'c2', 'c3']\n"
],
[
"# Convert labels to tokens\nprint (f\"y_train[0]: {y_train[0]}\")\ny_train = label_encoder.transform(y_train)\ny_val = label_encoder.transform(y_val)\ny_test = label_encoder.transform(y_test)\nprint (f\"y_train[0]: {y_train[0]}\")",
"y_train[0]: c1\ny_train[0]: 0\n"
],
[
"# Class weights\ncounts = np.bincount(y_train)\nclass_weights = {i: 1.0/count for i, count in enumerate(counts)}\nprint (f\"counts: {counts}\\nweights: {class_weights}\")",
"counts: [350 350 350]\nweights: {0: 0.002857142857142857, 1: 0.002857142857142857, 2: 0.002857142857142857}\n"
]
],
[
[
"## Standardize data",
"_____no_output_____"
],
[
"We need to standardize our data (zero mean and unit variance) so a specific feature's magnitude doesn't affect how the model learns its weights. We're only going to standardize the inputs X because our outputs y are class values.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler",
"_____no_output_____"
],
[
"# Standardize the data (mean=0, std=1) using training data\nX_scaler = StandardScaler().fit(X_train)",
"_____no_output_____"
],
[
"# Apply scaler on training and test data (don't standardize outputs for classification)\nX_train = X_scaler.transform(X_train)\nX_val = X_scaler.transform(X_val)\nX_test = X_scaler.transform(X_test)",
"_____no_output_____"
],
[
"# Check (means should be ~0 and std should be ~1)\nprint (f\"X_test[0]: mean: {np.mean(X_test[:, 0], axis=0):.1f}, std: {np.std(X_test[:, 0], axis=0):.1f}\")\nprint (f\"X_test[1]: mean: {np.mean(X_test[:, 1], axis=0):.1f}, std: {np.std(X_test[:, 1], axis=0):.1f}\")",
"X_test[0]: mean: 0.1, std: 0.9\nX_test[1]: mean: 0.0, std: 1.0\n"
]
],
[
[
"# Linear model",
"_____no_output_____"
],
[
"Before we get to our neural network, we're going to motivate non-linear activation functions by implementing a generalized linear model (logistic regression). We'll see why linear models (with linear activations) won't suffice for our dataset.",
"_____no_output_____"
]
],
[
[
"import torch",
"_____no_output_____"
],
[
"# Set seed for reproducibility\ntorch.manual_seed(SEED)",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
]
],
[
[
"from torch import nn\nimport torch.nn.functional as F",
"_____no_output_____"
],
[
"INPUT_DIM = X_train.shape[1] # X is 2-dimensional\nHIDDEN_DIM = 100\nNUM_CLASSES = len(classes) # 3 classes",
"_____no_output_____"
],
[
"class LinearModel(nn.Module):\n def __init__(self, input_dim, hidden_dim, num_classes):\n super(LinearModel, self).__init__()\n self.fc1 = nn.Linear(input_dim, hidden_dim)\n self.fc2 = nn.Linear(hidden_dim, num_classes)\n \n def forward(self, x_in, apply_softmax=False):\n z = self.fc1(x_in) # linear activation\n y_pred = self.fc2(z)\n if apply_softmax:\n y_pred = F.softmax(y_pred, dim=1) \n return y_pred",
"_____no_output_____"
],
[
"# Initialize model\nmodel = LinearModel(input_dim=INPUT_DIM, hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)\nprint (model.named_parameters)",
"<bound method Module.named_parameters of LinearModel(\n (fc1): Linear(in_features=2, out_features=100, bias=True)\n (fc2): Linear(in_features=100, out_features=3, bias=True)\n)>\n"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"from torch.optim import Adam",
"_____no_output_____"
],
[
"LEARNING_RATE = 1e-2\nNUM_EPOCHS = 10\nBATCH_SIZE = 32",
"_____no_output_____"
],
[
"# Define Loss\nclass_weights_tensor = torch.Tensor(list(class_weights.values()))\nloss_fn = nn.CrossEntropyLoss(weight=class_weights_tensor)",
"_____no_output_____"
],
[
"# Accuracy\ndef accuracy_fn(y_pred, y_true):\n n_correct = torch.eq(y_pred, y_true).sum().item()\n accuracy = (n_correct / len(y_pred)) * 100\n return accuracy",
"_____no_output_____"
],
[
"# Optimizer\noptimizer = Adam(model.parameters(), lr=LEARNING_RATE) ",
"_____no_output_____"
],
[
"# Convert data to tensors\nX_train = torch.Tensor(X_train)\ny_train = torch.LongTensor(y_train)\nX_val = torch.Tensor(X_val)\ny_val = torch.LongTensor(y_val)\nX_test = torch.Tensor(X_test)\ny_test = torch.LongTensor(y_test)",
"_____no_output_____"
],
[
"# Training\nfor epoch in range(NUM_EPOCHS):\n # Forward pass\n y_pred = model(X_train)\n\n # Loss\n loss = loss_fn(y_pred, y_train)\n\n # Zero all gradients\n optimizer.zero_grad()\n\n # Backward pass\n loss.backward()\n\n # Update weights\n optimizer.step()\n\n if epoch%1==0: \n predictions = y_pred.max(dim=1)[1] # class\n accuracy = accuracy_fn(y_pred=predictions, y_true=y_train)\n print (f\"Epoch: {epoch} | loss: {loss:.2f}, accuracy: {accuracy:.1f}\")",
"Epoch: 0 | loss: 1.13, accuracy: 49.9\nEpoch: 1 | loss: 0.91, accuracy: 50.3\nEpoch: 2 | loss: 0.79, accuracy: 55.3\nEpoch: 3 | loss: 0.74, accuracy: 54.6\nEpoch: 4 | loss: 0.74, accuracy: 53.7\nEpoch: 5 | loss: 0.75, accuracy: 53.6\nEpoch: 6 | loss: 0.76, accuracy: 53.7\nEpoch: 7 | loss: 0.77, accuracy: 53.8\nEpoch: 8 | loss: 0.77, accuracy: 53.9\nEpoch: 9 | loss: 0.78, accuracy: 53.9\n"
]
],
[
[
"## Evaluation",
"_____no_output_____"
]
],
[
[
"import json\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import precision_recall_fscore_support",
"_____no_output_____"
],
[
"def get_performance(y_true, y_pred, classes):\n \"\"\"Per-class performance metrics.\"\"\"\n # Performance\n performance = {\"overall\": {}, \"class\": {}}\n\n # Overall performance\n metrics = precision_recall_fscore_support(y_true, y_pred, average=\"weighted\")\n performance[\"overall\"][\"precision\"] = metrics[0]\n performance[\"overall\"][\"recall\"] = metrics[1]\n performance[\"overall\"][\"f1\"] = metrics[2]\n performance[\"overall\"][\"num_samples\"] = np.float64(len(y_true))\n\n # Per-class performance\n metrics = precision_recall_fscore_support(y_true, y_pred, average=None)\n for i in range(len(classes)):\n performance[\"class\"][classes[i]] = {\n \"precision\": metrics[0][i],\n \"recall\": metrics[1][i],\n \"f1\": metrics[2][i],\n \"num_samples\": np.float64(metrics[3][i]),\n }\n\n return performance",
"_____no_output_____"
],
[
"# Predictions\ny_prob = model(X_test, apply_softmax=True)\nprint (f\"sample probability: {y_prob[0]}\")\ny_pred = y_prob.max(dim=1)[1]\nprint (f\"sample class: {y_pred[0]}\")",
"sample probability: tensor([0.8995, 0.0286, 0.0719], grad_fn=<SelectBackward>)\nsample class: 0\n"
],
[
"# Performance report\nperformance = get_performance(y_true=y_test, y_pred=y_pred, classes=classes)\nprint (json.dumps(performance, indent=2))",
"{\n \"overall\": {\n \"precision\": 0.5326832791621524,\n \"recall\": 0.5333333333333333,\n \"f1\": 0.5327986224880954,\n \"num_samples\": 225.0\n },\n \"class\": {\n \"c1\": {\n \"precision\": 0.5,\n \"recall\": 0.5066666666666667,\n \"f1\": 0.5033112582781457,\n \"num_samples\": 75.0\n },\n \"c2\": {\n \"precision\": 0.5211267605633803,\n \"recall\": 0.49333333333333335,\n \"f1\": 0.5068493150684932,\n \"num_samples\": 75.0\n },\n \"c3\": {\n \"precision\": 0.5769230769230769,\n \"recall\": 0.6,\n \"f1\": 0.5882352941176471,\n \"num_samples\": 75.0\n }\n }\n}\n"
],
[
"def plot_multiclass_decision_boundary(model, X, y):\n x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1\n y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101), np.linspace(y_min, y_max, 101))\n cmap = plt.cm.Spectral\n \n X_test = torch.from_numpy(np.c_[xx.ravel(), yy.ravel()]).float()\n y_pred = model(X_test, apply_softmax=True)\n _, y_pred = y_pred.max(dim=1)\n y_pred = y_pred.reshape(xx.shape)\n plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)\n plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)\n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())",
"_____no_output_____"
],
[
"# Visualize the decision boundary\nplt.figure(figsize=(12,5))\nplt.subplot(1, 2, 1)\nplt.title(\"Train\")\nplot_multiclass_decision_boundary(model=model, X=X_train, y=y_train)\nplt.subplot(1, 2, 2)\nplt.title(\"Test\")\nplot_multiclass_decision_boundary(model=model, X=X_test, y=y_test)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Activation functions",
"_____no_output_____"
],
[
"Using the generalized linear method (logistic regression) yielded poor results because of the non-linearity present in our data yet our activation functions were linear. We need to use an activation function that can allow our model to learn and map the non-linearity in our data. There are many different options so let's explore a few.",
"_____no_output_____"
]
],
[
[
"# Fig size\nplt.figure(figsize=(12,3))\n\n# Data\nx = torch.arange(-5., 5., 0.1)\n\n# Sigmoid activation (constrain a value between 0 and 1.)\nplt.subplot(1, 3, 1)\nplt.title(\"Sigmoid activation\")\ny = torch.sigmoid(x)\nplt.plot(x.numpy(), y.numpy())\n\n# Tanh activation (constrain a value between -1 and 1.)\nplt.subplot(1, 3, 2)\ny = torch.tanh(x)\nplt.title(\"Tanh activation\")\nplt.plot(x.numpy(), y.numpy())\n\n# Relu (clip the negative values to 0)\nplt.subplot(1, 3, 3)\ny = F.relu(x)\nplt.title(\"ReLU activation\")\nplt.plot(x.numpy(), y.numpy())\n\n# Show plots\nplt.show()",
"_____no_output_____"
]
],
[
[
"The ReLU activation function ($max(0,z)$) is by far the most widely used activation function for neural networks. But as you can see, each activation function has its own constraints so there are circumstances where you'll want to use different ones. For example, if we need to constrain our outputs between 0 and 1, then the sigmoid activation is the best choice.",
"_____no_output_____"
],
[
"> In some cases, using a ReLU activation function may not be sufficient. For instance, when the outputs from our neurons are mostly negative, the activation function will produce zeros. This effectively creates a \"dying ReLU\" and a recovery is unlikely. To mitigate this effect, we could lower the learning rate or use [alternative ReLU activations](https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7), ex. leaky ReLU or parametric ReLU (PReLU), which have a small slope for negative neuron outputs. ",
"_____no_output_____"
],
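  [
   "For reference, these ReLU variants are available directly in PyTorch. A small illustrative sketch (not used elsewhere in this lesson):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\nz = torch.arange(-5., 5., 0.1)\nleaky = F.leaky_relu(z, negative_slope=0.01)  # small fixed slope for negative inputs\nprelu = torch.nn.PReLU()                      # slope is a learnable parameter\nout = prelu(z)\n```",
   "_____no_output_____"
  ],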
[
"# NumPy\n\nNow let's create our multilayer perceptron (MLP) which is going to be exactly like the logistic regression model but with the activation function to map the non-linearity in our data. \n\n> It's normal to find the math and code in this section slightly complex. You can still read each of the steps to build intuition for when we implement this using PyTorch.\n",
"_____no_output_____"
],
[
"Our goal is to learn a model 𝑦̂ that models 𝑦 given 𝑋 . You'll notice that neural networks are just extensions of the generalized linear methods we've seen so far but with non-linear activation functions since our data will be highly non-linear.\n\n$z_1 = XW_1$\n\n$a_1 = f(z_1)$\n\n$z_2 = a_1W_2$\n\n$\\hat{y} = softmax(z_2)$ # classification\n\n* $X$ = inputs | $\\in \\mathbb{R}^{NXD}$ ($D$ is the number of features)\n* $W_1$ = 1st layer weights | $\\in \\mathbb{R}^{DXH}$ ($H$ is the number of hidden units in layer 1)\n* $z_1$ = outputs from first layer $\\in \\mathbb{R}^{NXH}$\n* $f$ = non-linear activation function\n* $a_1$ = activation applied first layer's outputs | $\\in \\mathbb{R}^{NXH}$\n* $W_2$ = 2nd layer weights | $\\in \\mathbb{R}^{HXC}$ ($C$ is the number of classes)\n* $z_2$ = outputs from second layer $\\in \\mathbb{R}^{NXH}$\n* $\\hat{y}$ = prediction | $\\in \\mathbb{R}^{NXC}$ ($N$ is the number of samples)",
"_____no_output_____"
],
[
"## Initialize weights",
"_____no_output_____"
],
[
"1. Randomly initialize the model's weights $W$ (we'll cover more effective initialization strategies later in this lesson).",
"_____no_output_____"
]
],
[
[
"# Initialize first layer's weights\nW1 = 0.01 * np.random.randn(INPUT_DIM, HIDDEN_DIM)\nb1 = np.zeros((1, HIDDEN_DIM))\nprint (f\"W1: {W1.shape}\")\nprint (f\"b1: {b1.shape}\")",
"W1: (2, 100)\nb1: (1, 100)\n"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"2. Feed inputs $X$ into the model to do the forward pass and receive the probabilities.",
"_____no_output_____"
],
[
"First we pass the inputs into the first layer.\n * $z_1 = XW_1$",
"_____no_output_____"
]
],
[
[
"# z1 = [NX2] · [2X100] + [1X100] = [NX100]\nz1 = np.dot(X_train, W1) + b1\nprint (f\"z1: {z1.shape}\")",
"z1: (1050, 100)\n"
]
],
[
[
"Next we apply the non-linear activation function, ReLU ($max(0,z)$) in this case.\n * $a_1 = f(z_1)$",
"_____no_output_____"
]
],
[
[
"# Apply activation function\na1 = np.maximum(0, z1) # ReLU\nprint (f\"a_1: {a1.shape}\")",
"a_1: (1050, 100)\n"
]
],
[
[
"We pass the activations to the second layer to get our logits.\n * $z_2 = a_1W_2$",
"_____no_output_____"
]
],
[
[
"# Initialize second layer's weights\nW2 = 0.01 * np.random.randn(HIDDEN_DIM, NUM_CLASSES)\nb2 = np.zeros((1, NUM_CLASSES))\nprint (f\"W2: {W2.shape}\")\nprint (f\"b2: {b2.shape}\")",
"W2: (100, 3)\nb2: (1, 3)\n"
],
[
"# z2 = logits = [NX100] · [100X3] + [1X3] = [NX3]\nlogits = np.dot(a1, W2) + b2\nprint (f\"logits: {logits.shape}\")\nprint (f\"sample: {logits[0]}\")",
"logits: (1050, 3)\nsample: [-0.00010001 0.00418463 -0.00067274]\n"
]
],
[
[
"We'll apply the softmax function to normalize the logits and btain class probabilities.\n * $\\hat{y} = softmax(z_2)$",
"_____no_output_____"
]
],
[
[
"# Normalization via softmax to obtain class probabilities\nexp_logits = np.exp(logits)\ny_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)\nprint (f\"y_hat: {y_hat.shape}\")\nprint (f\"sample: {y_hat[0]}\")",
"y_hat: (1050, 3)\nsample: [0.33292037 0.33434987 0.33272975]\n"
]
],
[
[
"## Loss",
"_____no_output_____"
],
[
"3. Compare the predictions $\\hat{y}$ (ex. [0.3, 0.3, 0.4]) with the actual target values $y$ (ex. class 2 would look like [0, 0, 1]) with the objective (cost) function to determine loss $J$. A common objective function for classification tasks is cross-entropy loss. \n * $J(\\theta) = - \\sum_i ln(\\hat{y_i}) = - \\sum_i ln (\\frac{e^{X_iW_y}}{\\sum_j e^{X_iW}}) $",
"_____no_output_____"
]
],
[
[
"# Loss\ncorrect_class_logprobs = -np.log(y_hat[range(len(y_hat)), y_train])\nloss = np.sum(correct_class_logprobs) / len(y_train)",
"_____no_output_____"
]
],
[
[
"## Gradients",
"_____no_output_____"
],
[
"4. Calculate the gradient of loss $J(\\theta)$ w.r.t to the model weights. \n\nThe gradient of the loss w.r.t to $W_2$ is the same as the gradients from logistic regression since $\\hat{y} = softmax(z_2)$.\n * $\\frac{\\partial{J}}{\\partial{W_{2j}}} = \\frac{\\partial{J}}{\\partial{\\hat{y}}}\\frac{\\partial{\\hat{y}}}{\\partial{W_{2j}}} = - \\frac{1}{\\hat{y}}\\frac{\\partial{\\hat{y}}}{\\partial{W_{2j}}} = - \\frac{1}{\\frac{e^{W_{2y}a_1}}{\\sum_j e^{a_1W}}}\\frac{\\sum_j e^{a_1W}e^{a_1W_{2y}}0 - e^{a_1W_{2y}}e^{a_1W_{2j}}a_1}{(\\sum_j e^{a_1W})^2} = \\frac{a_1e^{a_1W_{2j}}}{\\sum_j e^{a_1W}} = a_1\\hat{y}$\n * $\\frac{\\partial{J}}{\\partial{W_{2y}}} = \\frac{\\partial{J}}{\\partial{\\hat{y}}}\\frac{\\partial{\\hat{y}}}{\\partial{W_{2y}}} = - \\frac{1}{\\hat{y}}\\frac{\\partial{\\hat{y}}}{\\partial{W_{2y}}} = - \\frac{1}{\\frac{e^{W_{2y}a_1}}{\\sum_j e^{a_1W}}}\\frac{\\sum_j e^{a_1W}e^{a_1W_{2y}}a_1 - e^{a_1W_{2y}}e^{a_1W_{2y}}a_1}{(\\sum_j e^{a_1W})^2} = \\frac{1}{\\hat{y}}(a_1\\hat{y} - a_1\\hat{y}^2) = a_1(\\hat{y}-1)$\n\nThe gradient of the loss w.r.t $W_1$ is a bit trickier since we have to backpropagate through two sets of weights.\n * $ \\frac{\\partial{J}}{\\partial{W_1}} = \\frac{\\partial{J}}{\\partial{\\hat{y}}} \\frac{\\partial{\\hat{y}}}{\\partial{a_1}} \\frac{\\partial{a_1}}{\\partial{z_1}} \\frac{\\partial{z_1}}{\\partial{W_1}} = W_2(\\partial{scores})(\\partial{ReLU})X $",
"_____no_output_____"
]
],
[
[
"# dJ/dW2\ndscores = y_hat\ndscores[range(len(y_hat)), y_train] -= 1\ndscores /= len(y_train)\ndW2 = np.dot(a1.T, dscores)\ndb2 = np.sum(dscores, axis=0, keepdims=True)",
"_____no_output_____"
],
[
"# dJ/dW1\ndhidden = np.dot(dscores, W2.T)\ndhidden[a1 <= 0] = 0 # ReLu backprop\ndW1 = np.dot(X_train.T, dhidden)\ndb1 = np.sum(dhidden, axis=0, keepdims=True)",
"_____no_output_____"
]
],
[
[
"## Update weights",
"_____no_output_____"
],
[
"5. Update the weights $W$ using a small learning rate $\\alpha$. The updates will penalize the probability for the incorrect classes ($j$) and encourage a higher probability for the correct class ($y$).\n * $W_i = W_i - \\alpha\\frac{\\partial{J}}{\\partial{W_i}}$",
"_____no_output_____"
]
],
[
[
"# Update weights\nW1 += -LEARNING_RATE * dW1\nb1 += -LEARNING_RATE * db1\nW2 += -LEARNING_RATE * dW2\nb2 += -LEARNING_RATE * db2",
"_____no_output_____"
]
],
[
[
"## Training",
"_____no_output_____"
],
[
"6. Repeat steps 2 - 4 until model performs well.",
"_____no_output_____"
]
],
[
[
"# Convert tensors to NumPy arrays\nX_train = X_train.numpy()\ny_train = y_train.numpy()\nX_val = X_val.numpy()\ny_val = y_val.numpy()\nX_test = X_test.numpy()\ny_test = y_test.numpy()",
"_____no_output_____"
],
[
"# Initialize random weights\nW1 = 0.01 * np.random.randn(INPUT_DIM, HIDDEN_DIM)\nb1 = np.zeros((1, HIDDEN_DIM))\nW2 = 0.01 * np.random.randn(HIDDEN_DIM, NUM_CLASSES)\nb2 = np.zeros((1, NUM_CLASSES))\n\n# Training loop\nfor epoch_num in range(1000):\n\n # First layer forward pass [NX2] · [2X100] = [NX100]\n z1 = np.dot(X_train, W1) + b1\n\n # Apply activation function\n a1 = np.maximum(0, z1) # ReLU\n\n # z2 = logits = [NX100] · [100X3] = [NX3]\n logits = np.dot(a1, W2) + b2\n \n # Normalization via softmax to obtain class probabilities\n exp_logits = np.exp(logits)\n y_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)\n\n # Loss\n correct_class_logprobs = -np.log(y_hat[range(len(y_hat)), y_train])\n loss = np.sum(correct_class_logprobs) / len(y_train)\n\n # show progress\n if epoch_num%100 == 0:\n # Accuracy\n y_pred = np.argmax(logits, axis=1)\n accuracy = np.mean(np.equal(y_train, y_pred))\n print (f\"Epoch: {epoch_num}, loss: {loss:.3f}, accuracy: {accuracy:.3f}\")\n\n # dJ/dW2\n dscores = y_hat\n dscores[range(len(y_hat)), y_train] -= 1\n dscores /= len(y_train)\n dW2 = np.dot(a1.T, dscores)\n db2 = np.sum(dscores, axis=0, keepdims=True)\n\n # dJ/dW1\n dhidden = np.dot(dscores, W2.T)\n dhidden[a1 <= 0] = 0 # ReLu backprop\n dW1 = np.dot(X_train.T, dhidden)\n db1 = np.sum(dhidden, axis=0, keepdims=True)\n\n # Update weights\n W1 += -1e0 * dW1\n b1 += -1e0 * db1\n W2 += -1e0 * dW2\n b2 += -1e0 * db2",
"Epoch: 0, loss: 1.099, accuracy: 0.349\nEpoch: 100, loss: 0.545, accuracy: 0.687\nEpoch: 200, loss: 0.247, accuracy: 0.903\nEpoch: 300, loss: 0.142, accuracy: 0.949\nEpoch: 400, loss: 0.099, accuracy: 0.974\nEpoch: 500, loss: 0.076, accuracy: 0.986\nEpoch: 600, loss: 0.062, accuracy: 0.990\nEpoch: 700, loss: 0.052, accuracy: 0.994\nEpoch: 800, loss: 0.046, accuracy: 0.995\nEpoch: 900, loss: 0.041, accuracy: 0.995\n"
]
],
[
[
"## Evaluation",
"_____no_output_____"
]
],
[
[
"class MLPFromScratch():\n def predict(self, x):\n z1 = np.dot(x, W1) + b1\n a1 = np.maximum(0, z1)\n logits = np.dot(a1, W2) + b2\n exp_logits = np.exp(logits)\n y_hat = exp_logits / np.sum(exp_logits, axis=1, keepdims=True)\n return y_hat",
"_____no_output_____"
],
[
"# Evaluation\nmodel = MLPFromScratch()\ny_prob = model.predict(X_test)\ny_pred = np.argmax(y_prob, axis=1)",
"_____no_output_____"
],
[
"# Performance report\nperformance = get_performance(y_true=y_test, y_pred=y_pred, classes=classes)\nprint (json.dumps(performance, indent=2))",
"{\n \"overall\": {\n \"precision\": 0.9956140350877193,\n \"recall\": 0.9955555555555556,\n \"f1\": 0.9955553580159119,\n \"num_samples\": 225.0\n },\n \"class\": {\n \"c1\": {\n \"precision\": 1.0,\n \"recall\": 0.9866666666666667,\n \"f1\": 0.9932885906040269,\n \"num_samples\": 75.0\n },\n \"c2\": {\n \"precision\": 1.0,\n \"recall\": 1.0,\n \"f1\": 1.0,\n \"num_samples\": 75.0\n },\n \"c3\": {\n \"precision\": 0.9868421052631579,\n \"recall\": 1.0,\n \"f1\": 0.9933774834437086,\n \"num_samples\": 75.0\n }\n }\n}\n"
],
[
"def plot_multiclass_decision_boundary_numpy(model, X, y, savefig_fp=None):\n \"\"\"Plot the multiclass decision boundary for a model that accepts 2D inputs.\n Credit: https://cs231n.github.io/neural-networks-case-study/\n\n Arguments:\n model {function} -- trained model with function model.predict(x_in).\n X {numpy.ndarray} -- 2D inputs with shape (N, 2).\n y {numpy.ndarray} -- 1D outputs with shape (N,).\n \"\"\"\n # Axis boundaries\n x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1\n y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1\n xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101),\n np.linspace(y_min, y_max, 101))\n\n # Create predictions\n x_in = np.c_[xx.ravel(), yy.ravel()]\n y_pred = model.predict(x_in)\n y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)\n\n # Plot decision boundary\n plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)\n plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)\n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())\n\n # Plot\n if savefig_fp:\n plt.savefig(savefig_fp, format='png')",
"_____no_output_____"
],
[
"# Visualize the decision boundary\nplt.figure(figsize=(12,5))\nplt.subplot(1, 2, 1)\nplt.title(\"Train\")\nplot_multiclass_decision_boundary_numpy(model=model, X=X_train, y=y_train)\nplt.subplot(1, 2, 2)\nplt.title(\"Test\")\nplot_multiclass_decision_boundary_numpy(model=model, X=X_test, y=y_test)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# PyTorch",
"_____no_output_____"
],
[
"## Model",
"_____no_output_____"
],
[
"We'll be using two linear layers along with PyTorch [Functional](https://pytorch.org/docs/stable/nn.functional.html) API's [ReLU](https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.relu) operation. ",
"_____no_output_____"
]
],
[
[
"class MLP(nn.Module):\n def __init__(self, input_dim, hidden_dim, num_classes):\n super(MLP, self).__init__()\n self.fc1 = nn.Linear(input_dim, hidden_dim)\n self.fc2 = nn.Linear(hidden_dim, num_classes)\n \n def forward(self, x_in, apply_softmax=False):\n z = F.relu(self.fc1(x_in)) # ReLU activaton function added!\n y_pred = self.fc2(z)\n if apply_softmax:\n y_pred = F.softmax(y_pred, dim=1) \n return y_pred",
"_____no_output_____"
],
[
"# Initialize model\nmodel = MLP(input_dim=INPUT_DIM, hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)\nprint (model.named_parameters)",
"<bound method Module.named_parameters of MLP(\n (fc1): Linear(in_features=2, out_features=100, bias=True)\n (fc2): Linear(in_features=100, out_features=3, bias=True)\n)>\n"
]
],
[
[
"## Training",
"_____no_output_____"
]
],
[
[
"# Define Loss\nclass_weights_tensor = torch.Tensor(list(class_weights.values()))\nloss_fn = nn.CrossEntropyLoss(weight=class_weights_tensor)",
"_____no_output_____"
],
[
"# Accuracy\ndef accuracy_fn(y_pred, y_true):\n n_correct = torch.eq(y_pred, y_true).sum().item()\n accuracy = (n_correct / len(y_pred)) * 100\n return accuracy",
"_____no_output_____"
],
[
"# Optimizer\noptimizer = Adam(model.parameters(), lr=LEARNING_RATE) ",
"_____no_output_____"
],
[
"# Convert data to tensors\nX_train = torch.Tensor(X_train)\ny_train = torch.LongTensor(y_train)\nX_val = torch.Tensor(X_val)\ny_val = torch.LongTensor(y_val)\nX_test = torch.Tensor(X_test)\ny_test = torch.LongTensor(y_test)",
"_____no_output_____"
],
[
"# Training\nfor epoch in range(NUM_EPOCHS*10):\n # Forward pass\n y_pred = model(X_train)\n\n # Loss\n loss = loss_fn(y_pred, y_train)\n\n # Zero all gradients\n optimizer.zero_grad()\n\n # Backward pass\n loss.backward()\n\n # Update weights\n optimizer.step()\n\n if epoch%10==0: \n predictions = y_pred.max(dim=1)[1] # class\n accuracy = accuracy_fn(y_pred=predictions, y_true=y_train)\n print (f\"Epoch: {epoch} | loss: {loss:.2f}, accuracy: {accuracy:.1f}\")",
"Epoch: 0 | loss: 1.11, accuracy: 24.3\nEpoch: 10 | loss: 0.67, accuracy: 55.4\nEpoch: 20 | loss: 0.51, accuracy: 70.6\nEpoch: 30 | loss: 0.39, accuracy: 88.5\nEpoch: 40 | loss: 0.29, accuracy: 90.3\nEpoch: 50 | loss: 0.22, accuracy: 93.4\nEpoch: 60 | loss: 0.18, accuracy: 94.7\nEpoch: 70 | loss: 0.15, accuracy: 95.9\nEpoch: 80 | loss: 0.12, accuracy: 97.3\nEpoch: 90 | loss: 0.11, accuracy: 97.7\n"
]
],
[
[
"## Evaluation",
"_____no_output_____"
]
],
[
[
"# Predictions\ny_prob = model(X_test, apply_softmax=True)\ny_pred = y_prob.max(dim=1)[1]",
"_____no_output_____"
],
[
"# Performance report\nperformance = get_performance(y_true=y_test, y_pred=y_pred, classes=classes)\nprint (json.dumps(performance, indent=2))",
"{\n \"overall\": {\n \"precision\": 0.9913419913419913,\n \"recall\": 0.9911111111111112,\n \"f1\": 0.9911095305832148,\n \"num_samples\": 225.0\n },\n \"class\": {\n \"c1\": {\n \"precision\": 1.0,\n \"recall\": 0.9733333333333334,\n \"f1\": 0.9864864864864865,\n \"num_samples\": 75.0\n },\n \"c2\": {\n \"precision\": 1.0,\n \"recall\": 1.0,\n \"f1\": 1.0,\n \"num_samples\": 75.0\n },\n \"c3\": {\n \"precision\": 0.974025974025974,\n \"recall\": 1.0,\n \"f1\": 0.9868421052631579,\n \"num_samples\": 75.0\n }\n }\n}\n"
],
[
"# Visualize the decision boundary\nplt.figure(figsize=(12,5))\nplt.subplot(1, 2, 1)\nplt.title(\"Train\")\nplot_multiclass_decision_boundary(model=model, X=X_train, y=y_train)\nplt.subplot(1, 2, 2)\nplt.title(\"Test\")\nplot_multiclass_decision_boundary(model=model, X=X_test, y=y_test)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Inference",
"_____no_output_____"
]
],
[
[
"# Inputs for inference\nX_infer = pd.DataFrame([{'X1': 0.1, 'X2': 0.1}])\nX_infer.head()",
"_____no_output_____"
],
[
"# Standardize\nX_infer = X_scaler.transform(X_infer)\nprint (X_infer)",
"[[0.29906749 0.30544029]]\n"
],
[
"# Predict\ny_infer = model(torch.Tensor(X_infer), apply_softmax=True)\nprob, _class = y_infer.max(dim=1)\nlabel = label_encoder.inverse_transform(_class.detach().numpy())[0]\nprint (f\"The probability that you have {label} is {prob.detach().numpy()[0]*100.0:.0f}%\")",
"The probability that you have c1 is 92%\n"
]
],
[
[
"# Initializing weights",
"_____no_output_____"
],
[
"So far we have been initializing weights with small random values and this isn't optimal for convergence during training. The objective is to have weights that are able to produce outputs that follow a similar distribution across all neurons. We can do this by enforcing weights to have unit variance prior the affine and non-linear operations.",
"_____no_output_____"
],
[
"> A popular method is to apply [xavier initialization](http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization), which essentially initializes the weights to allow the signal from the data to reach deep into the network. You may be wondering why we don't do this for every forward pass and that's a great question. We'll look at more advanced strategies that help with optimization like batch/layer normalization, etc. in future lessons. Meanwhile you can check out other initializers [here](https://pytorch.org/docs/stable/nn.init.html).",
"_____no_output_____"
]
],
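[
[
"As a quick added sanity check of the unit-variance claim above (not from the original lesson), we can compare the scale of a layer's pre-activations under the small random initialization used so far versus Xavier initialization; the layer sizes here are arbitrary.\n\n```python\nimport torch\nfrom torch.nn import init\n\nx = torch.randn(1000, 512)  # standardized inputs\n\n# Small random init (as used earlier) shrinks the pre-activations\nw_small = 0.01 * torch.randn(512, 512)\n\n# Xavier init keeps the pre-activation scale close to 1\nw_xavier = torch.empty(512, 512)\ninit.xavier_normal_(w_xavier)\n\nprint((x @ w_small).std())   # roughly 0.2\nprint((x @ w_xavier).std())  # roughly 1.0\n```",
"_____no_output_____"
]
],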
[
[
"from torch.nn import init",
"_____no_output_____"
],
[
"class MLP(nn.Module):\n def __init__(self, input_dim, hidden_dim, num_classes):\n super(MLP, self).__init__()\n self.fc1 = nn.Linear(input_dim, hidden_dim)\n self.fc2 = nn.Linear(hidden_dim, num_classes)\n\n def init_weights(self):\n init.xavier_normal(self.fc1.weight, gain=init.calculate_gain('relu')) \n \n def forward(self, x_in, apply_softmax=False):\n z = F.relu(self.fc1(x_in)) # ReLU activaton function added!\n y_pred = self.fc2(z)\n if apply_softmax:\n y_pred = F.softmax(y_pred, dim=1) \n return y_pred",
"_____no_output_____"
]
],
[
[
"# Dropout",
"_____no_output_____"
],
[
"A great technique to have our models generalize (perform well on test data) is to increase the size of your data but this isn't always an option. Fortuntely, there are methods like regularization and dropout that can help create a more robust model. \n\nDropout is a technique (used only during training) that allows us to zero the outputs of neurons. We do this for `dropout_p`% of the total neurons in each layer and it changes every batch. Dropout prevents units from co-adapting too much to the data and acts as a sampling strategy since we drop a different set of neurons each time.\n\n<div align=\"left\">\n<img src=\"https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/basics/neural-networks/dropout.png\" width=\"350\">\n</div>\n\n* [Dropout: A Simple Way to Prevent Neural Networks from\nOverfitting](http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf)",
"_____no_output_____"
]
],
[
[
"DROPOUT_P = 0.1 # % of the neurons that are dropped each pass",
"_____no_output_____"
],
[
"class MLP(nn.Module):\n def __init__(self, input_dim, hidden_dim, dropout_p, num_classes):\n super(MLP, self).__init__()\n self.fc1 = nn.Linear(input_dim, hidden_dim)\n self.dropout = nn.Dropout(dropout_p) # dropout\n self.fc2 = nn.Linear(hidden_dim, num_classes)\n\n def init_weights(self):\n init.xavier_normal(self.fc1.weight, gain=init.calculate_gain('relu')) \n \n def forward(self, x_in, apply_softmax=False):\n z = F.relu(self.fc1(x_in)) \n z = self.dropout(z) # dropout\n y_pred = self.fc2(z)\n if apply_softmax:\n y_pred = F.softmax(y_pred, dim=1) \n return y_pred",
"_____no_output_____"
],
[
"# Initialize model\nmodel = MLP(input_dim=INPUT_DIM, hidden_dim=HIDDEN_DIM, \n dropout_p=DROPOUT_P, num_classes=NUM_CLASSES)\nprint (model.named_parameters)",
"<bound method Module.named_parameters of MLP(\n (fc1): Linear(in_features=2, out_features=100, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n (fc2): Linear(in_features=100, out_features=3, bias=True)\n)>\n"
]
],
[
[
"# Overfitting",
"_____no_output_____"
],
[
"Though neural networks are great at capturing non-linear relationships they are highly susceptible to overfitting to the training data and failing to generalize on test data. Just take a look at the example below where we generate completely random data and are able to fit a model with [$2*N*C + D$](https://arxiv.org/abs/1611.03530) hidden units. The training performance is good (~70%) but the overfitting leads to very poor test performance. We'll be covering strategies to tackle overfitting in future lessons.",
"_____no_output_____"
]
],
[
[
"NUM_EPOCHS = 500\nNUM_SAMPLES_PER_CLASS = 50\nLEARNING_RATE = 1e-1\nHIDDEN_DIM = 2 * NUM_SAMPLES_PER_CLASS * NUM_CLASSES + INPUT_DIM # 2*N*C + D",
"_____no_output_____"
],
[
"# Generate random data\nX = np.random.rand(NUM_SAMPLES_PER_CLASS * NUM_CLASSES, INPUT_DIM)\ny = np.array([[i]*NUM_SAMPLES_PER_CLASS for i in range(NUM_CLASSES)]).reshape(-1)\nprint (\"X: \", format(np.shape(X)))\nprint (\"y: \", format(np.shape(y)))",
"X: (150, 2)\ny: (150,)\n"
],
[
"# Create data splits\nX_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(\n X=X, y=y, train_size=TRAIN_SIZE)\nprint (f\"X_train: {X_train.shape}, y_train: {y_train.shape}\")\nprint (f\"X_val: {X_val.shape}, y_val: {y_val.shape}\")\nprint (f\"X_test: {X_test.shape}, y_test: {y_test.shape}\")\nprint (f\"Sample point: {X_train[0]} → {y_train[0]}\")",
"X_train: (105, 2), y_train: (105,)\nX_val: (22, 2), y_val: (22,)\nX_test: (23, 2), y_test: (23,)\nSample point: [0.52553355 0.33956916] → 0\n"
],
[
"# Standardize the inputs (mean=0, std=1) using training data\nX_scaler = StandardScaler().fit(X_train)\nX_train = X_scaler.transform(X_train)\nX_val = X_scaler.transform(X_val)\nX_test = X_scaler.transform(X_test)",
"_____no_output_____"
],
[
"# Convert data to tensors\nX_train = torch.Tensor(X_train)\ny_train = torch.LongTensor(y_train)\nX_val = torch.Tensor(X_val)\ny_val = torch.LongTensor(y_val)\nX_test = torch.Tensor(X_test)\ny_test = torch.LongTensor(y_test)",
"_____no_output_____"
],
[
"# Initialize model\nmodel = MLP(input_dim=INPUT_DIM, hidden_dim=HIDDEN_DIM, \n dropout_p=DROPOUT_P, num_classes=NUM_CLASSES)\nprint (model.named_parameters)",
"<bound method Module.named_parameters of MLP(\n (fc1): Linear(in_features=2, out_features=302, bias=True)\n (dropout): Dropout(p=0.1, inplace=False)\n (fc2): Linear(in_features=302, out_features=3, bias=True)\n)>\n"
],
[
"# Optimizer\noptimizer = Adam(model.parameters(), lr=LEARNING_RATE) ",
"_____no_output_____"
],
[
"# Training\nfor epoch in range(NUM_EPOCHS):\n # Forward pass\n y_pred = model(X_train)\n\n # Loss\n loss = loss_fn(y_pred, y_train)\n\n # Zero all gradients\n optimizer.zero_grad()\n\n # Backward pass\n loss.backward()\n\n # Update weights\n optimizer.step()\n\n if epoch%20==0: \n predictions = y_pred.max(dim=1)[1] # class\n accuracy = accuracy_fn(y_pred=predictions, y_true=y_train)\n print (f\"Epoch: {epoch} | loss: {loss:.2f}, accuracy: {accuracy:.1f}\")",
"Epoch: 0 | loss: 1.15, accuracy: 37.1\nEpoch: 20 | loss: 1.04, accuracy: 47.6\nEpoch: 40 | loss: 0.98, accuracy: 51.4\nEpoch: 60 | loss: 0.90, accuracy: 57.1\nEpoch: 80 | loss: 0.87, accuracy: 59.0\nEpoch: 100 | loss: 0.88, accuracy: 58.1\nEpoch: 120 | loss: 0.84, accuracy: 64.8\nEpoch: 140 | loss: 0.86, accuracy: 61.0\nEpoch: 160 | loss: 0.81, accuracy: 64.8\nEpoch: 180 | loss: 0.89, accuracy: 59.0\nEpoch: 200 | loss: 0.91, accuracy: 60.0\nEpoch: 220 | loss: 0.82, accuracy: 63.8\nEpoch: 240 | loss: 0.86, accuracy: 59.0\nEpoch: 260 | loss: 0.77, accuracy: 66.7\nEpoch: 280 | loss: 0.82, accuracy: 67.6\nEpoch: 300 | loss: 0.88, accuracy: 57.1\nEpoch: 320 | loss: 0.81, accuracy: 61.9\nEpoch: 340 | loss: 0.79, accuracy: 63.8\nEpoch: 360 | loss: 0.80, accuracy: 61.0\nEpoch: 380 | loss: 0.86, accuracy: 64.8\nEpoch: 400 | loss: 0.77, accuracy: 64.8\nEpoch: 420 | loss: 0.79, accuracy: 64.8\nEpoch: 440 | loss: 0.81, accuracy: 65.7\nEpoch: 460 | loss: 0.77, accuracy: 70.5\nEpoch: 480 | loss: 0.80, accuracy: 67.6\n"
],
[
"# Predictions\ny_prob = model(X_test, apply_softmax=True)\ny_pred = y_prob.max(dim=1)[1]",
"_____no_output_____"
],
[
"# Performance report\nperformance = get_performance(y_true=y_test, y_pred=y_pred, classes=classes)\nprint (json.dumps(performance, indent=2))",
"{\n \"overall\": {\n \"precision\": 0.17857142857142858,\n \"recall\": 0.16666666666666666,\n \"f1\": 0.1722222222222222,\n \"num_samples\": 23.0\n },\n \"class\": {\n \"c1\": {\n \"precision\": 0.0,\n \"recall\": 0.0,\n \"f1\": 0.0,\n \"num_samples\": 7.0\n },\n \"c2\": {\n \"precision\": 0.2857142857142857,\n \"recall\": 0.25,\n \"f1\": 0.26666666666666666,\n \"num_samples\": 8.0\n },\n \"c3\": {\n \"precision\": 0.25,\n \"recall\": 0.25,\n \"f1\": 0.25,\n \"num_samples\": 8.0\n }\n }\n}\n"
],
[
"# Visualize the decision boundary\nplt.figure(figsize=(12,5))\nplt.subplot(1, 2, 1)\nplt.title(\"Train\")\nplot_multiclass_decision_boundary(model=model, X=X_train, y=y_train)\nplt.subplot(1, 2, 2)\nplt.title(\"Test\")\nplot_multiclass_decision_boundary(model=model, X=X_test, y=y_test)\nplt.show()",
"_____no_output_____"
]
],
[
[
"It's important that we experiment, starting with simple models that underfit (high bias) and improve it towards a good fit. Starting with simple models (linear/logistic regression) let's us catch errors without the added complexity of more sophisticated models (neural networks). ",
"_____no_output_____"
],
[
"<div align=\"left\">\n<img src=\"https://raw.githubusercontent.com/GokuMohandas/MadeWithML/main/images/basics/neural-networks/fit.png\" width=\"700\">\n</div>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0bf03c369a3496b4b6d25b7272f7098831605d3 | 8,079 | ipynb | Jupyter Notebook | _notebooks/2021-11-11-aws-s3-buckets.ipynb | pockerman/qubit_opus | 6824a86b302377616b89f92fe7716e96c6abaa12 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-11-11-aws-s3-buckets.ipynb | pockerman/qubit_opus | 6824a86b302377616b89f92fe7716e96c6abaa12 | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-11-11-aws-s3-buckets.ipynb | pockerman/qubit_opus | 6824a86b302377616b89f92fe7716e96c6abaa12 | [
"Apache-2.0"
] | null | null | null | 27.762887 | 396 | 0.57371 | [
[
[
"# AWS. S3 Buckets\n\n> 'Working with AWS S3 buckets'\n\n\n- toc:true\n- branch: master\n- badges: false\n- comments: false\n- author: Alexandros Giavaras\n- categories: [aws, s3-buckets, cloud-computing, data-storage, data-engineering, data-storage, boto3]",
"_____no_output_____"
],
[
"## Overview",
"_____no_output_____"
],
[
"In this notebook, we are going to have a brief view on AWS S3 storage. Concretely, we are going to discuss the following: ",
"_____no_output_____"
],
[
"- How to create an AWS S3 Bucket\n- How to upload and download items \n- How to do multi-part file transfer\n- How to generate pre-signed URLS.\n- How to set up bucket policies",
"_____no_output_____"
],
[
"Moreover, we will work with AWS S3 buckets using the Boto3 Python package.",
"_____no_output_____"
],
[
"## S3 Buckets",
"_____no_output_____"
],
[
"AWS S3 is an object storage system. S3 stands for Simple Storage Service. By design, S3 has 11 9's of durability and stores data for millions of applications. S3 files are referred to as objects. You can find more information about S3 <a href=\"https://aws.amazon.com/s3/?p=ft&c=st&z=3\">here</a> and <a href=\"https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html\">here</a>.",
"_____no_output_____"
],
[
"We will use the Boto3 Python package to interact with AWS S3; that is to create a bucket, upload and download files in the created bucket.",
"_____no_output_____"
]
],
[
[
"import logging\nimport boto3\nfrom botocore.exceptions import ClientError",
"_____no_output_____"
]
],
[
[
"## Create S3 Bucket",
"_____no_output_____"
]
],
[
[
"# credentials to be used\nAWS_ACCESS_KEY_ID = 'Use your own credentials'\nAWS_SECRET_ACCESS_KEY = 'Use your own credentials'",
"_____no_output_____"
],
[
"# create a client for the resource we will use\ns3_client = boto3.client('s3', region_name='us-west-2',\n aws_access_key_id=AWS_ACCESS_KEY_ID, \n aws_secret_access_key=AWS_SECRET_ACCESS_KEY)",
"_____no_output_____"
],
[
"location = {'LocationConstraint': 'us-west-2'}\ns3_client.create_bucket(Bucket='coursera-s3-bucket',\n CreateBucketConfiguration=location)",
"_____no_output_____"
]
],
[
[
"The response of the function all above is shown below:",
"_____no_output_____"
],
[
"```\n{'ResponseMetadata': {'RequestId': '355VX5QNYSQBTSCM',\n 'HostId': '7jXN853VP175Fw/il1Zvx8UXkfRsdQRXH3VrAFOcCYZl4y2ZTF6zNPp6tXvwnpBGlmAKTCP9RFA=',\n 'HTTPStatusCode': 200,\n 'HTTPHeaders': {'x-amz-id-2': '7jXN853VP175Fw/il1Zvx8UXkfRsdQRXH3VrAFOcCYZl4y2ZTF6zNPp6tXvwnpBGlmAKTCP9RFA=',\n 'x-amz-request-id': '355VX5QNYSQBTSCM',\n 'date': 'Thu, 11 Nov 2021 11:01:14 GMT',\n 'location': 'http://coursera-s3-bucket.s3.amazonaws.com/',\n 'server': 'AmazonS3',\n 'content-length': '0'},\n 'RetryAttempts': 0},\n 'Location': 'http://coursera-s3-bucket.s3.amazonaws.com/'}\n```",
"_____no_output_____"
],
[
"## Upload and object to a bucket",
"_____no_output_____"
],
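[
"The original notebook leaves this section without code, so here is a minimal added sketch (file names, object key and threshold are placeholders) of uploading, downloading, multi-part transfer and pre-signed URL generation using the `s3_client` created above.\n\n```python\nfrom boto3.s3.transfer import TransferConfig\n\nbucket_name = 'coursera-s3-bucket'  # bucket created above\nlocal_file = 'example.txt'          # placeholder local file\nobject_key = 'example.txt'\n\n# Simple upload and download\ns3_client.upload_file(local_file, bucket_name, object_key)\ns3_client.download_file(bucket_name, object_key, 'downloaded_example.txt')\n\n# Multi-part transfer kicks in automatically above the threshold (here 8 MB)\nconfig = TransferConfig(multipart_threshold=8 * 1024 * 1024)\ns3_client.upload_file(local_file, bucket_name, object_key, Config=config)\n\n# Pre-signed URL valid for one hour\nurl = s3_client.generate_presigned_url(\n    'get_object',\n    Params={'Bucket': bucket_name, 'Key': object_key},\n    ExpiresIn=3600)\nprint(url)\n```",
"_____no_output_____"
],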
[
"## Bucket policies",
"_____no_output_____"
],
[
"### Retrieve the policies attached to a bucket",
"_____no_output_____"
]
],
[
[
"result = 3_client.get_bucket_policy(Bucket='bucket-name')",
"_____no_output_____"
]
],
[
[
"The call above fails because by default there are no policies set. A bucket's policy can be set by calling the ```put_bucket_policy``` method. Moreover, a policy is defined in the same JSON format as an IAM policy. ",
"_____no_output_____"
],
[
"The **Sid (statement ID)** is an optional identifier that you provide for the policy statement. You can assign a Sid value to each statement in a statement array.\n\nThe **Effect** element is required and specifies whether the statement results in an allow or an explicit deny. Valid values for Effect are Allow and Deny.\n\nBy default, access to resources is denied. \n\nUse the **Principal** element in a policy to specify the principal that is allowed or denied access to a resource.\n\nYou can specify any of the following principals in a policy:\n\n- AWS account and root user\n- IAM users\n- Federated users (using web identity or SAML federation)\n- IAM roles\n- Assumed-role sessions\n- AWS services\n- Anonymous users\n\n\nThe **Action** element describes the specific action or actions that will be allowed or denied. \n\nWe specify a value using a service namespace as an action prefix (iam, ec2, sqs, sns, s3, etc.) followed by the name of the action to allow or deny.\n\nThe **Resource** element specifies the object or objects that the statement covers. We specify a resource using an ARN. Amazon Resource Names (ARNs) uniquely identify AWS resources.",
"_____no_output_____"
],
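[
"Putting the elements described above together, a policy can be attached with `put_bucket_policy`; the sketch below is an added illustration with a placeholder bucket name and a public-read statement chosen purely as an example (it may be rejected if Block Public Access is enabled on the bucket).\n\n```python\nimport json\n\nbucket_name = 'coursera-s3-bucket'  # placeholder, use your own bucket\n\n# Example policy: allow anonymous read access to every object in the bucket\nbucket_policy = {\n    'Version': '2012-10-17',\n    'Statement': [{\n        'Sid': 'PublicReadGetObject',\n        'Effect': 'Allow',\n        'Principal': '*',\n        'Action': 's3:GetObject',\n        'Resource': f'arn:aws:s3:::{bucket_name}/*'\n    }]\n}\n\n# The policy must be passed as a JSON string\ns3_client.put_bucket_policy(Bucket=bucket_name,\n                            Policy=json.dumps(bucket_policy))\n```",
"_____no_output_____"
],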
[
"## CORS Configuration",
"_____no_output_____"
]
],
[
[
"response = s3_client.get_bucket_cors(Bucket=bucket_name)\nprint(response['CORSRules'])",
"_____no_output_____"
],
[
"cors_configuration = {\n 'CORSRules':[{'AllowHeaders':['Authorization'],\n 'AllowedMethods':['GET', 'PUT'],\n 'AllowedOrigins':['*'],\n 'ExposeHeaders':['GET', 'PUT'],\n 'MaxAgeSeconds':3000}\n ]\n}\n\nresponse = s3_client.put_bucket_cors(Bucket=bucket_name, CORSConfiguration=cors_configuration)",
"_____no_output_____"
]
],
[
[
"## References",
"_____no_output_____"
],
[
"1. <a href=\"https://aws.amazon.com/s3/?p=ft&c=st&z=3\">AWS S3</a>\n2. <a href=\"https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html\">What is Amazon S3?</a>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0bf091945d6d49b34c050f69650875818a9251a | 201,635 | ipynb | Jupyter Notebook | benchmarks/gravmag_eqlayer.ipynb | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | 1 | 2019-06-27T11:32:56.000Z | 2019-06-27T11:32:56.000Z | benchmarks/gravmag_eqlayer.ipynb | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | null | null | null | benchmarks/gravmag_eqlayer.ipynb | silky/fatiando | 5041c6b29758a5e73e9d7b2b906fa5e493fd9aba | [
"BSD-3-Clause"
] | null | null | null | 240.041667 | 86,224 | 0.841451 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0bf1dc3dec59dd5cce24d997ecf21eb16e45ffb | 5,126 | ipynb | Jupyter Notebook | matplotlib/gallery_jupyter/axes_grid1/demo_axes_grid.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | matplotlib/gallery_jupyter/axes_grid1/demo_axes_grid.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | matplotlib/gallery_jupyter/axes_grid1/demo_axes_grid.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | 94.925926 | 4,077 | 0.538432 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Demo Axes Grid\n\n\nGrid of 2x2 images with single or own colorbar.\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import ImageGrid\n\n\nplt.rcParams[\"mpl_toolkits.legacy_colorbar\"] = False\n\n\ndef get_demo_image():\n import numpy as np\n from matplotlib.cbook import get_sample_data\n f = get_sample_data(\"axes_grid/bivariate_normal.npy\", asfileobj=False)\n z = np.load(f)\n # z is a numpy array of 15x15\n return z, (-3, 4, -4, 3)\n\n\ndef demo_simple_grid(fig):\n \"\"\"\n A grid of 2x2 images with 0.05 inch pad between images and only\n the lower-left axes is labeled.\n \"\"\"\n grid = ImageGrid(fig, 141, # similar to subplot(141)\n nrows_ncols=(2, 2),\n axes_pad=0.05,\n label_mode=\"1\",\n )\n Z, extent = get_demo_image()\n for ax in grid:\n ax.imshow(Z, extent=extent, interpolation=\"nearest\")\n # This only affects axes in first column and second row as share_all=False.\n grid.axes_llc.set_xticks([-2, 0, 2])\n grid.axes_llc.set_yticks([-2, 0, 2])\n\n\ndef demo_grid_with_single_cbar(fig):\n \"\"\"\n A grid of 2x2 images with a single colorbar\n \"\"\"\n grid = ImageGrid(fig, 142, # similar to subplot(142)\n nrows_ncols=(2, 2),\n axes_pad=0.0,\n share_all=True,\n label_mode=\"L\",\n cbar_location=\"top\",\n cbar_mode=\"single\",\n )\n\n Z, extent = get_demo_image()\n for ax in grid:\n im = ax.imshow(Z, extent=extent, interpolation=\"nearest\")\n grid.cbar_axes[0].colorbar(im)\n\n for cax in grid.cbar_axes:\n cax.toggle_label(False)\n\n # This affects all axes as share_all = True.\n grid.axes_llc.set_xticks([-2, 0, 2])\n grid.axes_llc.set_yticks([-2, 0, 2])\n\n\ndef demo_grid_with_each_cbar(fig):\n \"\"\"\n A grid of 2x2 images. Each image has its own colorbar.\n \"\"\"\n grid = ImageGrid(fig, 143, # similar to subplot(143)\n nrows_ncols=(2, 2),\n axes_pad=0.1,\n label_mode=\"1\",\n share_all=True,\n cbar_location=\"top\",\n cbar_mode=\"each\",\n cbar_size=\"7%\",\n cbar_pad=\"2%\",\n )\n Z, extent = get_demo_image()\n for ax, cax in zip(grid, grid.cbar_axes):\n im = ax.imshow(Z, extent=extent, interpolation=\"nearest\")\n cax.colorbar(im)\n cax.toggle_label(False)\n\n # This affects all axes because we set share_all = True.\n grid.axes_llc.set_xticks([-2, 0, 2])\n grid.axes_llc.set_yticks([-2, 0, 2])\n\n\ndef demo_grid_with_each_cbar_labelled(fig):\n \"\"\"\n A grid of 2x2 images. Each image has its own colorbar.\n \"\"\"\n grid = ImageGrid(fig, 144, # similar to subplot(144)\n nrows_ncols=(2, 2),\n axes_pad=(0.45, 0.15),\n label_mode=\"1\",\n share_all=True,\n cbar_location=\"right\",\n cbar_mode=\"each\",\n cbar_size=\"7%\",\n cbar_pad=\"2%\",\n )\n Z, extent = get_demo_image()\n\n # Use a different colorbar range every time\n limits = ((0, 1), (-2, 2), (-1.7, 1.4), (-1.5, 1))\n for ax, cax, vlim in zip(grid, grid.cbar_axes, limits):\n im = ax.imshow(Z, extent=extent, interpolation=\"nearest\",\n vmin=vlim[0], vmax=vlim[1])\n cb = cax.colorbar(im)\n cb.set_ticks((vlim[0], vlim[1]))\n\n # This affects all axes because we set share_all = True.\n grid.axes_llc.set_xticks([-2, 0, 2])\n grid.axes_llc.set_yticks([-2, 0, 2])\n\n\nfig = plt.figure(figsize=(10.5, 2.5))\nfig.subplots_adjust(left=0.05, right=0.95)\n\ndemo_simple_grid(fig)\ndemo_grid_with_single_cbar(fig)\ndemo_grid_with_each_cbar(fig)\ndemo_grid_with_each_cbar_labelled(fig)\n\nplt.show()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bf2273f348618757c525a4ecc2e2c974d60c97 | 551,753 | ipynb | Jupyter Notebook | dataset_integration_3.ipynb | isacco-v/hit-song-prediction | 82fe7f9dfd46d4f55e6b89aa55d897b98ca8f7c9 | [
"MIT"
] | null | null | null | dataset_integration_3.ipynb | isacco-v/hit-song-prediction | 82fe7f9dfd46d4f55e6b89aa55d897b98ca8f7c9 | [
"MIT"
] | null | null | null | dataset_integration_3.ipynb | isacco-v/hit-song-prediction | 82fe7f9dfd46d4f55e6b89aa55d897b98ca8f7c9 | [
"MIT"
] | null | null | null | 551,753 | 551,753 | 0.751144 | [
[
[
"#Importo librerie",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport concurrent.futures\nimport time\nfrom requests.exceptions import ReadTimeout",
"_____no_output_____"
],
[
"!pip install -U -q PyDrive",
"_____no_output_____"
],
[
"from pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nfrom oauth2client.client import GoogleCredentials",
"_____no_output_____"
]
],
[
[
"#Autenticazione Spotify API",
"_____no_output_____"
]
],
[
[
"!pip uninstall spotipy",
"_____no_output_____"
],
[
"!pip install spotipy",
"Collecting spotipy\n Downloading https://files.pythonhosted.org/packages/fb/69/21f1ccc881438bdfa1056ea131b6ac2b1cfbe656cf3676b6167d3cbc4d69/spotipy-2.17.1-py3-none-any.whl\nCollecting requests>=2.25.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/29/c1/24814557f1d22c56d50280771a17307e6bf87b70727d975fd6b2ce6b014a/requests-2.25.1-py2.py3-none-any.whl (61kB)\n\u001b[K |████████████████████████████████| 61kB 3.5MB/s \n\u001b[?25hRequirement already satisfied: six>=1.15.0 in /usr/local/lib/python3.7/dist-packages (from spotipy) (1.15.0)\nCollecting urllib3>=1.26.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/09/c6/d3e3abe5b4f4f16cf0dfc9240ab7ce10c2baa0e268989a4e3ec19e90c84e/urllib3-1.26.4-py2.py3-none-any.whl (153kB)\n\u001b[K |████████████████████████████████| 153kB 11.3MB/s \n\u001b[?25hRequirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.25.0->spotipy) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.25.0->spotipy) (2020.12.5)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.25.0->spotipy) (2.10)\n\u001b[31mERROR: google-colab 1.0.0 has requirement requests~=2.23.0, but you'll have requests 2.25.1 which is incompatible.\u001b[0m\n\u001b[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.\u001b[0m\nInstalling collected packages: urllib3, requests, spotipy\n Found existing installation: urllib3 1.24.3\n Uninstalling urllib3-1.24.3:\n Successfully uninstalled urllib3-1.24.3\n Found existing installation: requests 2.23.0\n Uninstalling requests-2.23.0:\n Successfully uninstalled requests-2.23.0\nSuccessfully installed requests-2.25.1 spotipy-2.17.1 urllib3-1.26.4\n"
],
[
"# autenticazione Spotify API con spotipy\nimport spotipy\nfrom spotipy.oauth2 import SpotifyClientCredentials\nauth_manager = SpotifyClientCredentials(client_id='caf57b996b464996bff50ab59186f265', client_secret='0bfcdbff8015426cae855b56b692f69b')\nsp = spotipy.Spotify(auth_manager=auth_manager, requests_timeout=10)",
"_____no_output_____"
]
],
[
[
"#Importo dataset",
"_____no_output_____"
]
],
[
[
"# autenticazione google drive\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)",
"_____no_output_____"
],
[
"# importo quarta parte del dataset --> 7087 datapoints\n\ndrive.CreateFile({'id':'1YAAvPUaPeBIddkuVzWgw7fhPPcT2uJTY'}).GetContentFile('billboard_dataset_unique.csv')\ndf_billboard = pd.read_csv(\"billboard_dataset_unique.csv\").drop('Unnamed: 0',axis=1).iloc[21261:]\n\ndrive.CreateFile({'id':'1eOqgPk_izGXKIT5y6KfqPkmKWqBonVc0'}).GetContentFile('dataset2_X_billboard.csv')\ndf_songs = pd.read_csv(\"dataset2_X_billboard.csv\").drop('Unnamed: 0',axis=1)\n\n\n# df_billboard.iloc[:7087]\n\n# df_billboard.iloc[7087:14174]\n\n# df_billboard.iloc[14174:21261]\n\n# df_billboard.iloc[21261:]",
"_____no_output_____"
],
[
"df_billboard.head()",
"_____no_output_____"
],
[
"df_billboard.shape",
"_____no_output_____"
]
],
[
[
"#Definizione funzioni",
"_____no_output_____"
]
],
[
[
"def print_exec_time(start):\n print(\"Esecuzione completata in %.4f secondi\" % (time.perf_counter()-start))",
"_____no_output_____"
],
[
"# funzione che effettua ricerca con Spotify API considerando i casi in cui nel campo 'artist' siano presenti più artisti (featuring)\n\ndef search_fix(artist, title):\n artist_separators = ['%%%', ' Featuring', ' featuring', ' feat.', ' Feat.', ' feat', ' Feat', ' &', ' x', ' X', ' with', ' With', ', ', '/', ' duet', ' Duet', '+', ' and']\n title_separators = ['%%%', ' (']\n title_fix = [\"%%%\", \"'s\", \"'\"]\n\n id = None\n\n for x in artist_separators:\n for y in title_separators:\n for z in title_fix:\n try:\n id = sp.search(q='artist:'+artist.split(x)[0]+' track:'+title.split(y)[0].replace(z, ''), type='track', limit=1)['tracks']['items'][0]['id']\n except IndexError:\n pass\n if(id != None):\n break\n if(id != None):\n break\n if(id != None):\n break\n\n return id",
"_____no_output_____"
],
[
"# funzione che prendendo una singola riga del Billboard dataset restituisce una lista con id, artista e titolo\n# --> in caso di errore l'id viene impostato a None\n\ndef get_id(row):\n\n artist = row[1]\n title = row[0]\n\n print(\"fetching id for %s by %s ...\" % (title, artist))\n\n try:\n try:\n id = sp.search(q='artist:'+artist+' track:'+title, type='track', limit=1)['tracks']['items'][0]['id']\n except IndexError:\n id = search_fix(artist, title)\n except ReadTimeout:\n id = None\n \n if(id == None):\n print('--> [error] %s by %s' % (title, artist))\n\n return [id, artist, title]",
"_____no_output_____"
],
[
"# funzione che, preso un id, restituisce un array con le features (audio e non) della traccia corrispondente\n\ndef get_features(id):\n print(\"fetching features for id: %s\" % id)\n\n # audio features\n danceability = []\n energy = []\n key = []\n loudness =[]\n mode = []\n speechiness = []\n acousticness = []\n instrumentalness = []\n liveness = []\n valence = []\n tempo = []\n duration_ms = []\n\n audio_features_array = [danceability, energy, key, loudness, mode, speechiness,\n acousticness, instrumentalness, liveness, valence, tempo, duration_ms]\n\n # altre features\n release_date = []\n explicit = []\n\n release_date.append(sp.track(id)['album']['release_date'])\n explicit.append(sp.track(id)['explicit'])\n\n audio_features = sp.audio_features(id)[0]\n\n try:\n # rimuovo campi non necessari\n to_remove = ['type', 'id', 'uri', 'track_href', 'analysis_url', 'time_signature']\n for rmv in to_remove:\n audio_features.pop(rmv)\n\n for i, feature in enumerate(audio_features.keys()):\n audio_features_array[i].append(audio_features[feature])\n\n except AttributeError:\n print(\"--> [error] id = %s\" % id)\n\n for i in range(12):\n audio_features_array[i].append(None)\n\n\n audio_features_array.append(release_date)\n audio_features_array.append(explicit) \n\n return audio_features_array",
"_____no_output_____"
]
],
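[
[
"A small added illustration (an assumption about intended usage, not code from the original notebook) of how the helpers defined above could be chained for a single Billboard row.\n\n```python\n# Hypothetical usage on a single row: get_id expects [title, artist]\nrow = df_billboard.iloc[0].tolist()\ntrack_id, artist, title = get_id(row)\n\nif track_id is not None:\n    # get_features returns a list of single-element lists, one per feature\n    features = get_features(track_id)\n    print(title, artist, [f[0] for f in features])\n```",
"_____no_output_____"
]
],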
[
[
"#Integrazione dataset",
"_____no_output_____"
],
[
"##Recupero gli id del Billboard dataset",
"_____no_output_____"
]
],
[
[
"time_0 = time.perf_counter()\n\nwith concurrent.futures.ProcessPoolExecutor() as executor:\n results = executor.map(get_id, df_billboard.values.tolist())\n \n output = []\n for result in results:\n output.append(result)\n\nprint_exec_time(time_0)",
"\u001b[1;30;43mOutput streaming troncato alle ultime 5000 righe.\u001b[0m\nfetching id for Keep On Running by The Spencer Davis Group ...\nfetching id for (I'm Just A) Fool For You by Gene Chandler ...\nfetching id for The Boogaloo Party by The Flamingos ...\n--> [error] Uptight (Everything's Alright) by The Jazz Crusaders\nfetching id for Baby I Need You by The Manhattans ...\nfetching id for Sharing You by Mitty Collier ...\nfetching id for I Spy (For The FBI) by Jamo Thomas & His Party Brothers Orchestra ...\n--> [error] The Boogaloo Party by The Flamingos\nfetching id for When Liking Turns To Loving by Ronnie Dove ...\nfetching id for Don't Mess With Bill by The Marvelettes ...\nfetching id for What Now My Love by Sonny & Cher ...\nfetching id for At The Scene by The Dave Clark Five ...\nfetching id for Uptight (Everything's Alright) by Stevie Wonder ...\nfetching id for My Love by Petula Clark ...\nfetching id for Crying Time by Ray Charles ...\n--> [error] I Spy (For The FBI) by Jamo Thomas & His Party Brothers Orchestra\nfetching id for Up And Down by The McCoys ...\nfetching id for In My Room (El Amor) by Verdelle Smith ...\nfetching id for What Goes On by The Beatles ...\nfetching id for Moulty by The Barbarians ...\n--> [error] Crying Time by Ray Charles\nfetching id for Big Time by Lou Christie ...\nfetching id for Call Me by Chris Montez ...\nfetching id for I See The Light by The Five Americans ...\nfetching id for Barbara Ann by The Beach Boys ...\nfetching id for Zorba The Greek by Herb Alpert & The Tijuana Brass ...\nfetching id for Just Like Me by Paul Revere & The Raiders Featuring Mark Lindsay ...\nfetching id for Going To A Go-Go by The Miracles ...\nfetching id for Breakin' Up Is Breakin' My Heart by Roy Orbison ...\nfetching id for Long Live Our Love by The Shangri-Las ...\nfetching id for Andrea by The Sunrays ...\n--> [error] Moulty by The Barbarians\nfetching id for Love Is All We Need by Mel Carter ...\nfetching id for Batman by Jan & Dean ...\nfetching id for It Won't Be Wrong by The Byrds ...\nfetching id for Waitin' In Your Welfare Line by Buck Owens ...\nfetching id for Hide & Seek by The Sheep ...\n--> [error] Andrea by The Sunrays\nfetching id for Promise Her Anything by Tom Jones ...\nfetching id for Smokey Joe's La La by Googie Rene Combo ...\nfetching id for Take Me For What I'm Worth by The Searchers ...\nfetching id for I Confess by New Colony Six ...\nfetching id for This Golden Ring by The Fortunes ...\nfetching id for Shake Hands (And Come Out Crying) by The Newbeats ...\nfetching id for Superman by Dino, Desi & Billy ...\nfetching id for Put Yourself In My Place by The Elgins ...\nfetching id for My Babe by Roy Head And The Traits ...\nfetching id for No Matter What Shape (Your Stomach's In) by The T-Bones ...\nfetching id for A Well Respected Man by The Kinks ...\nfetching id for Michelle by David & Jonathan ...\nfetching id for Night Time by The Strangeloves ...\nfetching id for We Can Work It Out by The Beatles ...\nfetching id for Get Out Of My Life, Woman by Lee Dorsey ...\nfetching id for Bye Bye Blues by Bert Kaempfert And His Orchestra ...\n--> [error] Hide & Seek by The Sheep\nfetching id for A Little Bit Of Soap by The Exciters ...\n--> [error] Bye Bye Blues by Bert Kaempfert And His Orchestra\nfetching id for Georgia On My Mind by The Righteous Brothers ...\nfetching id for Red Hot by Sam The Sham and the Pharaohs ...\nfetching id for Lies by The Knickerbockers ...\nfetching id for Jenny Take A Ride! 
by Mitch Ryder And The Detroit Wheels ...\nfetching id for A Hard Day's Night by Ramsey Lewis Trio ...\nfetching id for Five O'Clock World by The Vogues ...\nfetching id for She's Just My Style by Gary Lewis And The Playboys ...\nfetching id for Like A Baby by Len Barry ...\nfetching id for (You're Gonna) Hurt Yourself by Frankie Valli ...\nfetching id for My Ship Is Comin' In by The Walker Bros. ...\n--> [error] A Little Bit Of Soap by The Exciters\nfetching id for Michelle by Bud Shank ...\nfetching id for I'll Go Crazy by James Brown And The Famous Flames ...\nfetching id for Set You Free This Time by The Byrds ...\nfetching id for The Answer To My Prayer by Neil Sedaka ...\nfetching id for Where Am I Going? by Barbra Streisand ...\nfetching id for Feel It by Sam Cooke ...\nfetching id for The Men In My Little Girl's Life by Mike Douglas ...\nfetching id for The Sound Of Silence by Simon & Garfunkel ...\nfetching id for A Must To Avoid by Herman's Hermits ...\nfetching id for The Duck by Jackie Lee ...\nfetching id for Spanish Eyes by Al Martino ...\nfetching id for Day Tripper by The Beatles ...\nfetching id for As Tears Go By by The Rolling Stones ...\nfetching id for Cleo's Mood by Jr. Walker & The All Stars ...\nfetching id for Spread It On Thick by The Gentrys ...\n--> [error] My Ship Is Comin' In by The Walker Bros.\nfetching id for Snow Flake by Jim Reeves ...\nfetching id for I Ain't Gonna Eat Out My Heart Anymore by The Young Rascals ...\nfetching id for We Know We're In Love by Lesley Gore ...\nfetching id for The Loop by Johnny Lytle ...\nfetching id for Something I Want To Tell You by Johnny and The Expressions ...\nfetching id for Tijuana Taxi by Herb Alpert & The Tijuana Brass ...\nfetching id for You Didn't Have To Be So Nice by The Lovin' Spoonful ...\nfetching id for Sandy by Ronny And The Daytonas ...\nfetching id for It Was A Very Good Year by Frank Sinatra ...\nfetching id for Attack by The Toys ...\nfetching id for Tell Me Why by Elvis Presley With The Jordanaires ...\n--> [error] Spread It On Thick by The Gentrys\nfetching id for Are You There (With Another Girl) by Dionne Warwick ...\nfetching id for Second Hand Rose by Barbra Streisand ...\nfetching id for Michael by The C.O.D.'s ...\nfetching id for I'm Too Far Gone (To Turn Around) by Bobby Bland ...\nfetching id for Recovery by Fontella Bass ...\nfetching id for My Generation by The Who ...\nfetching id for Michelle by Billy Vaughn And His Orchestra ...\nfetching id for Since I Lost The One I Love by The Impressions ...\nfetching id for Don't Forget About Me by Barbara Lewis ...\nfetching id for Lost Someone by James Brown And The Famous Flames ...\nfetching id for Is It Me? by Barbara Mason ...\nfetching id for We Got The Winning Hand by Little Milton ...\nfetching id for Flowers On The Wall by The Statler Brothers ...\nfetching id for Thunderball by Tom Jones ...\nfetching id for Satin Pillows by Bobby Vinton ...\nfetching id for Look Through Any Window by The Hollies ...\nfetching id for It's Good News Week by Hedgehoppers Anonymous ...\nfetching id for Under Your Spell Again by Johnny Rivers ...\nfetching id for Hurt by Little Anthony And The Imperials ...\nfetching id for Can You Please Crawl Out Your Window? 
by Bob Dylan ...\nfetching id for If You Gotta Make A Fool Of Somebody by Maxine Brown ...\nfetching id for I Can't Believe You Love Me by Tammi Terrell ...\nfetching id for Can't You See (You're Losing Me) by Mary Wells ...\nfetching id for Ebb Tide by The Righteous Brothers ...\nfetching id for Over And Over by The Dave Clark Five ...\nfetching id for I Got You (I Feel Good) by James Brown And The Famous Flames ...\nfetching id for A Sweet Woman Like You by Joe Tex ...\nfetching id for Broomstick Cowboy by Bobby Goldsboro ...\nfetching id for Little Boy (In Grown Up Clothes) by The 4 Seasons ...\nfetching id for Rainbow '65 (Part I) by Gene Chandler ...\nfetching id for Tired Of Being Lonely by Sharpees ...\n--> [error] Michelle by Billy Vaughn And His Orchestra\nfetching id for The Pain Gets A Little Deeper by Darrow Fletcher ...\nfetching id for My Answer by Jimmy McCracklin ...\nfetching id for Rib Tip's (Part 1) by Andre Williams & His Orch. ...\nfetching id for Because I Love You by Billy Stewart ...\nfetching id for Fly Me To The Moon by Sam & Bill ...\n--> [error] Tired Of Being Lonely by Sharpees\nfetching id for It's My Life by The Animals ...\nfetching id for Fever by The McCoys ...\nfetching id for Turn! Turn! Turn! (To Everything There Is A Season) by The Byrds ...\nfetching id for Let's Hang On! by The 4 Seasons Featuring the \"Sound of Frankie Valli\" ...\nfetching id for Harlem Nocturne by The Viscounts ...\nfetching id for I've Got To Be Somebody by Billy Joe Royal ...\nfetching id for A Young Girl by Noel Harrison ...\nfetching id for Love (Makes Me Do Foolish Things) by Martha & The Vandellas ...\nfetching id for Please Don't Fight It by Dino, Desi & Billy ...\nfetching id for Hole In The Wall by The Packers ...\nfetching id for Giddyup Go by Red Sovine ...\nfetching id for Get Back by Roy Head ...\nfetching id for You Don't Know Like I Know by Sam & Dave ...\nfetching id for Goodnight My Love by Ben E. King ...\nfetching id for Spanish Harlem by King Curtis ...\nfetching id for Where The Sun Has Never Shone by Jonathan King ...\nfetching id for Make The World Go Away by Eddy Arnold ...\nfetching id for Don't Think Twice by The Wonder Who? ...\n--> [error] A Young Girl by Noel Harrison\nfetching id for England Swings by Roger Miller ...\nfetching id for One Has My Name (The Other Has My Heart) by Barry Young ...\nfetching id for Puppet On A String by Elvis Presley With The Jordanaires ...\n--> [error] Don't Think Twice by The Wonder Who?\nfetching id for I Can Never Go Home Anymore by The Shangri-Las ...\nfetching id for You've Been Cheatin' by The Impressions ...\nfetching id for The Little Girl I Once Knew by The Beach Boys ...\nfetching id for Jealous Heart by Connie Francis ...\nfetching id for Crystal Chandelier by Vic Dana ...\nfetching id for C.C. 
Rider by Bobby Powell ...\nfetching id for Love Bug by Jack Jones ...\nfetching id for Don't Look Back by The Temptations ...\nfetching id for Think Twice by Jackie Wilson And LaVern Baker ...\nfetching id for Baby Come On Home by Solomon Burke ...\nfetching id for Sunday And Me by Jay & The Americans ...\nfetching id for Taste Of Honey by Herb Alpert & The Tijuana Brass ...\nfetching id for Hang On Sloopy by Ramsey Lewis Trio ...\nfetching id for Apple Of My Eye by Roy Head And The Traits ...\n--> [error] Love Bug by Jack Jones\nfetching id for Princess In Rags by Gene Pitney ...\nfetching id for Try Me by James Brown At The Organ ...\n--> [error] Apple Of My Eye by Roy Head And The Traits\nfetching id for Don't Fight It by Wilson Pickett ...\nfetching id for Buckaroo by Buck Owens and The Buckaroos ...\nfetching id for All Or Nothing by Patti LaBelle And The Blue Belles ...\n--> [error] Try Me by James Brown At The Organ\nfetching id for Seesaw by Don Covay & The Goodtimers ...\n--> [error] All Or Nothing by Patti LaBelle And The Blue Belles\nfetching id for A Time To Love-A Time To Cry (Petite Fleur) by Lou Johnson ...\n--> [error] Seesaw by Don Covay & The Goodtimers\nfetching id for Follow Your Heart by The Manhattans ...\nfetching id for Black Nights by Lowell Fulsom ...\nfetching id for I Hear A Symphony by The Supremes ...\nfetching id for I Will by Dean Martin ...\nfetching id for I'm A Man by The Yardbirds ...\nfetching id for Let's Get Together by We Five ...\nfetching id for 1-2-3 by Len Barry ...\nfetching id for Just One More Day by Otis Redding ...\nfetching id for Yesterday Man by Chris Andrews ...\nfetching id for Blue River by Elvis Presley ...\nfetching id for Look In My Eyes by The Three Degrees ...\nfetching id for Mountain Of Love by Billy Stewart ...\nfetching id for Something About You by Four Tops ...\n--> [error] A Time To Love-A Time To Cry (Petite Fleur) by Lou Johnson\nfetching id for Get Off Of My Cloud by The Rolling Stones ...\nfetching id for Run, Baby Run (Back Into My Arms) by The Newbeats ...\nfetching id for Rescue Me by Fontella Bass ...\nfetching id for Ain't That Peculiar by Marvin Gaye ...\nfetching id for Here It Comes Again by The Fortunes ...\nfetching id for Kiss Away by Ronnie Dove ...\nfetching id for Mother Nature, Father Time by Brook Benton ...\nfetching id for Our World by Johnny Tillotson ...\nfetching id for Everybody Do The Sloopy by Johnny Thunder ...\nfetching id for Go Away From My World by Marianne Faithfull ...\nfetching id for Good Time Music by The Beau Brummels ...\nfetching id for On A Clear Day You Can See Forever by Johnny Mathis ...\nfetching id for Love Theme From \"The Sandpiper\" (The Shadow Of Your Smile) by Tony Bennett ...\nfetching id for You've Got To Hide Your Love Away by The Silkie ...\nfetching id for Make It Easy On Yourself by The Walker Bros. 
...\n--> [error] You've Got To Hide Your Love Away by The Silkie\nfetching id for Mystic Eyes by Them ...\nfetching id for May The Bird Of Paradise Fly Up Your Nose by \"Little\" Jimmy Dickens ...\nfetching id for A Lover's Concerto by The Toys ...\n--> [error] Make It Easy On Yourself by The Walker Bros.\nfetching id for Crawling Back by Roy Orbison ...\nfetching id for The Revolution Kind by Sonny ...\nfetching id for I Really Love You by Dee Dee Sharp ...\nfetching id for I Won't Love You Anymore (Sorry) by Lesley Gore ...\nfetching id for I'm Satisfied by The San Remo Golden Strings ...\nfetching id for Quiet Nights Of Quiet Stars by Andy Williams ...\nfetching id for Back Street by Edwin Starr ...\nfetching id for Do I Make Myself Clear by Etta James & Sugar Pie DeSanto ...\n--> [error] I'm Satisfied by The San Remo Golden Strings\nfetching id for The Drinking Man's Diet by Allan Sherman ...\nfetching id for My Baby by The Temptations ...\nfetching id for Let Me Be by The Turtles ...\nfetching id for My Girl Has Gone by The Miracles ...\n--> [error] Do I Make Myself Clear by Etta James & Sugar Pie DeSanto\nfetching id for Just A Little Bit by Roy Head ...\nfetching id for (All Of A Sudden) My Heart Sings by Mel Carter ...\nfetching id for Road Runner by The Gants ...\nfetching id for Sinner Man by Trini Lopez ...\nfetching id for I Want To Meet Him by The Royalettes ...\nfetching id for Stand By Me by Earl Grant ...\nfetching id for Run To My Lovin' Arms by Lenny Welch ...\n--> [error] Run To My Lovin' Arms by Lenny Welch\nfetching id for Stay Away From My Baby by Ted Taylor ...\n--> [error] I Want To Meet Him by The Royalettes\nfetching id for Keep On Dancing by The Gentrys ...\nfetching id for You're The One by The Vogues ...\nfetching id for Everyone's Gone To The Moon by Jonathan King ...\nfetching id for Everybody Loves A Clown by Gary Lewis And The Playboys ...\nfetching id for Yesterday by The Beatles ...\nfetching id for Ring Dang Doo by Sam The Sham and the Pharaohs ...\nfetching id for Rusty Bells by Brenda Lee ...\nfetching id for He Touched Me by Barbra Streisand ...\nfetching id for Dance With Me by The Mojo Men ...\nfetching id for Misty by The Vibrations ...\nfetching id for Let's Move & Groove (Together) by Johnny Nash ...\nfetching id for She's With Her Other Love by Leon Hayward ...\n--> [error] Rusty Bells by Brenda Lee\nfetching id for Only Love (Can Save Me Now) by Solomon Burke ...\nfetching id for But You're Mine by Sonny & Cher ...\nfetching id for I Knew You When by Billy Joe Royal ...\nfetching id for Make Me Your Baby by Barbara Lewis ...\nfetching id for I Found A Girl by Jan & Dean ...\nfetching id for Where Do You Go by Cher ...\n--> [error] She's With Her Other Love by Leon Hayward\nfetching id for Round Every Corner by Petula Clark ...\nfetching id for Positively 4th Street by Bob Dylan ...\nfetching id for Cleo's Back by Jr. 
Walker & The All Stars ...\nfetching id for Where Have All The Flowers Gone by Johnny Rivers ...\nfetching id for Don't Talk To Strangers by The Beau Brummels ...\nfetching id for Child Of Our Times by Barry McGuire ...\nfetching id for Honky Tonk '65 by Lonnie Mack ...\nfetching id for Don't Pity Me by Peter And Gordon ...\nfetching id for Pied Piper by The Changin' Times ...\nfetching id for Let The Good Times Roll by Roy Orbison ...\nfetching id for I Don't Know What You've Got But It's Got Me - Part I by Little Richard ...\n--> [error] Pied Piper by The Changin' Times\nfetching id for For You by The Spellbinders ...\nfetching id for Don't Have To Shop Around by The Mad Lads ...\nfetching id for Say Something Funny by Patty Duke ...\nfetching id for Chapel In The Moonlight by The Bachelors ...\nfetching id for Just A Little Bit Better by Herman's Hermits ...\n--> [error] I Don't Know What You've Got But It's Got Me - Part I by Little Richard\nfetching id for I Want To (Do Everything For You) by Joe Tex ...\nfetching id for Take Me In Your Arms (Rock Me A Little While) by Kim Weston ...\nfetching id for Looking With My Eyes by Dionne Warwick ...\nfetching id for Forgive Me by Al Martino ...\n--> [error] Forgive Me by Al Martino\nfetching id for Roses And Rainbows by Danny Hutton ...\n--> [error] Looking With My Eyes by Dionne Warwick\nfetching id for The Letter by Sonny & Cher ...\nfetching id for If You Don't (Love Me, Tell Me So) by Barbara Mason ...\nfetching id for I'm So Thankful by The Ikettes ...\nfetching id for Try To Remember by The Brothers Four ...\nfetching id for Treat Her Right by Roy Head And The Traits ...\nfetching id for Hang On Sloopy by The McCoys ...\nfetching id for Liar, Liar by The Castaways ...\nfetching id for Not The Lovin' Kind by Dino, Desi & Billy ...\nfetching id for Respect by Otis Redding ...\nfetching id for The \"In\" Crowd by Ramsey Lewis Trio ...\nfetching id for I Miss You So by Little Anthony And The Imperials ...\nfetching id for Do You Believe In Magic by The Lovin' Spoonful ...\nfetching id for Hungry For Love by San Remo Golden Strings ...\nfetching id for If You've Got A Heart by Bobby Goldsboro ...\nfetching id for Steppin' Out by Paul Revere & The Raiders ...\nfetching id for I Still Love You by The Vejtables ...\nfetching id for Mohair Sam by Charlie Rich ...\nfetching id for Baby Don't Go by Sonny & Cher ...\nfetching id for Some Enchanted Evening by Jay & The Americans ...\nfetching id for Cara-Lin by The Strangeloves ...\nfetching id for I'm Yours by Elvis Presley ...\nfetching id for The Universal Soldier by Glen Campbell ...\nfetching id for Act Naturally by The Beatles ...\nfetching id for Universal Soldier by Donovan ...\nfetching id for I Live For The Sun by The Sunrays ...\n--> [error] I'm So Thankful by The Ikettes\nfetching id for Secretly by The Lettermen ...\nfetching id for Remember When by Wayne Newton ...\nfetching id for A Lifetime Of Loneliness by Jackie DeShannon ...\nfetching id for Just Yesterday by Jack Jones ...\n--> [error] I Live For The Sun by The Sunrays\nfetching id for Inky Dinky Spider (The Spider Song) by The Kids Next Door ...\n--> [error] Just Yesterday by Jack Jones\nfetching id for So Long Babe by Nancy Sinatra ...\nfetching id for I Have Dreamed by Chad & Jeremy ...\nfetching id for You Were On My Mind by We Five ...\nfetching id for Help! 
by The Beatles ...\nfetching id for Eve Of Destruction by Barry McGuire ...\nfetching id for You've Got Your Troubles by The Fortunes ...\nfetching id for Catch Us If You Can by The Dave Clark Five ...\nfetching id for My Town, My Guy And Me by Lesley Gore ...\nfetching id for The Dawn Of Correction by The Spokesmen ...\nfetching id for What Color (Is A Man) by Bobby Vinton ...\n--> [error] Inky Dinky Spider (The Spider Song) by The Kids Next Door\nfetching id for Are You A Boy Or Are You A Girl by The Barbarians ...\nfetching id for Just One Kiss From You by The Impressions ...\nfetching id for Think by Jimmy McCracklin ...\nfetching id for I Believe I'll Love On by Jackie Wilson ...\n--> [error] What Color (Is A Man) by Bobby Vinton\nfetching id for Autumn Leaves - 1965 by Roger Williams ...\n--> [error] I Believe I'll Love On by Jackie Wilson\nfetching id for The Organ Grinder's Swing by Jimmy Smith With Kenny Burrell And Grady Tate ...\n--> [error] Autumn Leaves - 1965 by Roger Williams\nfetching id for Home Of The Brave by Jody Miller ...\nfetching id for We Gotta Get Out Of This Place by The Animals ...\nfetching id for Laugh At Me by Sonny ...\nfetching id for Just You by Sonny & Cher ...\nfetching id for Kansas City Star by Roger Miller ...\nfetching id for I'll Make All Your Dreams Come True by Ronnie Dove ...\nfetching id for Ride Away by Roy Orbison ...\nfetching id for There But For Fortune by Joan Baez ...\nfetching id for The World Through A Tear by Neil Sedaka ...\nfetching id for Funny Little Butterflies by Patty Duke ...\nfetching id for Early Morning Rain by Peter, Paul & Mary ...\nfetching id for I Need You So by Chuck Jackson & Maxine Brown ...\n--> [error] Funny Little Butterflies by Patty Duke\nfetching id for With These Hands by Tom Jones ...\nfetching id for Ain't It True by Andy Williams ...\nfetching id for Heartaches By The Number by Johnny Tillotson ...\nfetching id for It Ain't Me Babe by The Turtles ...\nfetching id for Heart Full Of Soul by The Yardbirds ...\nfetching id for Agent Double-O-Soul by Edwin Starr ...\nfetching id for 3rd Man Theme by Herb Alpert & The Tijuana Brass ...\nfetching id for Little Miss Sad by The Five Emprees ...\n--> [error] Ain't It True by Andy Williams\nfetching id for These Hands (Small But Mighty) by Bobby Bland ...\nfetching id for Tossing & Turning by The Ivy League ...\nfetching id for Right Now And Not Later by The Shangri-Las ...\nfetching id for Like A Rolling Stone by Bob Dylan ...\nfetching id for I Got You Babe by Sonny & Cher ...\nfetching id for Action by Freddy Cannon ...\nfetching id for Papa's Got A Brand New Bag (Part I) by James Brown And The Famous Flames ...\n--> [error] Little Miss Sad by The Five Emprees\nfetching id for Summer Nights by Marianne Faithfull ...\nfetching id for Two Different Worlds by Lenny Welch ...\nfetching id for High Heel Sneakers by Stevie Wonder ...\nfetching id for The Girl From Peyton Place by Dickey Lee ...\nfetching id for You Can't Take It Away by Fred Hughes ...\n--> [error] Two Different Worlds by Lenny Welch\nfetching id for California Girls by The Beach Boys ...\nfetching id for Sad, Sad Girl by Barbara Mason ...\nfetching id for Hold Me, Thrill Me, Kiss Me by Mel Carter ...\n--> [error] You Can't Take It Away by Fred Hughes\nfetching id for Houston by Dean Martin ...\nfetching id for The Tracks Of My Tears by The Miracles ...\nfetching id for I'm A Happy Man by The Jive Five ...\nfetching id for (My Girl) Sloopy by Little Caesar And The Consuls ...\n--> [error] I'm A Happy Man 
by The Jive Five\nfetching id for Moonlight And Roses (Bring Mem'ries Of You) by Vic Dana ...\n--> [error] (My Girl) Sloopy by Little Caesar And The Consuls\nfetching id for N-E-R-V-O-U-S! by Ian Whitcomb ...\nfetching id for First I Look At The Purse by The Contours ...\nfetching id for Danger Heartbreak Dead Ahead by The Marvelettes ...\nfetching id for The Sins Of A Family by P.F. Sloan ...\nfetching id for The Way Of Love by Kathy Kirby ...\n--> [error] The Sins Of A Family by P.F. Sloan\nfetching id for Roundabout by Connie Francis ...\n--> [error] The Way Of Love by Kathy Kirby\nfetching id for For Your Love by Sam & Bill ...\nfetching id for It's The Same Old Song by Four Tops ...\nfetching id for Nothing But Heartaches by The Supremes ...\nfetching id for Who'll Be The Next In Line by The Kinks ...\nfetching id for Since I Lost My Baby by The Temptations ...\nfetching id for It's Gonna Take A Miracle by The Royalettes ...\n--> [error] Roundabout by Connie Francis\nfetching id for In The Midnight Hour by Wilson Pickett ...\nfetching id for Down In The Boondocks by Billy Joe Royal ...\nfetching id for You've Been In Love Too Long by Martha & The Vandellas ...\nfetching id for Annie Fanny by The Kingsmen ...\nfetching id for I Need You by The Impressions ...\nfetching id for Home Of The Brave by Bonnie & The Treasures ...\n--> [error] It's Gonna Take A Miracle by The Royalettes\nfetching id for Colours by Donovan ...\nfetching id for How Nice It Is by Billy Stewart ...\nfetching id for Shake And Fingerpop by Jr. Walker & The All Stars ...\nfetching id for Baby, I'm Yours by Barbara Lewis ...\nfetching id for If I Didn't Love You by Chuck Jackson ...\nfetching id for All I Really Want To Do by Cher ...\nfetching id for I'll Take You Where The Music's Playing by The Drifters ...\nfetching id for Sugar Dumpling by Sam Cooke ...\nfetching id for I Don't Wanna Lose You Baby by Chad & Jeremy ...\nfetching id for Only Those In Love by Baby Washington ...\nfetching id for Give All Your Love To Me by Gerry And The Pacemakers ...\nfetching id for If You Wait For Love by Bobby Goldsboro ...\n--> [error] Home Of The Brave by Bonnie & The Treasures\nfetching id for Is It Really Over? by Jim Reeves ...\nfetching id for You're Gonna Make Me Cry by O.V. Wright ...\nfetching id for You Can't Be True, Dear by Patti Page ...\nfetching id for Me Without You by Mary Wells ...\n--> [error] If You Wait For Love by Bobby Goldsboro\nfetching id for The Silence (Il Silenzio) by Al Hirt ...\n--> [error] Me Without You by Mary Wells\nfetching id for Save Your Heart For Me by Gary Lewis And The Playboys ...\nfetching id for Looking Through The Eyes Of Love by Gene Pitney ...\nfetching id for Ju Ju Hand by Sam The Sham and the Pharaohs ...\nfetching id for (I Can't Get No) Satisfaction by The Rolling Stones ...\nfetching id for I'm A Fool by Dino, Desi & Billy ...\nfetching id for A Little You by Freddie And The Dreamers ...\nfetching id for Moon Over Naples by Bert Kaempfert And His Orchestra ...\nfetching id for It's A Man Down There by G.L. Crockett ...\n--> [error] The Silence (Il Silenzio) by Al Hirt\nfetching id for Someone Is Watching by Solomon Burke ...\nfetching id for Can't Let You Out Of My Sight by Chuck Jackson & Maxine Brown ...\nfetching id for Too Hot To Hold by Major Lance ...\nfetching id for Soul Heaven by The Dixie Drifter ...\nfetching id for Don't Just Stand There by Patty Duke ...\nfetching id for What's New Pussycat? 
by Tom Jones ...\nfetching id for You'd Better Come Home by Petula Clark ...\nfetching id for I'm Henry VIII, I Am by Herman's Hermits ...\nfetching id for Take Me Back by Little Anthony And The Imperials ...\nfetching id for You Tell Me Why by The Beau Brummels ...\nfetching id for All I Really Want To Do by The Byrds ...\nfetching id for Candy by The Astors ...\nfetching id for It's Too Late, Baby Too Late by Arthur Prysock ...\nfetching id for No Pity (In The Naked City) by Jackie Wilson ...\nfetching id for You Better Go by Derek Martin ...\n--> [error] Soul Heaven by The Dixie Drifter\nfetching id for Simpel Gimpel by Horst Jankowski ...\nfetching id for Good Times by Gene Chandler ...\nfetching id for Sunshine, Lollipops And Rainbows by Lesley Gore ...\nfetching id for I Want Candy by The Strangeloves ...\nfetching id for I Like It Like That by The Dave Clark Five ...\nfetching id for Cara, Mia by Jay & The Americans ...\nfetching id for I'll Always Love You by The Spinners ...\nfetching id for Ride Your Pony by Lee Dorsey ...\nfetching id for New Orleans by Eddie Hodges ...\n--> [error] You Better Go by Derek Martin\nfetching id for Hung On You by The Righteous Brothers ...\nfetching id for Summer Wind by Wayne Newton ...\nfetching id for What Are We Going To Do? by David Jones ...\n--> [error] New Orleans by Eddie Hodges\nfetching id for Pretty Little Baby by Marvin Gaye ...\nfetching id for To Know You Is To Love You by Peter And Gordon ...\nfetching id for One Dyin' And A Buryin' by Roger Miller ...\nfetching id for Theme From \"A Summer Place\" by The Lettermen ...\nfetching id for Too Many Rivers by Brenda Lee ...\nfetching id for (Say) You're My Girl by Roy Orbison ...\nfetching id for You're My Baby (And Don't You Forget It) by The Vacels ...\nfetching id for Here I Am by Dionne Warwick ...\nfetching id for It's Gonna Be Fine by Glenn Yarbrough ...\nfetching id for One Step At A Time by Maxine Brown ...\nfetching id for Tickle Me by Elvis Presley With The Jordanaires ...\n--> [error] You're My Baby (And Don't You Forget It) by The Vacels\nfetching id for Oowee, Oowee by Perry Como ...\n--> [error] Tickle Me by Elvis Presley With The Jordanaires\nfetching id for Canadian Sunset by Sounds Orchestral ...\n--> [error] Oowee, Oowee by Perry Como\nfetching id for Yes, I'm Ready by Barbara Mason ...\nfetching id for What The World Needs Now Is Love by Jackie DeShannon ...\nfetching id for I Can't Help Myself (Sugar Pie Honey Bunch) by Four Tops ...\nfetching id for Marie by The Bachelors ...\nfetching id for Seventh Son by Johnny Rivers ...\nfetching id for You Turn Me On (Turn On Song) by Ian Whitcomb And Bluesville ...\n--> [error] Canadian Sunset by Sounds Orchestral\nfetching id for Nobody Knows What's Goin' On (In My Mind But Me) by The Chiffons ...\nfetching id for The Loser by The Skyliners ...\nfetching id for He's Got No Love by The Searchers ...\nfetching id for Fly Me To The Moon (In Other Words) by Tony Bennett ...\nfetching id for I've Cried My Last Tear by The O'Jays ...\n--> [error] The Loser by The Skyliners\nfetching id for Sitting In The Park by Billy Stewart ...\nfetching id for (Such An) Easy Question by Elvis Presley With The Jordanaires ...\n--> [error] I've Cried My Last Tear by The O'Jays\nfetching id for Mr. Tambourine Man by The Byrds ...\nfetching id for A Little Bit Of Heaven by Ronnie Dove ...\nfetching id for Laurie (Strange Things Happen) by Dickey Lee ...\nfetching id for Trains And Boats And Planes by Billy J. 
Kramer With The Dakotas ...\nfetching id for Seein' The Right Love Go Wrong by Jack Jones ...\nfetching id for I Can't Work No Longer by Billy Butler & The Chanters ...\nfetching id for Theme From \"Harlow\" (Lonely Girl) by Bobby Vinton ...\nfetching id for Boot-Leg by Booker T. & The MG's ...\nfetching id for Forget Domani by Frank Sinatra ...\n--> [error] Seein' The Right Love Go Wrong by Jack Jones\nfetching id for Forget Domani by Connie Francis ...\nfetching id for I'm A Fool To Care by Ray Charles ...\n--> [error] Forget Domani by Frank Sinatra\nfetching id for My Man by Barbra Streisand ...\nfetching id for After Loving You by Della Reese ...\n--> [error] I'm A Fool To Care by Ray Charles\nfetching id for We're Doing Fine by Dee Dee Warwick ...\nfetching id for Where Were You When I Needed You by Jerry Vale ...\nfetching id for Here Comes The Night by Them ...\nfetching id for Set Me Free by The Kinks ...\nfetching id for A Walk In The Black Forest by Horst Jankowski ...\nfetching id for Wooly Bully by Sam The Sham and the Pharaohs ...\nfetching id for Tonight's The Night by Solomon Burke ...\nfetching id for For Your Love by The Yardbirds ...\nfetching id for A World Of Our Own by The Seekers ...\nfetching id for Wonderful World by Herman's Hermits ...\nfetching id for Girl Come Running by The 4 Seasons Featuring the \"Sound of Frankie Valli\" ...\nfetching id for Oo Wee Baby, I Love You by Fred Hughes ...\nfetching id for It's Just A Little Bit Too Late by Wayne Fontana & The Mindbenders ...\nfetching id for Silver Threads And Golden Needles by Jody Miller ...\nfetching id for Darling Take Me Back by Lenny Welch ...\n--> [error] After Loving You by Della Reese\nfetching id for Around The Corner by The Duprees ...\nfetching id for Ain't That Love by Four Tops ...\nfetching id for I've Been Loving You Too Long (To Stop Now) by Otis Redding ...\nfetching id for This Little Bird by Marianne Faithfull ...\nfetching id for Crying In The Chapel by Elvis Presley With The Jordanaires ...\n--> [error] Darling Take Me Back by Lenny Welch\nfetching id for Hush, Hush, Sweet Charlotte by Patti Page ...\nfetching id for Who's Cheating Who? by Little Milton ...\nfetching id for Meeting Over Yonder by The Impressions ...\nfetching id for Little Lonely One by Tom Jones ...\nfetching id for It Feels So Right by Elvis Presley With The Jordanaires ...\nfetching id for Watermelon Man by Gloria Lynne ...\nfetching id for You've Never Been In Love Like This Before by Unit Four plus Two ...\nfetching id for Buster Browne by Willie Mitchell ...\nfetching id for If You Really Want Me To, I'll Go by The Ron-Dels ...\n--> [error] Little Lonely One by Tom Jones\nfetching id for Yakety Axe by Chet Atkins ...\nfetching id for You Really Know How To Hurt A Guy by Jan & Dean ...\nfetching id for Shakin' All Over by The Guess Who ...\nfetching id for Help Me, Rhonda by The Beach Boys ...\nfetching id for Catch The Wind by Donovan ...\nfetching id for Give Us Your Blessings by The Shangri-Las ...\nfetching id for Do The Boomerang by Jr. Walker & The All Stars ...\nfetching id for Summer Sounds by Robert Goulet ...\nfetching id for One Monkey Don't Stop No Show by Joe Tex ...\nfetching id for Justine by The Righteous Brothers ...\nfetching id for Follow Me by The Drifters ...\nfetching id for Stop! 
Look What You're Doing by Carla Thomas ...\nfetching id for From A Window by Chad & Jeremy ...\nfetching id for Back In My Arms Again by The Supremes ...\nfetching id for Voodoo Woman by Bobby Goldsboro ...\n--> [error] If You Really Want Me To, I'll Go by The Ron-Dels\nfetching id for Before And After by Chad & Jeremy ...\nfetching id for Last Chance To Turn Around by Gene Pitney ...\nfetching id for I Do by The Marvelows ...\nfetching id for I'll Keep Holding On by The Marvelettes ...\nfetching id for You'll Never Walk Alone by Gerry And The Pacemakers ...\nfetching id for Temptation 'Bout To Get Me by The Knight Bros. ...\n--> [error] Voodoo Woman by Bobby Goldsboro\nfetching id for What's He Doing In My World by Eddy Arnold ...\nfetching id for He's A Lover by Mary Wells ...\nfetching id for I Love You So by Bobbi Martin ...\n--> [error] Temptation 'Bout To Get Me by The Knight Bros.\nfetching id for The First Thing Ev'ry Morning (And The Last Thing Ev'ry Night) by Jimmy Dean ...\nfetching id for I Want You Back Again by The Zombies ...\nfetching id for Just A Little by The Beau Brummels ...\nfetching id for Engine Engine #9 by Roger Miller ...\nfetching id for Nothing Can Stop Me by Gene Chandler ...\nfetching id for Ticket To Ride by The Beatles ...\nfetching id for And I Love Him by Esther Phillips ...\nfetching id for When A Boy Falls In Love by Sam Cooke ...\nfetching id for (Remember Me) I'm The One Who Loves You by Dean Martin ...\nfetching id for Operator by Brenda Holloway ...\nfetching id for Soul Sauce (Guacha Guaro) by Cal Tjader ...\nfetching id for Cast Your Fate To The Wind by Steve Alaimo ...\nfetching id for Blue Shadows by B.B. King ...\nfetching id for Love Me Now by Brook Benton ...\nfetching id for Mrs. Brown You've Got A Lovely Daughter by Herman's Hermits ...\nfetching id for True Love Ways by Peter And Gordon ...\nfetching id for It's Not Unusual by Tom Jones ...\nfetching id for L-O-N-E-L-Y by Bobby Vinton ...\nfetching id for Concrete And Clay by Unit Four plus Two ...\nfetching id for Silhouettes by Herman's Hermits ...\nfetching id for I'll Be With You In Apple Blossom Time by Wayne Newton ...\nfetching id for Concrete And Clay by Eddie Rambeau ...\nfetching id for Tell Her (You Love Her Every Day) by Frank Sinatra ...\nfetching id for It's Wonderful To Be In Love by The Ovations (Featuring Louis Williams) ...\n--> [error] I Love You So by Bobbi Martin\nfetching id for Bring A Little Sunshine (To My Heart) by Vic Dana ...\nfetching id for The Puzzle Song (A Puzzle In Song) by Shirley Ellis ...\nfetching id for Are You Sincere by Trini Lopez ...\nfetching id for Then I'll Count Again by Johnny Tillotson ...\nfetching id for My Cherie by Al Martino ...\nfetching id for Girl On The Billboard by Del Reeves ...\nfetching id for Long Live Love by Sandie Shaw ...\nfetching id for Just Once In My Life by The Righteous Brothers ...\nfetching id for You Were Only Fooling (While I Was Falling In Love) by Vic Damone ...\nfetching id for Cast Your Fate To The Wind by Sounds Orchestral ...\nfetching id for Bring It On Home To Me by The Animals ...\nfetching id for Three O'Clock In The Morning by Bert Kaempfert And His Orchestra ...\n--> [error] It's Wonderful To Be In Love by The Ovations (Featuring Louis Williams)\nfetching id for Queen Of The House by Jody Miller ...\nfetching id for I'll Never Find Another You by The Seekers ...\nfetching id for She's About A Mover by Sir Douglas Quintet ...\nfetching id for Lipstick Traces (On A Cigarette) by The O'Jays ...\n--> 
[error] Three O'Clock In The Morning by Bert Kaempfert And His Orchestra\nfetching id for Something You Got by Chuck Jackson & Maxine Brown ...\nfetching id for Boo-Ga-Loo by Tom and Jerrio ...\n--> [error] Lipstick Traces (On A Cigarette) by The O'Jays\nfetching id for Love Is A 5-Letter Word by James Phelps ...\nfetching id for Is This What I Get For Loving You? by The Ronettes Featuring Veronica ...\nfetching id for Lip Sync (To The Tongue Twisters) by Len Barry ...\nfetching id for Ain't It A Shame by Major Lance ...\nfetching id for From The Bottom Of My Heart (I Love You) by The Moody Blues ...\nfetching id for Welcome Home by Walter Jackson ...\nfetching id for Baby The Rain Must Fall by Glenn Yarbrough ...\nfetching id for Do The Freddie by Freddie And The Dreamers ...\nfetching id for You Were Made For Me by Freddie And The Dreamers ...\nfetching id for Count Me In by Gary Lewis And The Playboys ...\nfetching id for Reelin' And Rockin' by The Dave Clark Five ...\nfetching id for Dream On Little Dreamer by Perry Como ...\nfetching id for Wishing It Was You by Connie Francis ...\nfetching id for The Climb by The Kingsmen ...\nfetching id for It's Almost Tomorrow by Jimmy Velvet ...\nfetching id for Tears Keep On Falling by Jerry Vale ...\nfetching id for We're Gonna Make It by Little Milton ...\nfetching id for Ooo Baby Baby by The Miracles ...\nfetching id for Iko Iko by The Dixie Cups ...\nfetching id for I'll Be Doggone by Marvin Gaye ...\nfetching id for I Know A Place by Petula Clark ...\nfetching id for Now That You've Gone by Connie Stevens ...\n--> [error] Wishing It Was You by Connie Francis\nfetching id for You Can Have Her by The Righteous Brothers ...\nfetching id for Three O'Clock In The Morning by Lou Rawls ...\nfetching id for Keep On Trying by Bobby Vee ...\nfetching id for You'll Miss Me (When I'm Gone) by Fontella Bass & Bobby McClure ...\n--> [error] Now That You've Gone by Connie Stevens\nfetching id for Break Up by Del Shannon ...\nfetching id for Game Of Love by Wayne Fontana & The Mindbenders ...\nfetching id for The Last Time by The Rolling Stones ...\nfetching id for It's Growing by The Temptations ...\nfetching id for Land Of 1000 Dances by Cannibal And The Headhunters ...\nfetching id for It's Gonna Be Alright by Gerry And The Pacemakers ...\nfetching id for Let's Do The Freddie by Chubby Checker ...\nfetching id for Al's Place by Al (He's the King) Hirt ...\n--> [error] You'll Miss Me (When I'm Gone) by Fontella Bass & Bobby McClure\nfetching id for What Do You Want With Me by Chad & Jeremy ...\nfetching id for Peanuts (La Cacahuata) by The Sunglows ...\n--> [error] Al's Place by Al (He's the King) Hirt\nfetching id for Georgie Porgie by Jewel Akens ...\nfetching id for Gotta Have Your Love by The Sapphires ...\nfetching id for Good Lovin' by The Olympics ...\nfetching id for The Mouse by Soupy Sales ...\nfetching id for When The Ship Comes In by Peter, Paul & Mary ...\nfetching id for No One by Brenda Lee ...\nfetching id for It Ain't No Big Thing by Radiants ...\nfetching id for One Kiss For Old Times' Sake by Ronnie Dove ...\nfetching id for I'm Telling You Now by Freddie And The Dreamers ...\nfetching id for Tired Of Waiting For You by The Kinks ...\nfetching id for Go Now! 
by The Moody Blues ...\nfetching id for Subterranean Homesick Blues by Bob Dylan ...\nfetching id for It's Got The Whole World Shakin' by Sam Cooke ...\nfetching id for The Entertainer by Tony Clarke ...\nfetching id for Yes It Is by The Beatles ...\nfetching id for Come On Over To My Place by The Drifters ...\nfetching id for A Woman Can Change A Man by Joe Tex ...\nfetching id for Super-cali-fragil-istic-expi-ali-docious by Julie Andrews-Dick Van Dyke ...\n--> [error] Peanuts (La Cacahuata) by The Sunglows\nfetching id for Before You Go by Buck Owens ...\nfetching id for Chim, Chim, Cheree by The New Christy Minstrels ...\n--> [error] Super-cali-fragil-istic-expi-ali-docious by Julie Andrews-Dick Van Dyke\nfetching id for Tommy by Reparata And The Delrons ...\n--> [error] Chim, Chim, Cheree by The New Christy Minstrels\nfetching id for Play With Fire by The Rolling Stones ...\nfetching id for Woman's Got Soul by The Impressions ...\nfetching id for The Clapping Song (Clap Pat Clap Slap) by Shirley Ellis ...\nfetching id for Shotgun by Jr. Walker & The All Stars ...\nfetching id for ......And Roses And Roses by Andy Williams ...\nfetching id for Crazy Downtown by Allan Sherman ...\nfetching id for Goodbye My Lover Goodbye by The Searchers ...\nfetching id for Think Of The Good Times by Jay & The Americans ...\nfetching id for (The Bees Are For The Birds) The Birds Are For The Bees by The Newbeats ...\nfetching id for She's Coming Home by The Zombies ...\nfetching id for I Need You by Chuck Jackson ...\nfetching id for (See You At The) \"Go-Go\" by Dobie Gray ...\nfetching id for Stop! In The Name Of Love by The Supremes ...\nfetching id for Nowhere To Run by Martha & The Vandellas ...\nfetching id for Can't You Hear My Heartbeat by Herman's Hermits ...\nfetching id for Got To Get You Off My Mind by Solomon Burke ...\nfetching id for Bumble Bee by The Searchers ...\nfetching id for I Understand (Just How You Feel) by Freddie And The Dreamers ...\nfetching id for The Race Is On by Jack Jones ...\nfetching id for Out In The Streets by The Shangri-Las ...\nfetching id for The Barracuda by Alvin Cash & The Crawlers ...\n--> [error] Tommy by Reparata And The Delrons\nfetching id for Truly, Truly, True by Brenda Lee ...\nfetching id for Whipped Cream by Herb Alpert's Tijuana Brass ...\nfetching id for Toy Soldier by The 4 Seasons Featuring the \"Sound of Frankie Valli\" ...\nfetching id for In The Meantime by Georgie Fame And The Blue Flames ...\nfetching id for Peaches \"N\" Cream by The Ikettes ...\n--> [error] Truly, Truly, True by Brenda Lee\nfetching id for When I'm Gone by Brenda Holloway ...\nfetching id for Girl Don't Come by Sandie Shaw ...\nfetching id for I Can't Stop Thinking Of You by Bobbi Martin ...\n--> [error] Peaches \"N\" Cream by The Ikettes\nfetching id for Somebody Else Is Taking My Place by Al Martino ...\n--> [error] I Can't Stop Thinking Of You by Bobbi Martin\nfetching id for 10 Little Bottles by Johnny Bond ...\nfetching id for You Can Have Him by Dionne Warwick ...\nfetching id for Why Did I Choose You by Barbra Streisand ...\nfetching id for All Of My Life by Lesley Gore ...\nfetching id for I Gotta Woman (Part One) by Ray Charles and his Orchestra ...\n--> [error] Somebody Else Is Taking My Place by Al Martino\nfetching id for Chains Of Love by The Drifters ...\nfetching id for Sad Tomorrows by Trini Lopez ...\nfetching id for Apples And Bananas by Lawrence Welk And His Orchestra ...\n--> [error] I Gotta Woman (Part One) by Ray Charles and his Orchestra\nfetching id 
for He Ain't No Angel by The Ad Libs ...\nfetching id for King Of The Road by Roger Miller ...\nfetching id for The Birds And The Bees by Jewel Akens ...\nfetching id for Red Roses For A Blue Lady by Vic Dana ...\nfetching id for Come And Stay With Me by Marianne Faithfull ...\nfetching id for Red Roses For A Blue Lady by Wayne Newton ...\nfetching id for Eight Days A Week by The Beatles ...\nfetching id for Goldfinger by Shirley Bassey ...\nfetching id for Never, Never Leave Me by Mary Wells ...\n--> [error] Apples And Bananas by Lawrence Welk And His Orchestra\nfetching id for Not Too Long Ago by The Uniques Featuring Joe Stampley ...\nfetching id for Hawaii Honeymoon by The Waikikis ...\n--> [error] Never, Never Leave Me by Mary Wells\nfetching id for Come Back Baby by Roddie Joy ...\n--> [error] Hawaii Honeymoon by The Waikikis\nfetching id for Ain't No Telling by Bobby Bland ...\nfetching id for Mexican Pearls by Billy Vaughn And His Orchestra ...\n--> [error] Mexican Pearls by Billy Vaughn And His Orchestra\nfetching id for Dear Dad by Chuck Berry ...\nfetching id for Talk About Love by Adam Faith ...\nfetching id for Do You Wanna Dance? by The Beach Boys ...\nfetching id for Long Lonely Nights by Bobby Vinton ...\nfetching id for Red Roses For A Blue Lady by Bert Kaempfert And His Orchestra ...\nfetching id for Do The Clam by Elvis Presley With The Jordanaires, Jubilee Four & Carol Lombard Trio ...\n--> [error] Come Back Baby by Roddie Joy\nfetching id for If I Loved You by Chad & Jeremy ...\nfetching id for Ferry Cross The Mersey by Gerry And The Pacemakers ...\nfetching id for Send Me The Pillow You Dream On by Dean Martin ...\nfetching id for Don't Mess Up A Good Thing by Fontella Bass & Bobby McClure ...\nfetching id for Anytime At All by Frank Sinatra ...\nfetching id for Come See by Major Lance ...\nfetching id for Mr. Pitiful by Otis Redding ...\nfetching id for The Record (Baby I Love You) by Ben E. 
King ...\nfetching id for Try To Remember by Roger Williams ...\n--> [error] Anytime At All by Frank Sinatra\nfetching id for I've Got Five Dollars And It's Saturday Night by George & Gene ...\nfetching id for Don't Let Me Be Misunderstood by The Animals ...\nfetching id for My Girl by The Temptations ...\nfetching id for Little Things by Bobby Goldsboro ...\nfetching id for If I Ruled The World by Tony Bennett ...\nfetching id for I Must Be Seeing Things by Gene Pitney ...\nfetching id for For Mama (La Mamma) by Connie Francis ...\n--> [error] Try To Remember by Roger Williams\nfetching id for For Mama by Jerry Vale ...\nfetching id for Poor Man's Son by The Reflections ...\nfetching id for (Here They Come) From All Over The World by Jan & Dean ...\nfetching id for Gee Baby (I'm Sorry) by The Three Degrees ...\nfetching id for Every Night, Every Day by Jimmy McCracklin ...\nfetching id for Don't Let Your Left Hand Know by Joe Tex ...\nfetching id for This Diamond Ring by Gary Lewis And The Playboys ...\nfetching id for Yeh, Yeh by Georgie Fame And The Blue Flames ...\nfetching id for People Get Ready by The Impressions ...\nfetching id for Come Home by The Dave Clark Five ...\nfetching id for Hurt So Bad by Little Anthony And The Imperials ...\nfetching id for Stranger In Town by Del Shannon ...\nfetching id for You Better Get It by Joe Tex ...\nfetching id for You Got What It Takes by Joe Tex ...\nfetching id for Who Can I Turn To by Dionne Warwick ...\nfetching id for Please Let Me Wonder by The Beach Boys ...\nfetching id for Land Of A Thousand Dances (Part I) by Thee Midniters ...\nfetching id for Double-O-Seven by The Detergents ...\n--> [error] Poor Man's Son by The Reflections\nfetching id for Losing You by Dusty Springfield ...\nfetching id for I Can't Explain by The Who ...\nfetching id for El Pussy Cat by Mongo Santamaria ...\nfetching id for The Jolly Green Giant by The Kingsmen ...\nfetching id for Goodnight by Roy Orbison ...\nfetching id for Ask The Lonely by Four Tops ...\nfetching id for Downtown by Petula Clark ...\nfetching id for Midnight Special by Johnny Rivers ...\nfetching id for You've Lost That Lovin' Feelin' by The Righteous Brothers ...\nfetching id for I Don't Want To Spoil The Party by The Beatles ...\nfetching id for New York's A Lonely Town by The Trade Winds ...\n--> [error] Double-O-Seven by The Detergents\nfetching id for Come Tomorrow by Manfred Mann ...\nfetching id for Angel by Johnny Tillotson ...\nfetching id for 4 - By The Beatles by The Beatles ...\nfetching id for Goldfinger by John Barry and His Orchestra ...\nfetching id for It's Gonna Be Alright by Maxine Brown ...\nfetching id for Apache '65 by The Arrows Featuring Davie Allan ...\nfetching id for Good Times by Jerry Butler ...\nfetching id for Be My Baby by Dick and DeeDee ...\n--> [error] New York's A Lonely Town by The Trade Winds\nfetching id for Mean Old World by Rick Nelson ...\nfetching id for Teasin' You by Willie Tee ...\nfetching id for The Boy From New York City by The Ad Libs ...\nfetching id for Tell Her No by The Zombies ...\nfetching id for Laugh, Laugh by The Beau Brummels ...\nfetching id for I Go To Pieces by Peter And Gordon ...\nfetching id for Shake by Sam Cooke ...\nfetching id for I've Got A Tiger By The Tail by Buck Owens ...\nfetching id for Goldfinger by Billy Strange ...\nfetching id for Cry by Ray Charles ...\nfetching id for This Is My Prayer by The Ray Charles Singers ...\nfetching id for Did You Ever by The Hullaballoos ...\nfetching id for Orange Blossom Special 
by Johnny Cash ...\nfetching id for Find My Way Back Home by The Nashville Teens ...\nfetching id for Twine Time by Alvin Cash & The Crawlers ...\nfetching id for The Name Game by Shirley Ellis ...\nfetching id for All Day And All Of The Night by The Kinks ...\nfetching id for A Change Is Gonna Come by Sam Cooke ...\nfetching id for What Have They Done To The Rain by The Searchers ...\nfetching id for Bye, Bye, Baby (Baby, Goodbye) by The 4 Seasons Featuring the \"Sound of Frankie Valli\" ...\nfetching id for Born To Be Together by The Ronettes Featuring Veronica ...\nfetching id for Like A Child by Julie Rogers ...\n--> [error] This Is My Prayer by The Ray Charles Singers\nfetching id for Cupid by Johnny Rivers ...\nfetching id for Real Live Girl by Steve Alaimo ...\nfetching id for This Is It by Jim Reeves ...\nfetching id for Pass Me By by Peggy Lee ...\nfetching id for Danny Boy by Jackie Wilson ...\nfetching id for This Sporting Life by Ian Whitcomb And Bluesville ...\nfetching id for The \"In\" Crowd by Dobie Gray ...\nfetching id for Lemon Tree by Trini Lopez ...\nfetching id for For Lovin' Me by Peter, Paul & Mary ...\nfetching id for Paper Tiger by Sue Thompson ...\nfetching id for Heart Of Stone by The Rolling Stones ...\nfetching id for It's Alright by Adam Faith With The Roulettes ...\n--> [error] Like A Child by Julie Rogers\nfetching id for Break Away (From That Boy) by The Newbeats ...\nfetching id for At The Club by The Drifters ...\nfetching id for Whose Heart Are You Breaking Tonight by Connie Francis ...\nfetching id for Whenever A Teenager Cries by Reparata And The Delrons ...\nfetching id for He Was Really Sayin' Somethin' by The Velvelettes ...\nfetching id for It's Gotta Last Forever by Billy J. Kramer With The Dakotas ...\nfetching id for I Wanna Be (Your Everything) by The Manhattans ...\nfetching id for Does He Really Care For Me by Ruby And The Romantics ...\n--> [error] Whenever A Teenager Cries by Reparata And The Delrons\nfetching id for You Can't Hurt Me No More by Gene Chandler ...\nfetching id for Goldfinger by Jack Laforge His Piano and Orchestra ...\nfetching id for You're Next by Jimmy Witherspoon ...\nfetching id for Let's Lock The Door (And Throw Away The Key) by Jay & The Americans ...\nfetching id for Love Potion Number Nine by The Searchers ...\nfetching id for No Arms Can Ever Hold You by The Bachelors ...\nfetching id for Hold What You've Got by Joe Tex ...\nfetching id for Thanks A Lot by Brenda Lee ...\n--> [error] Goldfinger by Jack Laforge His Piano and Orchestra\nfetching id for Fancy Pants by Al (He's the King) Hirt ...\n--> [error] Thanks A Lot by Brenda Lee\nfetching id for My Heart Would Know by Al Martino ...\n--> [error] Fancy Pants by Al (He's the King) Hirt\nfetching id for Dusty by The Rag Dolls ...\n--> [error] My Heart Would Know by Al Martino\nfetching id for Don't Come Running Back To Me by Nancy Wilson ...\nfetching id for Married Man by Richard Burton ...\nfetching id for Comin' On Too Strong by Wayne Newton ...\nfetching id for Try To Remember by Ed Ames ...\nfetching id for You Can Have Him by Timi Yuro ...\nfetching id for Keep Searchin' (We'll Follow The Sun) by Del Shannon ...\nfetching id for How Sweet It Is To Be Loved By You by Marvin Gaye ...\nfetching id for Look Of Love by Lesley Gore ...\nfetching id for Give Him A Great Big Kiss by The Shangri-Las ...\nfetching id for Have You Looked Into Your Heart by Jerry Vale ...\nfetching id for Somewhere In Your Heart by Frank Sinatra ...\nfetching id for Do What You Do Do 
Well by Ned Miller ...\nfetching id for Voice Your Choice by The Radiants ...\nfetching id for Hello Pretty Girl by Ronnie Dove ...\nfetching id for That's How Strong My Love Is by Otis Redding ...\nfetching id for Hello, Dolly! by Bobby Darin ...\nfetching id for Fly Me To The Moon by LaVern Baker ...\nfetching id for Jerk And Twine by Jackie Ross ...\nfetching id for Somewhere by P.J. Proby ...\nfetching id for I'm Over You by Jan Bradley ...\nfetching id for Come See About Me by The Supremes ...\nfetching id for Thou Shalt Not Steal by Dick and DeeDee ...\nfetching id for Don't Forget I Still Love You by Bobbi Martin ...\n--> [error] Dusty by The Rag Dolls\nfetching id for I Feel Fine by The Beatles ...\nfetching id for I'll Be There by Gerry And The Pacemakers ...\nfetching id for Little Bell by The Dixie Cups ...\nfetching id for Ode To The Little Brown Shack Out Back by Billy Edd Wheeler ...\nfetching id for Diamond Head by The Ventures ...\nfetching id for The Man by Lorne Greene ...\nfetching id for Bring Your Love To Me by The Righteous Brothers ...\nfetching id for Crying In The Chapel by Adam Wade ...\n--> [error] Don't Forget I Still Love You by Bobbi Martin\nfetching id for I Want My Baby Back by Jimmy Cross ...\nfetching id for Suddenly I'm All Alone by Walter Jackson ...\nfetching id for Diana by Bobby Rydell ...\nfetching id for Goin' Out Of My Head by Little Anthony And The Imperials ...\nfetching id for The Jerk by The Larks ...\nfetching id for You're Nobody Till Somebody Loves You by Dean Martin ...\nfetching id for Mr. Lonely by Bobby Vinton ...\nfetching id for Use Your Head by Mary Wells ...\n--> [error] Crying In The Chapel by Adam Wade\nfetching id for Dear Heart by Andy Williams ...\nfetching id for Willow Weep For Me by Chad & Jeremy ...\nfetching id for Dear Heart by Jack Jones ...\nfetching id for He's My Guy by Irma Thomas ...\nfetching id for Cousin Of Mine by Sam Cooke ...\nfetching id for No Faith, No Love by Mitty Collier ...\nfetching id for Can't You Just See Me by Aretha Franklin ...\nfetching id for Cross My Heart by Bobby Vee ...\nfetching id for Love Me As Though There Were No Tomorrow by Sonny Knight ...\n--> [error] Use Your Head by Mary Wells\nfetching id for She's A Woman by The Beatles ...\nfetching id for The Wedding by Julie Rogers ...\nfetching id for Any Way You Want It by The Dave Clark Five ...\nfetching id for My Love, Forgive Me (Amore, Scusami) by Robert Goulet ...\nfetching id for Amen by The Impressions ...\nfetching id for Can You Jerk Like Me by The Contours ...\nfetching id for I Can't Stop by The Honeycombs ...\nfetching id for Sha La La by Manfred Mann ...\nfetching id for Come On Do The Jerk by The Miracles ...\nfetching id for Hawaii Tattoo by The Waikikis ...\nfetching id for Seven Letters by Ben E. 
King ...\nfetching id for Makin' Whoopee by Ray Charles ...\n--> [error] Love Me As Though There Were No Tomorrow by Sonny Knight\nfetching id for Bucket \"T\" by Ronny And The Daytonas ...\nfetching id for Lovin' Place by Gale Garnett ...\nfetching id for Hey-Da-Da-Dow by The Dolphins ...\nfetching id for The Crying Game by Brenda Lee ...\nfetching id for Too Many Fish In The Sea by The Marvelettes ...\nfetching id for She's Not There by The Zombies ...\nfetching id for Walk Away by Matt Monro ...\nfetching id for As Tears Go By by Marianne Faithfull ...\nfetching id for Leader Of The Laundromat by The Detergents ...\nfetching id for Promised Land by Chuck Berry ...\nfetching id for What Now by Gene Chandler ...\nfetching id for Roses Are Red My Love by The \"You Know Who\" Group! ...\n--> [error] Makin' Whoopee by Ray Charles\nfetching id for The 81 by Candy & The Kisses ...\nfetching id for You'll Always Be The One I Love by Dean Martin ...\nfetching id for I'm Gonna Love You Too by The Hullaballoos ...\nfetching id for Lovely, Lovely (Loverly, Loverly) by Chubby Checker ...\nfetching id for Sometimes I Wonder by Major Lance ...\nfetching id for Blind Man by Bobby Bland ...\nfetching id for Dear Heart by Henry Mancini And His Orchestra ...\n--> [error] Roses Are Red My Love by The \"You Know Who\" Group!\nfetching id for The Crusher by The Novas ...\nfetching id for Blind Man by Little Milton ...\nfetching id for The Race Is On by George Jones ...\nfetching id for You're The Only World I Know by Sonny James ...\nfetching id for Do-Wacka-Do by Roger Miller ...\nfetching id for Wild One by Martha & The Vandellas ...\nfetching id for One More Time by The Ray Charles Singers ...\n--> [error] Dear Heart by Henry Mancini And His Orchestra\nfetching id for Oh No Not My Baby by Maxine Brown ...\nfetching id for Dance, Dance, Dance by The Beach Boys ...\nfetching id for Ringo by Lorne Greene ...\nfetching id for Boom Boom by The Animals ...\nfetching id for Smile by Betty Everett & Jerry Butler ...\n--> [error] One More Time by The Ray Charles Singers\nfetching id for I Found A Love Oh What A Love by Jo Ann & Troy ...\n--> [error] Smile by Betty Everett & Jerry Butler\nfetching id for Are You Still My Baby by The Shirelles ...\n--> [error] I Found A Love Oh What A Love by Jo Ann & Troy\nfetching id for It's Better To Have It by Barbara Lynn ...\nfetching id for Finders Keepers, Losers Weepers by Nella Dodds ...\n--> [error] Are You Still My Baby by The Shirelles\nfetching id for I Want You To Be My Boy by The Exciters ...\nfetching id for I'm Into Something Good by Herman's Hermits ...\nfetching id for Saturday Night At The Movies by The Drifters ...\nfetching id for Mountain Of Love by Johnny Rivers ...\nfetching id for She Understands Me by Johnny Tillotson ...\nfetching id for I'm Gonna Be Strong by Gene Pitney ...\nfetching id for Time Is On My Side by The Rolling Stones ...\nfetching id for (There's) Always Something There To Remind Me by Sandie Shaw ...\nfetching id for Do It Right by Brook Benton ...\nfetching id for Danny Boy by Patti LaBelle And The Blue Belles ...\n--> [error] Finders Keepers, Losers Weepers by Nella Dodds\nfetching id for (There'll Come A Day When) Ev'ry Little Bit Hurts by Bobby Vee ...\n--> [error] Danny Boy by Patti LaBelle And The Blue Belles\nfetching id for The Sidewinder, Part 1 by Lee Morgan ...\n--> [error] (There'll Come A Day When) Ev'ry Little Bit Hurts by Bobby Vee\nfetching id for Walking In The Rain by The Ronettes ...\nfetching id for You Really Got Me 
by The Kinks ...\nfetching id for Since I Don't Have You by Chuck Jackson ...\nfetching id for Run, Run, Run by The Gestures ...\nfetching id for Getting Mighty Crowded by Betty Everett ...\nfetching id for Do Anything You Wanna (Part I) by Harold Betters ...\nfetching id for Percolatin' by Willie Mitchell ...\nfetching id for My Buddy Seat by The Hondells ...\nfetching id for So What by Bill Black's Combo ...\nfetching id for Maybe by The Shangri-Las ...\nfetching id for Have Mercy Baby by James Brown And The Famous Flames ...\nfetching id for A Little Bit Of Soap by Garnet Mimms ...\nfetching id for Take This Hurt Off Me by Don Covay ...\nfetching id for Black Night by Bobby Bland ...\nfetching id for Leader Of The Pack by The Shangri-Las ...\nfetching id for Baby Love by The Supremes ...\nfetching id for Right Or Wrong by Ronnie Dove ...\nfetching id for Ask Me by Elvis Presley With The Jordanaires ...\n--> [error] The Sidewinder, Part 1 by Lee Morgan\nfetching id for Without The One You Love (Life's Not Worth While) by Four Tops ...\nfetching id for The Price by Solomon Burke ...\nfetching id for It's All Over by Walter Jackson ...\nfetching id for Scratchy by Travis Wammack ...\nfetching id for Endless Sleep by Hank Williams Jr. ...\nfetching id for Big Man In Town by The 4 Seasons Featuring the \"Sound of Frankie Valli\" ...\nfetching id for Everything's Alright by The Newbeats ...\nfetching id for Sidewalk Surfin' by Jan & Dean ...\nfetching id for Come A Little Bit Closer by Jay & The Americans ...\nfetching id for Gone, Gone, Gone by The Everly Brothers ...\nfetching id for Ain't It The Truth by Mary Wells ...\nfetching id for It Ain't Me, Babe by Johnny Cash ...\nfetching id for A Woman's Love by Carla Thomas ...\nfetching id for I Don't Want To Walk Without You by Phyllis McGuire ...\nfetching id for A Happy Guy by Rick Nelson ...\nfetching id for Party Girl by Tommy Roe ...\nfetching id for I Just Can't Say Goodbye by Bobby Rydell ...\nfetching id for Reach Out For Me by Dionne Warwick ...\nfetching id for Is It True by Brenda Lee ...\nfetching id for Ain't That Loving You Baby by Elvis Presley ...\nfetching id for We Could by Al Martino ...\nfetching id for Have I The Right? by The Honeycombs ...\nfetching id for Listen Lonely Girl by Johnny Mathis ...\nfetching id for Opportunity by The Jewels ...\nfetching id for Four Strong Winds by Bobby Bare ...\nfetching id for If You Want This Love by Sonny Knight ...\nfetching id for Almost There by Andy Williams ...\nfetching id for Talk To Me Baby by Barry Mann ...\nfetching id for Chained And Bound by Otis Redding ...\nfetching id for Fiddler On The Roof by The Village Stompers ...\nfetching id for It'll Never Be Over For Me by Baby Washington ...\nfetching id for Unless You Care by Terry Black ...\nfetching id for Oh, Pretty Woman by Roy Orbison And The Candy Men ...\n--> [error] I Don't Want To Walk Without You by Phyllis McGuire\nfetching id for The Door Is Still Open To My Heart by Dean Martin ...\nfetching id for Slaughter On Tenth Avenue by The Ventures ...\nfetching id for Who Can I Turn To (When Nobody Needs Me) by Tony Bennett ...\nfetching id for Shaggy Dog by Mickey Lee Lane ...\nfetching id for Don't Ever Leave Me by Connie Francis ...\nfetching id for S-W-I-M by Bobby Freeman ...\nfetching id for Needle In A Haystack by The Velvelettes ...\nfetching id for California Bound by Ronny And The Daytonas ...\nfetching id for Hey Little One by J. 
Frank Wilson and The Cavaliers ...\n--> [error] Oh, Pretty Woman by Roy Orbison And The Candy Men\nfetching id for Here She Comes by The Tymes ...\nfetching id for I Won't Forget You by Jim Reeves ...\nfetching id for Pretend You Don't See Her by Bobby Vee ...\nfetching id for Rome Will Never Leave You by Richard Chamberlain ...\n--> [error] Hey Little One by J. Frank Wilson and The Cavaliers\nfetching id for Do Wah Diddy Diddy by Manfred Mann ...\nfetching id for Let It Be Me by Betty Everett & Jerry Butler ...\n--> [error] Rome Will Never Leave You by Richard Chamberlain\nfetching id for I Don't Want To See You Again by Peter And Gordon ...\nfetching id for Little Honda by The Hondells ...\nfetching id for We'll Sing In The Sunshine by Gale Garnett ...\nfetching id for Chug-A-Lug by Roger Miller ...\nfetching id for When You Walk In The Room by The Searchers ...\nfetching id for You Should Have Seen The Way He Looked At Me by The Dixie Cups ...\nfetching id for Ain't Doing Too Bad (Part 1) by Bobby Bland ...\nfetching id for Little Marie by Chuck Berry ...\nfetching id for I Had A Talk With My Man by Mitty Collier ...\nfetching id for What Good Am I Without You by Marvin Gaye & Kim Weston ...\nfetching id for Come See About Me by Nella Dodds ...\n--> [error] Ain't Doing Too Bad (Part 1) by Bobby Bland\nfetching id for Silly Little Girl by The Tams ...\nfetching id for Stop Takin' Me For Granted by Mary Wells ...\n--> [error] Come See About Me by Nella Dodds\nfetching id for I Like It by Gerry And The Pacemakers ...\nfetching id for Everybody Knows (I Still Love You) by The Dave Clark Five ...\nfetching id for I'm Crying by The Animals ...\nfetching id for Tobacco Road by The Nashville Teens ...\nfetching id for Dancing In The Street by Martha & The Vandellas ...\nfetching id for Bless Our Love by Gene Chandler ...\n--> [error] Stop Takin' Me For Granted by Mary Wells\nfetching id for Wendy by The Beach Boys ...\nfetching id for Runnin' Out Of Fools by Aretha Franklin ...\nfetching id for Jump Back by Rufus Thomas ...\nfetching id for Teen Beat '65 by Sandy Nelson ...\nfetching id for When You're Young And In Love by Ruby And The Romantics ...\nfetching id for Hey Now by Lesley Gore ...\n--> [error] Bless Our Love by Gene Chandler\nfetching id for Beautician Blues by B.B. King ...\nfetching id for The Dodo by Jumpin' Gene Simmons ...\nfetching id for Sometimes I Wish I Were A Boy by Lesley Gore ...\nfetching id for I've Got The Skill by Jackie Ross ...\n--> [error] The Dodo by Jumpin' Gene Simmons\nfetching id for Never Trust A Woman by B.B. 
King ...\nfetching id for High Heel Sneakers by Jerry Lee Lewis ...\nfetching id for Why (Doncha Be My Girl) by The Chartbusters ...\n--> [error] I've Got The Skill by Jackie Ross\nfetching id for A Summer Song by Chad & Jeremy ...\n[... hundreds of further lookup lines for 1963-64 Billboard entries, all in the same two formats: \"fetching id for <title> by <artist> ...\" for each attempted lookup and \"--> [error] <title> by <artist>\" where the lookup failed ...]\nfetching id for He's So Fine by The Chiffons ...\nfetching id for Charms by Bobby Vee ...\nfetching id for Mecca by Gene Pitney ...\nfetching id for Remember Diana by Paul Anka ...\nfetching id for The Dog by Rufus Thomas
...\nfetching id for Got You On My Mind by Cookie And His Cupcakes ...\n--> [error] Gravy Waltz by Steve Allen\nfetching id for Danger by Vic Dana ...\nfetching id for River's Invitation by Percy Mayfield ...\nfetching id for On Broadway by The Drifters ...\nfetching id for Watermelon Man by Mongo Santamaria Band ...\n--> [error] Got You On My Mind by Cookie And His Cupcakes\nfetching id for A Love She Can Count On by The Miracles ...\nfetching id for Don't Say Nothin' Bad (About My Baby) by The Cookies ...\n--> [error] Watermelon Man by Mongo Santamaria Band\nfetching id for Tom Cat by The Rooftop Singers ...\nfetching id for (Today I Met) The Boy I'm Gonna Marry by Darlene Love ...\nfetching id for Days Of Wine And Roses by Henry Mancini And His Orchestra ...\nfetching id for Young And In Love by Dick and DeeDee ...\nfetching id for Baby Workout by Jackie Wilson ...\nfetching id for The Bird's The Word by The Rivingtons ...\nfetching id for Call Me Irresponsible by Jack Jones ...\nfetching id for Call Me Irresponsible by Frank Sinatra ...\nfetching id for Ronnie, Call Me When You Get A Chance by Shelley Fabares ...\nfetching id for I Know I Know by \"Pookie\" Hudson ...\nfetching id for The Last Minute (Pt. I) by Jimmy McGriff ...\nfetching id for Linda by Jan & Dean ...\nfetching id for Young Lovers by Paul and Paula ...\nfetching id for The End Of The World by Skeeter Davis ...\nfetching id for Heart by Kenny Chandler ...\nfetching id for Locking Up My Heart by The Marvelettes ...\nfetching id for Here I Stand by The Rip Chords ...\nfetching id for Bony Moronie by The Appalachians ...\n--> [error] I Know I Know by \"Pookie\" Hudson\nfetching id for Rockin' Crickets by Rockin' Rebels ...\nfetching id for Shy Girl by The Cascades ...\nfetching id for Heart! (I Hear You Beating) by Wayne Newton And The Newton Brothers ...\n--> [error] Bony Moronie by The Appalachians\nfetching id for The Folk Singer by Tommy Roe ...\nfetching id for He's A Bad Boy by Carole King ...\nfetching id for Old Enough To Love by Ricky Nelson ...\nfetching id for South Street by The Orlons ...\nfetching id for Sandy by Dion ...\nfetching id for Over The Mountain (Across The Sea) by Bobby Vinton ...\nfetching id for Do The Bird by Dee Dee Sharp ...\nfetching id for Twenty Miles by Chubby Checker ...\nfetching id for Rainbow by Gene Chandler ...\nfetching id for Mr. Bass Man by Johnny Cymbal ...\nfetching id for Memory Lane by The Hippies (Formerly The Tams) ...\n--> [error] Heart! (I Hear You Beating) by Wayne Newton And The Newton Brothers\nfetching id for How Can I Forget by Jimmy Holiday ...\nfetching id for Don't Let Her Be Your Baby by The Contours ...\n--> [error] Memory Lane by The Hippies (Formerly The Tams)\nfetching id for Mother, Please! 
by Jo Ann Campbell ...\nfetching id for One Boy Too Late by Mike Clifford ...\nfetching id for Don't Be Afraid, Little Darlin' by Steve Lawrence ...\n--> [error] One Boy Too Late by Mike Clifford\nfetching id for Our Day Will Come by Ruby And The Romantics ...\nfetching id for I Got What I Wanted by Brook Benton ...\nfetching id for Out Of My Mind by Johnny Tillotson ...\nfetching id for Follow The Boys by Connie Francis ...\nfetching id for In Dreams by Roy Orbison ...\nfetching id for What Are Boys Made Of by The Percells ...\nfetching id for Our Winter Love by Bill Pursell ...\nfetching id for Whatever You Want by Jerry Butler ...\nfetching id for You Never Miss Your Water (Till The Well Runs Dry) by Little Esther Phillips & Big Al Downing ...\n--> [error] Don't Be Afraid, Little Darlin' by Steve Lawrence\nfetching id for Ask Me by Maxine Brown ...\nfetching id for Ann-Marie by The Belmonts ...\nfetching id for Theme From Lawrence Of Arabia by Ferrante & Teicher ...\n--> [error] You Never Miss Your Water (Till The Well Runs Dry) by Little Esther Phillips & Big Al Downing\nfetching id for Rhythm Of The Rain by The Cascades ...\nfetching id for Blame It On The Bossa Nova by Eydie Gorme ...\nfetching id for I Wanna Be Around by Tony Bennett ...\nfetching id for All Over The World by Nat King Cole ...\nfetching id for All I Have To Do Is Dream by Richard Chamberlain ...\nfetching id for Dearer Than Life by Brook Benton ...\nfetching id for Sun Arise by Rolf Harris ...\nfetching id for He's So Heavenly by Brenda Lee ...\n--> [error] Theme From Lawrence Of Arabia by Ferrante & Teicher\nfetching id for Hot Cakes! 1st Serving by Dave \"Baby\" Cortez ...\n--> [error] He's So Heavenly by Brenda Lee\nfetching id for Preacherman by Charlie Russo ...\n--> [error] Hot Cakes! 1st Serving by Dave \"Baby\" Cortez\nfetching id for You're The Reason I'm Living by Bobby Darin ...\nfetching id for Yakety Sax by Boots Randolph and his Combo ...\n--> [error] Preacherman by Charlie Russo\nfetching id for Wild Weekend by The Rebels ...\nfetching id for Let's Limbo Some More by Chubby Checker ...\nfetching id for Laughing Boy by Mary Wells ...\nfetching id for Walk Like A Man by The 4 Seasons ...\nfetching id for I Got A Woman by Rick Nelson ...\nfetching id for I'm Just A Country Boy by George McCurn ...\nfetching id for You Don't Love Me Anymore (And I Can Tell) by Rick Nelson ...\nfetching id for He's Got The Power by The Exciters ...\nfetching id for Amy by Paul Petersen ...\n--> [error] I'm Just A Country Boy by George McCurn\nfetching id for Sax Fifth Avenue by Johnny Beecher and his Buckingham Road Quintet ...\n--> [error] Amy by Paul Petersen\nfetching id for Bill Bailey, Won't You Please Come Home by Ella Fitzgerald ...\nfetching id for This Empty Place by Dionne Warwick ...\nfetching id for How Can I Forget by Ben E. King ...\nfetching id for Eternally by The Chantels ...\nfetching id for Diane by Joe Harnell And His Orchestra ...\nfetching id for If You Can't Rock Me by Ricky Nelson ...\nfetching id for What Will My Mary Say by Johnny Mathis ...\nfetching id for Why Do Lovers Break Each Other's Heart? by Bob B. 
Soxx And The Blue Jeans ...\nfetching id for One Broken Heart For Sale by Elvis Presley With The Mello Men ...\n--> [error] Diane by Joe Harnell And His Orchestra\nfetching id for Ruby Baby by Dion ...\nfetching id for Love For Sale by Arthur Lyman Group ...\nfetching id for Don't Set Me Free by Ray Charles and his Orchestra ...\nfetching id for Back At The Chicken Shack, Part 1 by Jimmy Smith ...\n--> [error] Back At The Chicken Shack, Part 1 by Jimmy Smith\nfetching id for Don't Wanna Think About Paula by Dickey Lee ...\n--> [error] Don't Set Me Free by Ray Charles and his Orchestra\nfetching id for Meditation (Meditacao) by Charlie Byrd ...\nfetching id for Funny Man by Ray Stevens ...\nfetching id for Don't Mention My Name by The Shepherd Sisters ...\nfetching id for Not For All The Money In The World by The Shirelles ...\nfetching id for Hey Paula by Paul and Paula ...\nfetching id for Let's Turkey Trot by Little Eva ...\nfetching id for Greenback Dollar by The Kingston Trio ...\nfetching id for Mama Didn't Lie by Jan Bradley ...\nfetching id for Boss Guitar by Duane Eddy and the Rebelettes ...\n--> [error] Don't Mention My Name by The Shepherd Sisters\nfetching id for Alice In Wonderland by Neil Sedaka ...\nfetching id for Tell Him I'm Not Home by Chuck Jackson ...\nfetching id for Butterfly Baby by Bobby Rydell ...\nfetching id for Cast Your Fate To The Wind by Vince Guaraldi Trio ...\nfetching id for That's All by Rick Nelson ...\nfetching id for I'll Make It Alright by The Valentinos (The Lovers) ...\nfetching id for Walk Right In by The Rooftop Singers ...\nfetching id for The Gypsy Cried by Lou Christie ...\nfetching id for Hitch Hike by Marvin Gaye ...\nfetching id for The Jive Samba by Cannonball Adderley ...\nfetching id for I'm In Love Again by Rick Nelson ...\nfetching id for Insult To Injury by Timi Yuro ...\nfetching id for Gone With The Wind by The Duprees featuring Joey Vann ...\nfetching id for Marching Thru Madrid by Herb Alpert's Tijuana Brass ...\nfetching id for Send Me Some Lovin' by Sam Cooke ...\nfetching id for From A Jack To A King by Ned Miller ...\nfetching id for You've Really Got A Hold On Me by The Miracles ...\nfetching id for Fly Me To The Moon - Bossa Nova by Joe Harnell And His Orchestra ...\n--> [error] I'll Make It Alright by The Valentinos (The Lovers)\nfetching id for Call On Me by Bobby Bland ...\nfetching id for Little Town Flirt by Del Shannon ...\nfetching id for That's The Way Love Is by Bobby Bland ...\nfetching id for As Long As She Needs Me by Sammy Davis Jr. 
...\nfetching id for Let's Stomp by Bobby Comstock ...\n--> [error] Fly Me To The Moon - Bossa Nova by Joe Harnell And His Orchestra\nfetching id for Pepino's Friend Pasqual (The Italian Pussy-Cat) by Lou Monte ...\nfetching id for Nothing Goes Up (Without Coming Down) by Nat King Cole ...\n--> [error] Let's Stomp by Bobby Comstock\nfetching id for Cigarettes And Coffee Blues by Marty Robbins ...\nfetching id for Little Star by Bobby Callender ...\n--> [error] Nothing Goes Up (Without Coming Down) by Nat King Cole\nfetching id for Don't Let Me Cross Over by Carl Butler & Pearl ...\nfetching id for Two Wrongs Don't Make A Right by Mary Wells ...\nfetching id for He's Sure The Boy I Love by The Crystals ...\nfetching id for Up On The Roof by The Drifters ...\nfetching id for They Remind Me Too Much Of You by Elvis Presley With The Mello Men ...\n--> [error] Little Star by Bobby Callender\nfetching id for Big Wide World by Teddy Randazzo ...\nfetching id for Ridin' The Wind by The Tornadoes ...\nfetching id for Bossa Nova U.S.A. by The Dave Brubeck Quartet ...\nfetching id for I'm The One Who Loves You by The Impressions ...\nfetching id for All About My Girl by Jimmy McGriff ...\nfetching id for Baby, Baby, Baby by Sam Cooke ...\nfetching id for Pin A Medal On Joey by James Darren ...\nfetching id for Hi-Lili, Hi-Lo by Richard Chamberlain ...\nfetching id for Who Stole The Keeshka? by The Matys Bros. ...\n--> [error] Ridin' The Wind by The Tornadoes\nfetching id for The Brightest Smile In Town by Ray Charles and his Orchestra ...\n--> [error] Who Stole The Keeshka? by The Matys Bros.\nfetching id for Don't Be Cruel by Barbara Lynn ...\nfetching id for My Foolish Heart by The Demensions ...\nfetching id for Your Used To Be by Brenda Lee ...\nfetching id for The Night Has A Thousand Eyes by Bobby Vee ...\nfetching id for Every Day I Have To Cry by Steve Alaimo ...\nfetching id for Love (Makes the World Go 'round) by Paul Anka ...\nfetching id for Java by Floyd Cramer ...\nfetching id for I Really Don't Want To Know by Little Esther Phillips ...\n--> [error] The Brightest Smile In Town by Ray Charles and his Orchestra\nfetching id for What Does A Girl Do? 
by Marcie Blane ...\nfetching id for If Mary's There by Brian Hyland ...\nfetching id for Meditation (Meditacao) by Pat Boone ...\n--> [error] I Really Don't Want To Know by Little Esther Phillips\nfetching id for The 2,000 Pound Bee (Part 2) by The Ventures ...\n--> [error] Meditation (Meditacao) by Pat Boone\nfetching id for Pretty Boy Lonely by Patti Page ...\nfetching id for I Will Live My Life For You by Tony Bennett ...\nfetching id for Boss by The Rumblers ...\nfetching id for Loop De Loop by Johnny Thunder ...\nfetching id for Go Away Little Girl by Steve Lawrence ...\nfetching id for It's Up To You by Rick Nelson ...\nfetching id for Half Heaven - Half Heartache by Gene Pitney ...\nfetching id for My Coloring Book by Sandy Stewart ...\n--> [error] The 2,000 Pound Bee (Part 2) by The Ventures\nfetching id for The Cinnamon Cinder (It's A Very Nice Dance) by The Pastel Six ...\nfetching id for My Dad by Paul Petersen ...\nfetching id for She'll Never Know by Brenda Lee ...\n--> [error] My Coloring Book by Sandy Stewart\nfetching id for I'm A Woman by Peggy Lee ...\nfetching id for Strange I Know by The Marvelettes ...\nfetching id for Chicken Feed by Bent Fabric and His Piano ...\n--> [error] She'll Never Know by Brenda Lee\nfetching id for Don't Fence Me In by George Maharis ...\nfetching id for Faded Love by Jackie DeShannon ...\nfetching id for I Saw Linda Yesterday by Dickey Lee ...\nfetching id for Puddin N' Tain (Ask Me Again, I'll Tell You The Same) by The Alley Cats ...\nfetching id for Tell Him by The Exciters ...\nfetching id for My Coloring Book by Kitty Kallen ...\n--> [error] Don't Fence Me In by George Maharis\nfetching id for Settle Down (Goin' Down That Highway) by Peter, Paul & Mary ...\nfetching id for I'm Gonna' Be Warm This Winter by Connie Francis ...\nfetching id for Proud by Johnny Crawford ...\nfetching id for Don't Make Me Over by Dionne Warwick ...\nfetching id for Two Lovers by Mary Wells ...\nfetching id for Would It Make Any Difference To You by Etta James ...\nfetching id for Ain't Gonna Kiss Ya by The Ribbons ...\nfetching id for Leavin' On Your Mind by Patsy Cline ...\nfetching id for From The Bottom Of My Heart (Dammi, Dammi, Dammi) by Dean Martin ...\nfetching id for Only You (And You Alone) by Mr. Acker Bilk ...\n--> [error] My Coloring Book by Kitty Kallen\nfetching id for Shake Sherry by The Contours ...\nfetching id for Telstar by The Tornadoes ...\nfetching id for Everybody Loves A Lover by The Shirelles ...\nfetching id for The Ballad Of Jed Clampett by Flatt & Scruggs ...\nfetching id for Shake Me I Rattle (Squeeze Me I Cry) by Marion Worth ...\nfetching id for How Much Is That Doggie In The Window by Baby Jane & The Rockabyes ...\n--> [error] Only You (And You Alone) by Mr. Acker Bilk\nfetching id for The Popeye Waddle by Don Covay ...\nfetching id for What To Do With Laurie by Mike Clifford ...\n--> [error] How Much Is That Doggie In The Window by Baby Jane & The Rockabyes\nfetching id for Let Me Go The Right Way by The Supremes ...\nfetching id for I'd Rather Be Here In Your Arms by The Duprees ...\nfetching id for Zing! Went The Strings Of My Heart by The Furys ...\n--> [error] What To Do With Laurie by Mike Clifford\nfetching id for Al Di La by Connie Francis ...\nfetching id for M.G. 
Blues by Jimmy McGriff ...\nfetching id for Baby, You're Driving Me Crazy by Joey Dee ...\nfetching id for Hotel Happiness by Brook Benton ...\nfetching id for Pepino The Italian Mouse by Lou Monte ...\nfetching id for Remember Then by The Earls ...\nfetching id for Limbo Rock by Chubby Checker ...\nfetching id for Zip-A-Dee Doo-Dah by Bob B. Soxx And The Blue Jeans ...\nfetching id for See See Rider by LaVern Baker ...\nfetching id for Willie Can by Sue Thompson ...\nfetching id for Walk Right In by The Moments ...\nfetching id for I Need You by Rick Nelson ...\nfetching id for Remember Baby by Shep And The Limelites ...\nfetching id for Every Beat Of My Heart by James Brown And The Famous Flames ...\nfetching id for Shutters And Boards by Jerry Wallace ...\nfetching id for Bobby's Girl by Marcie Blane ...\nfetching id for Big Girls Don't Cry by The 4 Seasons ...\nfetching id for Wiggle Wobble by Les Cooper and the Soul Rockers ...\nfetching id for Return To Sender by Elvis Presley With The Jordanaires ...\n--> [error] Zing! Went The Strings Of My Heart by The Furys\nfetching id for Lovesick Blues by Frank Ifield ...\nfetching id for Some Kinda Fun by Chris Montez ...\nfetching id for You Are My Sunshine by Ray Charles ...\nfetching id for Trouble Is My Middle Name by Bobby Vinton ...\nfetching id for Molly by Bobby Goldsboro ...\nfetching id for The Same Old Hurt by Burl Ives ...\nfetching id for The Lone Teen Ranger by Jerry Landis ...\nfetching id for Let's Kiss And Make Up by Bobby Vinton ...\nfetching id for Chains by The Cookies ...\nfetching id for The Lonely Bull (El Solo Torro) by Herb Alpert And Tijuana Brass ...\nfetching id for Let's Go (pony) by The Routers ...\nfetching id for My Wife Can't Cook by Lonnie Russ ...\nfetching id for Darkest Street In Town by Jimmy Clanton ...\nfetching id for Jellybread by Booker T. & The MG's ...\nfetching id for Oo-La-La-Limbo by Danny & The Juniors ...\nfetching id for Love Came To Me by Dion ...\nfetching id for Dear Lonely Hearts by Nat King Cole ...\nfetching id for The Love Of A Boy by Timi Yuro ...\nfetching id for Keep Your Hands Off My Baby by Little Eva ...\nfetching id for Don't Hang Up by The Orlons ...\nfetching id for Ruby Ann by Marty Robbins ...\nfetching id for (Dance With The) Guitar Man by Duane Eddy and the Rebelettes ...\n--> [error] Oo-La-La-Limbo by Danny & The Juniors\nfetching id for Ten Little Indians by The Beach Boys ...\nfetching id for That's Life (That's Tough) by Gabriel And The Angels ...\nfetching id for Coney Island Baby by The Excellents ...\nfetching id for You're Gonna Need Me by Barbara Lynn ...\nfetching id for Little Tin Soldier by The Toy Dolls ...\nfetching id for Look At Me by Dobie Gray ...\nfetching id for Red Pepper I by Roosevelt Fountain And Pens Of Rhythm ...\nfetching id for Monsters' Holiday by Bobby \"Boris\" Pickett And The Crypt-Kickers ...\n--> [error] Monsters' Holiday by Bobby \"Boris\" Pickett And The Crypt-Kickers\nfetching id for Ride! 
by Dee Dee Sharp ...\nfetching id for Your Cheating Heart by Ray Charles ...\n--> [error] Red Pepper I by Roosevelt Fountain And Pens Of Rhythm\nfetching id for Desafinado by Stan Getz/Charlie Byrd ...\nfetching id for Comin' Home Baby by Mel Torme ...\nfetching id for You Threw A Lucky Punch by Gene Chandler ...\nfetching id for Echo by The Emotions ...\n--> [error] Your Cheating Heart by Ray Charles\nfetching id for Trouble In Mind by Aretha Franklin ...\nfetching id for Big Boat by Peter, Paul & Mary ...\nfetching id for Someone Somewhere by Junior Parker ...\nfetching id for Slop Time by The Sherrys ...\nfetching id for Rumors by Johnny Crawford ...\nfetching id for The Push And Kick by Mark Valentino ...\nfetching id for He's A Rebel by The Crystals ...\nfetching id for All Alone Am I by Brenda Lee ...\nfetching id for Spanish Lace by Gene McDaniels ...\nfetching id for I May Not Live To See Tomorrow by Brian Hyland ...\nfetching id for Me And My Shadow by Frank Sinatra & Sammy Davis Jr. ...\nfetching id for Rainbow At Midnight by Jimmie Rodgers ...\n--> [error] Rainbow At Midnight by Jimmie Rodgers\nfetching id for Gonna Raise A Rukus Tonight by Jimmy Dean ...\n--> [error] I May Not Live To See Tomorrow by Brian Hyland\nfetching id for Sam's Song by Dean Martin & Sammy Davis Jr. ...\nfetching id for The (Bossa Nova) Bird by The Dells ...\nfetching id for Let Me Entertain You by Ray Anthony ...\n--> [error] Gonna Raise A Rukus Tonight by Jimmy Dean\nfetching id for Santa Claus Is Coming To Town by The 4 Seasons ...\nfetching id for The Little Drummer Boy by The Harry Simeone Chorale ...\n--> [error] Let Me Entertain You by Ray Anthony\nfetching id for The Chipmunk Song (Christmas Don't Be Late) by David Seville And The Chipmunks ...\n--> [error] The Little Drummer Boy by The Harry Simeone Chorale\nfetching id for Santa Claus Is Watching You by Ray Stevens ...\nfetching id for I Left My Heart In San Francisco by Tony Bennett ...\nfetching id for Don't Go Near The Eskimos by Ben Colder ...\nfetching id for Rudolph The Red Nosed Reindeer by David Seville And The Chipmunks ...\n--> [error] The Chipmunk Song (Christmas Don't Be Late) by David Seville And The Chipmunks\nfetching id for Lover Come Back To Me by The Cleftones ...\nfetching id for Silent Night, Holy Night by Mahalia Jackson ...\nfetching id for Eso Beso (That Kiss!) by Paul Anka ...\nfetching id for My Own True Love by The Duprees ...\nfetching id for The Cha-Cha-Cha by Bobby Rydell ...\nfetching id for I Can't Help It (If I'm Still In Love With You) by Johnny Tillotson ...\nfetching id for A Little Bit Now (A Little Bit Later) by The Majors ...\n--> [error] Rudolph The Red Nosed Reindeer by David Seville And The Chipmunks\nfetching id for Diddle-Dee-Dum (What Happens When Your Love Has Gone) by The Belmonts ...\nfetching id for Road Hog by John D. Loudermilk ...\nfetching id for She's A Troublemaker by The Majors ...\n--> [error] A Little Bit Now (A Little Bit Later) by The Majors\nfetching id for Alvin's Harmonica by David Seville And The Chipmunks ...\nfetching id for White Christmas by The Drifters Featuring Clyde McPhatter And Bill Pinkney ...\nfetching id for Jingle Bell Rock by Bobby Rydell/Chubby Checker ...\nfetching id for Three Hearts In A Tangle by James Brown And The Famous Flames ...\nfetching id for Twilight Time by Andy Williams ...\nfetching id for Does He Mean That Much To You? 
by Eddy Arnold ...\n--> [error] She's A Troublemaker by The Majors\nfetching id for Next Door To An Angel by Neil Sedaka ...\nfetching id for Only Love Can Break A Heart by Gene Pitney ...\nfetching id for Mary Ann Regrets by Burl Ives ...\nfetching id for Stubborn Kind Of Fellow by Marvin Gaye ...\nfetching id for Lovers By Night, Strangers By Day by The Fleetwoods ...\nfetching id for I Lost My Baby by Joey Dee ...\nfetching id for Baby Has Gone Bye Bye by George Maharis ...\nfetching id for Desafinado (Slightly Out Of Tune) by Pat Thomas ...\n--> [error] Does He Mean That Much To You? by Eddy Arnold\nfetching id for I Found A New Baby by Bobby Darin ...\nfetching id for Limelight by Mr. Acker Bilk ...\n--> [error] Desafinado (Slightly Out Of Tune) by Pat Thomas\nfetching id for Theme From Taras Bulba (The Wishing Star) by Jerry Butler ...\nfetching id for Gina by Johnny Mathis ...\nfetching id for Nothing Can Change This Love by Sam Cooke ...\nfetching id for That Stranger Used To Be My Girl by Trade Martin ...\nfetching id for What Kind Of Fool Am I by Sammy Davis Jr. ...\nfetching id for I've Got A Woman (Part I) by Jimmy McGriff ...\nfetching id for Mama Sang A Song by Stan Kenton ...\n--> [error] Limelight by Mr. Acker Bilk\nfetching id for Close To Cathy by Mike Clifford ...\nfetching id for Léah by Roy Orbison ...\nfetching id for Mama Sang A Song by Walter Brennan ...\n--> [error] Mama Sang A Song by Stan Kenton\nfetching id for Popeye (The Hitchhiker) by Chubby Checker ...\nfetching id for If You Were A Rock And Roll Record by Freddy Cannon ...\nfetching id for The Jitterbug by The Dovells ...\nfetching id for Still Waters Run Deep by Brook Benton ...\nfetching id for Getting Ready For The Heartbreak by Chuck Jackson ...\n--> [error] Mama Sang A Song by Walter Brennan\nfetching id for Zero-Zero by Lawrence Welk ...\nfetching id for Night Time by Pete Antell ...\nfetching id for I Was Such A Fool (To Fall In Love With You) by Connie Francis ...\nfetching id for James (Hold The Ladder Steady) by Sue Thompson ...\nfetching id for Surfin' Safari by The Beach Boys ...\nfetching id for Love Me Tender by Richard Chamberlain ...\nfetching id for Don't Ask Me To Be Friends by The Everly Brothers ...\nfetching id for I'll Bring It Home To You by Carla Thomas ...\nfetching id for Workin' For The Man by Roy Orbison ...\nfetching id for Torture by Kris Jensen ...\nfetching id for Untie Me by The Tams ...\nfetching id for Stormy Monday Blues by Bobby Bland ...\n--> [error] Getting Ready For The Heartbreak by Chuck Jackson\nfetching id for Anna (Go To Him) by Arthur Alexander ...\nfetching id for Next Door To The Blues by Etta James ...\nfetching id for Mr. Lonely by Buddy Greco ...\nfetching id for Heart Breaker by Dean Christie ...\nfetching id for I'm So Lonesome I Could Cry by Johnny Tillotson ...\nfetching id for Somebody Have Mercy by Sam Cooke ...\nfetching id for This Land Is Your Land by The New Christy Minstrels ...\nfetching id for This Land Is Your Land by Ketty Lester ...\n--> [error] Heart Breaker by Dean Christie\nfetching id for You're A Sweetheart by Dinah Washington ...\nfetching id for Pop Pop Pop - Pie by The Sherrys ...\n--> [error] This Land Is Your Land by Ketty Lester\nfetching id for Sherry by The 4 Seasons ...\nfetching id for Patches by Dickey Lee ...\nfetching id for Green Onions by Booker T. 
& The MG's ...\nfetching id for Susie Darlin' by Tommy Roe ...\nfetching id for Alley Cat by Bent Fabric and His Piano ...\n--> [error] Pop Pop Pop - Pie by The Sherrys\nfetching id for He Thinks I Still Care by Connie Francis ...\nfetching id for I've Been Everywhere by Hank Snow ...\nfetching id for You Can Run (But You Can't Hide) by Jerry Butler ...\nfetching id for Heartaches by Patsy Cline ...\nfetching id for Happy Weekend by Dave \"Baby\" Cortez ...\n--> [error] He Thinks I Still Care by Connie Francis\nfetching id for The Searching Is Over by Joe Henderson ...\n--> [error] Happy Weekend by Dave \"Baby\" Cortez\nfetching id for Dear Hearts And Gentle People by The Springfields ...\nfetching id for Fiesta by Dave \"Baby\" Cortez ...\nfetching id for Fools Rush In by Etta James ...\nfetching id for Don't Stop The Wedding by Ann Cole ...\n--> [error] The Searching Is Over by Joe Henderson\nfetching id for If A Man Answers by Bobby Darin ...\nfetching id for Don't You Believe It by Andy Williams ...\nfetching id for I'm Going Back To School by Dee Clark ...\nfetching id for I Remember You by Frank Ifield ...\nfetching id for Ramblin' Rose by Nat King Cole ...\nfetching id for Hide & Go Seek, Part I by Bunker Hill ...\n--> [error] Don't Stop The Wedding by Ann Cole\nfetching id for Twistin' With Linda by The Isley Brothers ...\n--> [error] Hide & Go Seek, Part I by Bunker Hill\nfetching id for The Alley Cat Song by David Thorne ...\n--> [error] Twistin' With Linda by The Isley Brothers\nfetching id for I'm Here To Get My Baby Out Of Jail by The Everly Brothers ...\nfetching id for Four Walls by Kay Starr ...\nfetching id for Mama Sang A Song by Bill Anderson ...\nfetching id for Cold, Cold Heart by Dinah Washington ...\nfetching id for One More Town by The Kingston Trio ...\nfetching id for Aladdin by Bobby Curtola ...\nfetching id for Let's Dance by Chris Montez ...\nfetching id for Warmed Over Kisses (Left Over Love) by Brian Hyland ...\nfetching id for Little Black Book by Jimmy Dean ...\nfetching id for Venus In Blue Jeans by Jimmy Clanton ...\nfetching id for Baby Face by Bobby Darin ...\n--> [error] The Alley Cat Song by David Thorne\nfetching id for No One Will Ever Know by Jimmie Rodgers ...\nfetching id for The Burning Of Atlanta by Claude King ...\nfetching id for Second Fiddle Girl by Barbara Lynn ...\nfetching id for I'll Remember Carol by Tommy Boyce ...\nfetching id for Don't Ever Leave Me by Bob And Earl ...\nfetching id for Magic Wand by Don & Juan ...\nfetching id for What Kind Of Fool Am I? 
by Robert Goulet ...\nfetching id for I Left My Heart In The Balcony by Linda Scott ...\n--> [error] Don't Ever Leave Me by Bob And Earl\nfetching id for Don't Go Near The Indians by Rex Allen ...\nfetching id for Rain Rain Go Away by Bobby Vinton ...\n--> [error] I Left My Heart In The Balcony by Linda Scott\nfetching id for If I Had A Hammer (The Hammer Song) by Peter, Paul & Mary ...\nfetching id for Ten Lonely Guys by Pat Boone ...\nfetching id for King Of The Whole Wide World by Elvis Presley With The Jordanaires ...\nfetching id for Did You Ever See A Dream Walking by Fats Domino ...\nfetching id for Father Knows Best by The Radiants ...\n--> [error] Ten Lonely Guys by Pat Boone\nfetching id for Hully Gully Baby by The Dovells ...\nfetching id for Lie To Me by Brook Benton ...\nfetching id for What Kind Of Love Is This by Joey Dee & the Starliters ...\nfetching id for You Beat Me To The Punch by Mary Wells ...\nfetching id for You Belong To Me by The Duprees ...\nfetching id for Stop The Music by The Shirelles ...\nfetching id for Sheila by Tommy Roe ...\nfetching id for Sweet Sixteen Bars by Earl Grant ...\nfetching id for Save All Your Lovin' For Me by Brenda Lee ...\n--> [error] Father Knows Best by The Radiants\nfetching id for What Kind Of Fool Am I by Anthony Newley ...\nfetching id for Further More by Ray Stevens ...\n--> [error] Save All Your Lovin' For Me by Brenda Lee\nfetching id for Where Do You Come From by Elvis Presley With The Jordanaires ...\n--> [error] Further More by Ray Stevens\nfetching id for Punish Her by Bobby Vee ...\nfetching id for Teen Age Idol by Rick Nelson ...\nfetching id for It Might As Well Rain Until September by Carole King ...\nfetching id for Beechwood 4-5789 by The Marvelettes ...\nfetching id for Come On Little Angel by The Belmonts ...\nfetching id for The Things We Did Last Summer by Shelley Fabares ...\nfetching id for A Wonderful Dream by The Majors ...\nfetching id for A Taste Of Honey by Martin Denny and His Orchestra ...\n--> [error] Teen Age Idol by Rick Nelson\nfetching id for You Can't Judge A Book By The Cover by Bo Diddley ...\nfetching id for When The Boys Get Together by Joanie Sommers ...\nfetching id for I'm Gonna Change Everything by Jim Reeves ...\nfetching id for Limbo Dance by The Champs ...\nfetching id for The Loco-Motion by Little Eva ...\nfetching id for Rinky Dink by Baby Cortez ...\nfetching id for If I Didn't Have A Dime (To Play The Jukebox) by Gene Pitney ...\nfetching id for The Swiss Maid by Del Shannon ...\nfetching id for I Keep Forgettin' by Chuck Jackson ...\nfetching id for What Time Is It? 
by The Jive Five With Eugene Pitts ...\n--> [error] When The Boys Get Together by Joanie Sommers\nfetching id for ...And Then There Were Drums by Sandy Nelson ...\nfetching id for 409 by The Beach Boys ...\nfetching id for Forever And A Day by Jackie Wilson ...\nfetching id for You Can't Lie To A Liar by Ketty Lester ...\nfetching id for Hail To The Conquering Hero by James Darren ...\n--> [error] You Can't Lie To A Liar by Ketty Lester\nfetching id for Hully Gully Guitar by Jerry Reed And The Hully Girlies ...\n--> [error] Hail To The Conquering Hero by James Darren\nfetching id for Silver Threads And Golden Needles by The Springfields ...\nfetching id for She's Not You by Elvis Presley With The Jordanaires ...\nfetching id for Devil Woman by Marty Robbins ...\nfetching id for You Don't Know Me by Ray Charles ...\nfetching id for Shame On Me by Bobby Bare ...\nfetching id for Point Of No Return by Gene McDaniels ...\nfetching id for A Swingin' Safari by Billy Vaughn And His Orchestra ...\nfetching id for Send Me The Pillow You Dream On by Johnny Tillotson ...\nfetching id for Your Nose Is Gonna Grow by Johnny Crawford ...\nfetching id for Party Lights by Claudine Clark ...\nfetching id for Lollipops And Roses by Paul Petersen ...\nfetching id for Don't You Worry by Don Gardner And Dee Dee Ford ...\n--> [error] Hully Gully Guitar by Jerry Reed And The Hully Girlies\nfetching id for Long As The Rose Is Red by Florraine Darlin ...\nfetching id for I Love You The Way You Are by Bobby Vinton ...\nfetching id for Lookin' For A Love by The Valentinos ...\nfetching id for Papa-Oom-Mow-Mow by The Rivingtons ...\nfetching id for Mashed Potatoes U.S.A. by James Brown And The Famous Flames ...\nfetching id for I'm The Girl From Wolverton Mountain by Jo Ann Campbell ...\nfetching id for What's Gonna Happen When Summer's Done by Freddy Cannon ...\nfetching id for The Boys' Night Out by Patti Page ...\n--> [error] Long As The Rose Is Red by Florraine Darlin\nfetching id for Big Love by Joe Henderson ...\nfetching id for Yield Not To Temptation by Bobby Bland ...\nfetching id for Every Night (Without You) by Paul Anka ...\nfetching id for Ol' Man River by Jimmy Smith ...\nfetching id for Broken Heart by The Fiestas ...\nfetching id for Way Over There by The Miracles ...\nfetching id for Sweet Little Sixteen by Jerry Lee Lewis ...\nfetching id for Someday (When I'm Gone From You) by Bobby Vee and The Crickets ...\n--> [error] The Boys' Night Out by Patti Page\nfetching id for Try A Little Tenderness by Aretha Franklin ...\nfetching id for Stop The Wedding by Etta James ...\nfetching id for What's A Matter Baby (Is It Hurting You) by Timi Yuro ...\nfetching id for Things by Bobby Darin ...\nfetching id for Mr. Songwriter by Connie Stevens ...\nfetching id for Baby Elephant Walk by Lawrence Welk And His Orchestra ...\nfetching id for Vacation by Connie Francis ...\nfetching id for Till Death Do Us Part by Bob Braun ...\n--> [error] Baby Elephant Walk by Lawrence Welk And His Orchestra\nfetching id for Glory Of Love by Don Gardner And Dee Dee Ford ...\nfetching id for I Really Don't Want To Know by Solomon Burke ...\nfetching id for Roses Are Red (My Love) by Bobby Vinton ...\nfetching id for Little Diane by Dion ...\nfetching id for Twist And Shout by The Isley Brothers ...\nfetching id for Make It Easy On Yourself by Jerry Butler ...\nfetching id for Bring It On Home To Me by Sam Cooke ...\nfetching id for Call Me Mr. 
In-Between by Burl Ives ...\nfetching id for Lolita Ya-Ya by The Ventures ...\nfetching id for (Theme from) A Summer Place by Dick Roman ...\n--> [error] Till Death Do Us Part by Bob Braun\nfetching id for Beach Party by King Curtis And The Noble Knights ...\n--> [error] (Theme from) A Summer Place by Dick Roman\nfetching id for There Is No Greater Love by The Wanderers ...\n--> [error] Beach Party by King Curtis And The Noble Knights\nfetching id for Copy Cat by Gary U.S. Bonds ...\nfetching id for Bonanza! by Johnny Cash ...\nfetching id for Your Heart Belongs To Me by The Supremes ...\nfetching id for Ahab, The Arab by Ray Stevens ...\nfetching id for Sealed With A Kiss by Brian Hyland ...\nfetching id for You'll Lose A Good Thing by Barbara Lynn ...\nfetching id for Speedy Gonzales by Pat Boone ...\nfetching id for Just Tell Her Jim Said Hello by Elvis Presley With The Jordanaires ...\n--> [error] There Is No Greater Love by The Wanderers\nfetching id for Heart In Hand by Brenda Lee ...\nfetching id for The Wah Watusi by The Orlons ...\nfetching id for Love Me As I Love You by George Maharis ...\nfetching id for Wolverton Mountain by Claude King ...\nfetching id for Jivin' Around by Al Casey Combo ...\nfetching id for Reap What You Sow by Billy Stewart ...\nfetching id for I Want To Be Loved by Dinah Washington ...\nfetching id for Silly Boy (She Doesn't Love You) by The Lettermen ...\nfetching id for Send For Me (If you need some Lovin) by Barbara George ...\nfetching id for Mama (He Treats Your Daughter Mean) by Ruth Brown ...\nfetching id for The Stripper by David Rose and His Orchestra ...\n--> [error] Jivin' Around by Al Casey Combo\nfetching id for (Girls, Girls, Girls) Made To Love by Eddie Hodges ...\nfetching id for Theme From Dr. Kildare (Three Stars Will Shine Tonight) by Richard Chamberlain ...\nfetching id for I Can't Stop Loving You by Ray Charles ...\nfetching id for The Ballad Of Paladin by Duane Eddy ...\nfetching id for Having A Party by Sam Cooke ...\nfetching id for Too Late To Worry - Too Blue To Cry by Glen Campbell ...\nfetching id for A Taste Of Honey by The Victor Feldman Quartet ...\nfetching id for Oh! What It Seemed To Be by The Castells ...\nfetching id for I Wouldn't Know (What To Do) by Dinah Washington ...\nfetching id for For All We Know by Dinah Washington ...\nfetching id for Johnny Get Angry by Joanie Sommers ...\nfetching id for Dancin' Party by Chubby Checker ...\nfetching id for I Need Your Loving by Don Gardner And Dee Dee Ford ...\nfetching id for Have A Good Time by Sue Thompson ...\nfetching id for So Wrong by Patsy Cline ...\nfetching id for I'm Coming Home by Paul Anka ...\n--> [error] A Taste Of Honey by The Victor Feldman Quartet\nfetching id for Beach Party by Dave York and The Beachcombers ...\n--> [error] I'm Coming Home by Paul Anka\nfetching id for Till There Was You by Valjean on Piano ...\n--> [error] Beach Party by Dave York and The Beachcombers\nfetching id for Route 66 Theme by Nelson Riddle ...\nfetching id for Little Red Rented Rowboat by Joe Dowell ...\nfetching id for Gravy (For My Mashed Potatoes) by Dee Dee Sharp ...\nfetching id for It Started All Over Again by Brenda Lee ...\n--> [error] Till There Was You by Valjean on Piano\nfetching id for Al Di La' by Emilio Pericoli ...\nfetching id for Palisades Park by Freddy Cannon ...\nfetching id for Bongo Stomp by Little Joey And The Flips ...\nfetching id for Limbo Rock by The Champs ...\nfetching id for Ben Crazy by Dickie Goodman ...\nfetching id for Above The Stars by Mr. 
Acker Bilk ...\n--> [error] It Started All Over Again by Brenda Lee\nfetching id for I'll Never Dance Again by Bobby Rydell ...\nfetching id for Mary's Little Lamb by James Darren ...\n--> [error] Above The Stars by Mr. Acker Bilk\nfetching id for The Bird Man by The Highwaymen ...\n--> [error] Mary's Little Lamb by James Darren\nfetching id for A Miracle by Frankie Avalon ...\nfetching id for Careless Love by Ray Charles ...\nfetching id for Right String But The Wrong Yo-Yo by Dr. Feelgood And The Interns ...\n--> [error] The Bird Man by The Highwaymen\nfetching id for Worried Mind by Ray Anthony ...\nfetching id for Too Bad by Ben E. King ...\nfetching id for Poor Little Puppet by Cathy Carroll ...\nfetching id for Don't Worry 'Bout Me by Vincent Edwards ...\n--> [error] Right String But The Wrong Yo-Yo by Dr. Feelgood And The Interns\nfetching id for Limbo by The Capris ...\n--> [error] Don't Worry 'Bout Me by Vincent Edwards\nfetching id for Sweet Georgia Brown by Carroll Bros. ...\n--> [error] Limbo by The Capris\nfetching id for Welcome Home Baby by The Shirelles ...\nfetching id for Summertime, Summertime by The Jamies ...\nfetching id for It Keeps Right On A-Hurtin' by Johnny Tillotson ...\nfetching id for Steel Men by Jimmy Dean ...\nfetching id for Playboy by The Marvelettes ...\nfetching id for Johnny Loves Me by Shelley Fabares ...\nfetching id for Never In A Million Years by Linda Scott ...\nfetching id for Why Did You Leave Me? by Vince Edwards ...\n--> [error] Sweet Georgia Brown by Carroll Bros.\nfetching id for Callin' Doctor Casey by John D. Loudermilk ...\nfetching id for Sugar Plum by Ike Clanton ...\n--> [error] Why Did You Leave Me? by Vince Edwards\nfetching id for Snap Your Fingers by Joe Henderson ...\nfetching id for I Don't Love You No More (I Don't Care About You) by Jimmy Norman ...\n--> [error] Sugar Plum by Ike Clanton\nfetching id for But Not For Me by Ketty Lester ...\nfetching id for Little Bitty Pretty One by Clyde McPhatter ...\nfetching id for Walk On The Wild Side (Part 1) by Jimmy Smith And The Big Band ...\nfetching id for The Crowd by Roy Orbison ...\nfetching id for Fortuneteller by Bobby Curtola ...\nfetching id for West Of The Wall by Toni Fisher ...\nfetching id for Cindy's Birthday by Johnny Crawford ...\nfetching id for Seven Day Weekend by Gary U.S. Bonds ...\nfetching id for My Daddy Is President by Little Jo Ann ...\n--> [error] I Don't Love You No More (I Don't Care About You) by Jimmy Norman\nfetching id for Stranger On The Shore by Mr. 
Acker Bilk ...\nfetching id for Where Are You by Dinah Washington ...\nfetching id for Swingin' Gently by Earl Grant ...\nfetching id for La Bomba by The Tokens ...\n--> [error] My Daddy Is President by Little Jo Ann\nfetching id for Life's Too Short by The Lafayettes ...\n--> [error] La Bomba by The Tokens\nfetching id for Theme From \"Hatari!\" by Henry Mancini And His Orchestra ...\n--> [error] Life's Too Short by The Lafayettes\nfetching id for Come On Baby by Bruce Channel ...\n--> [error] Theme From \"Hatari!\" by Henry Mancini And His Orchestra\nfetching id for Houdini by Walter Brennan ...\n--> [error] Come On Baby by Bruce Channel\nfetching id for A Steel Guitar And A Glass Of Wine by Paul Anka ...\nfetching id for Bristol Twistin' Annie by The Dovells ...\nfetching id for If I Should Lose You by Dreamlovers ...\nfetching id for Hot Pepper by Floyd Cramer ...\n--> [error] Houdini by Walter Brennan\nfetching id for Shout And Shimmy by James Brown And The Famous Flames ...\n--> [error] Hot Pepper by Floyd Cramer\nfetching id for Boom Boom by John Lee Hooker ...\nfetching id for I Just Can't Help It by Jackie Wilson ...\nfetching id for All Night Long by Sandy Nelson ...\nfetching id for Sweet And Lovely by April Stevens & Nino Tempo ...\nfetching id for Goodnight, Irene by Jerry Reed And The Hully Girlies ...\n--> [error] Shout And Shimmy by James Brown And The Famous Flames\nfetching id for Nothing New (Same Old Thing) by Fats Domino ...\nfetching id for Little Young Lover by The Impressions ...\nfetching id for I'm Tossin' And Turnin' Again by Bobby Lewis ...\nfetching id for Stranger On The Shore by Andy Williams ...\nfetching id for Sharing You by Bobby Vee ...\nfetching id for (The Man Who Shot) Liberty Valance by Gene Pitney ...\nfetching id for That's Old Fashioned (That's The Way Love Should Be) by The Everly Brothers ...\nfetching id for Down In The Valley by Solomon Burke ...\nfetching id for That Greasy Kid Stuff by Janie Grant ...\nfetching id for I'm Hanging Up My Heart For You by Solomon Burke ...\nfetching id for Keep Your Hands In Your Pockets by The Playmates ...\n--> [error] Goodnight, Irene by Jerry Reed And The Hully Girlies\nfetching id for Potato Peeler by Bobby Gregg and His Friends ...\n--> [error] Keep Your Hands In Your Pockets by The Playmates\nfetching id for Don't Cry, Baby by Aretha Franklin ...\nfetching id for Dance With Mr. Domino by Fats Domino ...\nfetching id for Goodbye Dad by The Castle Sisters ...\nfetching id for I Love You by The Volume's ...\nfetching id for The One Who Really Loves You by Mary Wells ...\nfetching id for Any Day Now (My Wild Beautiful Bird) by Chuck Jackson ...\nfetching id for I'll Try Something New by The Miracles ...\nfetching id for Theme From Ben Casey by Valjean on Piano ...\n--> [error] Potato Peeler by Bobby Gregg and His Friends\nfetching id for Dr. Ben Basey by Mickey Shorr and The Cutups ...\n--> [error] Theme From Ben Casey by Valjean on Piano\nfetching id for Follow That Dream by Elvis Presley ...\nfetching id for Why'd You Wanna Make Me Cry by Connie Stevens ...\n--> [error] Dr. 
Ben Basey by Mickey Shorr and The Cutups\nfetching id for Village Of Love by Nathaniel Mayer And The Fabulous Twilights ...\nfetching id for Keep Your Love Locked (Deep In Your Heart) by Paul Petersen ...\nfetching id for Good Lover by Jimmy Reed ...\nfetching id for Where Have You Been (All My Life) by Arthur Alexander ...\n--> [error] Why'd You Wanna Make Me Cry by Connie Stevens\nfetching id for Born To Lose by Ray Charles ...\nfetching id for Second Hand Love by Connie Francis ...\nfetching id for Woman Is A Man's Best Friend by Teddy & The Twilights ...\nfetching id for Lovers Who Wander by Dion ...\nfetching id for So This Is Love by The Castells ...\nfetching id for Don't Play That Song (You Lied) by Ben E. King ...\nfetching id for Tennessee by Jan & Dean ...\nfetching id for The Green Leaves Of Summer by Kenny Ball and his Jazzmen ...\nfetching id for You Should'a Treated Me Right by Ike & Tina Turner ...\nfetching id for Dancin' The Strand by Maureen Gray ...\nfetching id for What Did Daddy Do by Shep And The Limelites ...\n--> [error] Where Have You Been (All My Life) by Arthur Alexander\nfetching id for Teach Me Tonight by George Maharis ...\nfetching id for Twistin' Matilda (and the channel) by Jimmy Soul ...\nfetching id for How Is Julie? by The Lettermen ...\nfetching id for I Sold My Heart To The Junkman by The Blue-Belles ...\nfetching id for Balboa Blue by The Marketts ...\nfetching id for Mashed Potato Time by Dee Dee Sharp ...\nfetching id for Shout! Shout! (Knock Yourself Out) by Ernie Maresca ...\nfetching id for That Happy Feeling by Bert Kaempfert And His Orchestra ...\nfetching id for Oh My Angel by Bertha Tillman ...\nfetching id for Baby Elephant Walk by The Miniature Men ...\n--> [error] I Sold My Heart To The Junkman by The Blue-Belles\nfetching id for Shake A Hand by Ruth Brown ...\n--> [error] Baby Elephant Walk by The Miniature Men\nfetching id for My Time For Cryin' by Maxine Brown ...\nfetching id for Cry Myself To Sleep by Del Shannon ...\nfetching id for Soldier Boy by The Shirelles ...\nfetching id for Uptown by The Crystals ...\nfetching id for Lemon Tree by Peter, Paul & Mary ...\nfetching id for Everybody Loves Me But You by Brenda Lee ...\nfetching id for Night Train by James Brown And The Famous Flames ...\nfetching id for When I Get Thru With You (You'll Love Me Too) by Patsy Cline ...\nfetching id for Hit Record by Brook Benton ...\nfetching id for My Real Name by Fats Domino ...\nfetching id for Queen Of My Heart by Rene And Ray ...\nfetching id for Doctor Feel-Good by Dr. Feelgood And The Interns ...\n--> [error] Shake A Hand by Ruth Brown\nfetching id for Adios Amigo by Jim Reeves ...\nfetching id for Scotch And Soda by The Kingston Trio ...\nfetching id for Lisa by Ferrante & Teicher ...\n--> [error] Doctor Feel-Good by Dr. Feelgood And The Interns\nfetching id for Air Travel by Ray And Bob ...\n--> [error] Lisa by Ferrante & Teicher\nfetching id for Old Rivers by Walter Brennan ...\nfetching id for She Cried by Jay & The Americans ...\nfetching id for Conscience by James Darren ...\nfetching id for Funny Way Of Laughin' by Burl Ives ...\nfetching id for Tell Me by Dick and DeeDee ...\n--> [error] Air Travel by Ray And Bob\nfetching id for I Wish That We Were Married by Ronnie and The Hi-Lites ...\nfetching id for Caterina by Perry Como ...\nfetching id for How Can I Meet Her? 
by The Everly Brothers ...
[verbose lookup log: one "fetching id for <title> by <artist> ..." progress line per song, with a "--> [error] <title> by <artist>" marker wherever a lookup fails; this stretch of the output covers early-1960s pop chart entries such as "Johnny Angel" by Shelley Fabares, "Duke Of Earl" by Gene Chandler, "Runaway" by Del Shannon, and "The Lion Sleeps Tonight" by The Tokens]
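The output above follows a fixed pattern: a `fetching id for <title> by <artist> ...` progress line is printed for every song, and a `--> [error] <title> by <artist>` marker is printed whenever a lookup cannot be resolved. The code that produces this log is not visible in this section, so the sketch below is only a minimal illustration of that pattern under stated assumptions: `fetch_track_id` is a hypothetical placeholder for whatever music-metadata API the notebook actually queries (Spotify, MusicBrainz, or similar), and the loop simply reproduces the logging format seen in the output.

    # Minimal sketch of the lookup loop implied by the log output above.
    # Assumption: `fetch_track_id` is a hypothetical placeholder -- swap in the
    # real search call (Spotify, MusicBrainz, ...) that the notebook uses.

    def fetch_track_id(title, artist):
        """Return an id for (title, artist), or None when no match is found."""
        # A real implementation would query a music-metadata API here.
        return None

    def fetch_ids(songs):
        """songs: iterable of (title, artist) pairs; returns {(title, artist): id}."""
        ids = {}
        for title, artist in songs:
            print(f"fetching id for {title} by {artist} ...")
            try:
                track_id = fetch_track_id(title, artist)
            except Exception:
                track_id = None
            if track_id is None:
                # Mirrors the "--> [error]" marker seen in the log above.
                print(f"--> [error] {title} by {artist}")
                continue
            ids[(title, artist)] = track_id
        return ids

    # Example call with two entries that appear in the log:
    fetch_ids([("Johnny Angel", "Shelley Fabares"),
               ("Duke Of Earl", "Gene Chandler")])

One visible detail the sketch simplifies: in the log above the `--> [error]` marker usually appears only after the next `fetching id` line has been printed, which suggests the real code reports a failure with a delay (for example after a retry); the sketch prints the marker immediately.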
...\n--> [error] Don't Believe Him, Donna by Lenny Miles\nfetching id for What Am I Gonna Do by Jimmy Clanton ...\nfetching id for Wonderland By Night by Louis Prima ...\nfetching id for You Are The Only One by Ricky Nelson ...\nfetching id for Don't Let Him Shop Around by Debbie Dean ...\nfetching id for Lovey Dovey by Buddy Knox ...\nfetching id for Calcutta by The Four Preps ...\nfetching id for What About Me by Don Gibson ...\nfetching id for Baby Oh Baby by The Shells ...\nfetching id for He Will Break Your Heart by Jerry Butler ...\nfetching id for Wonderland By Night by Anita Bryant ...\n--> [error] Main Theme from Exodus (Ari's Theme) by Mantovani & His Orch.\nfetching id for Yes, I'm Lonesome Tonight by Thelma Carpenter ...\n--> [error] Wonderland By Night by Anita Bryant\nfetching id for Yes, I'm Lonesome Tonight by Dodie Stevens ...\nfetching id for Sound-Off by Titus Turner ...\nfetching id for You're Sixteen by Johnny Burnette ...\nfetching id for Many Tears Ago by Connie Francis ...\nfetching id for Doll House by Donnie Brooks ...\nfetching id for Stay by Maurice Williams & The Zodiacs ...\nfetching id for Lonely Teenager by Dion ...\nfetching id for My Last Date (With You) by Skeeter Davis ...\nfetching id for Blue Tango by Bill Black's Combo ...\nfetching id for Gee Whiz by The Innocents ...\nfetching id for I'm Hurtin' by Roy Orbison ...\nfetching id for My Last Date (With You) by Joni James ...\n--> [error] Yes, I'm Lonesome Tonight by Thelma Carpenter\nfetching id for I Remember by Maurice Williams & The Zodiacs ...\nfetching id for Sugar Bee by Cleveland Crochet and Band ...\n--> [error] My Last Date (With You) by Joni James\nfetching id for Chills And Fever by Ronnie Love And His Orchestra ...\nfetching id for Trouble In Mind by Nina Simone ...\nfetching id for My Girl Josephine by Fats Domino ...\nfetching id for Flamingo Express by The Royaltones ...\n--> [error] Chills And Fever by Ronnie Love And His Orchestra\nfetching id for I Gotta Know by Elvis Presley With The Jordanaires ...\n--> [error] Flamingo Express by The Royaltones\nfetching id for Bumble Bee by LaVern Baker ...\nfetching id for Perfidia by The Ventures ...\nfetching id for Don't Read The Letter by Patti Page ...\nfetching id for Happy Days by Marv Johnson ...\nfetching id for Last Date by Lawrence Welk And His Orchestra ...\n--> [error] Don't Read The Letter by Patti Page\nfetching id for Sad Mood by Sam Cooke ...\nfetching id for You Don't Want My Love by Andy Williams ...\nfetching id for Gift Of Love by Van Dykes ...\nfetching id for Tonite, Tonite by Mello-Kings ...\nfetching id for The Puppet Song by Frankie Avalon ...\n--> [error] Last Date by Lawrence Welk And His Orchestra\nfetching id for Oh, How I Miss You Tonight by Jeanne Black ...\nfetching id for In The Still Of The Nite by The Five Satins ...\nfetching id for Sway by Bobby Rydell ...\nfetching id for Poetry In Motion by Johnny Tillotson ...\nfetching id for Ruby by Ray Charles ...\nfetching id for A Perfect Love by Frankie Avalon ...\nfetching id for Gonzo by James Booker ...\nfetching id for Alone At Last by Jackie Wilson ...\nfetching id for Walk Slow by Little Willie John ...\nfetching id for Your Other Love by The Flamingos ...\nfetching id for I Don't Want Nobody (To Have My Love But You) by Ella Johnson With Buddy Johnson ...\n--> [error] The Puppet Song by Frankie Avalon\nfetching id for Milk Cow Blues by Ricky Nelson ...\nfetching id for Wabash Blues by The Viscounts ...\nfetching id for You've Got To Love Her With A Feeling by 
Freddy King ...\nfetching id for Fools Rush In (Where Angels Fear To Tread) by Brook Benton ...\nfetching id for And The Heavens Cried by Ronnie Savoy ...\n--> [error] Milk Cow Blues by Ricky Nelson\nfetching id for New Orleans by U.S. Bonds ...\nfetching id for Ballad Of The Alamo by Marty Robbins ...\nfetching id for How To Handle A Woman by Johnny Mathis ...\nfetching id for (Let's Do) The Hully Gully Twist by Bill Doggett ...\n--> [error] And The Heavens Cried by Ronnie Savoy\nfetching id for We Have Love by Dinah Washington ...\nfetching id for If I Knew by Nat King Cole ...\nfetching id for Is There Something On Your Mind by Jack Scott ...\nfetching id for I'll Save The Last Dance For You by Damita Jo ...\nfetching id for Oh Lonesome Me by Johnny Cash With The Gene Lowery Singers ...\nfetching id for Rockin', Rollin' Ocean by Hank Snow ...\nfetching id for Spoonful by Etta & Harvey ...\nfetching id for The Hucklebuck by Chubby Checker ...\nfetching id for Am I Losing You by Jim Reeves ...\n--> [error] (Let's Do) The Hully Gully Twist by Bill Doggett\nfetching id for Twistin' Bells by Santo & Johnny ...\nfetching id for Christmas Auld Lang Syne by Bobby Darin ...\nfetching id for Let's Go, Let's Go, Let's Go by Hank Ballard And The Midnighters ...\nfetching id for Like Strangers by The Everly Brothers ...\nfetching id for Ol' Mac Donald by Frank Sinatra ...\nfetching id for Save The Last Dance For Me by The Drifters ...\nfetching id for Mister Livingston by Larry Verne ...\nfetching id for Make Someone Happy by Perry Como ...\n--> [error] Ol' Mac Donald by Frank Sinatra\nfetching id for I Idolize You by Ike & Tina Turner ...\nfetching id for Talk To Me Baby by Annette With The Afterbeats ...\n--> [error] Make Someone Happy by Perry Como\nfetching id for This Is My Story by Mickey & Sylvia ...\nfetching id for Adeste Fideles (Oh, Come, All Ye Faithful) by Bing Crosby ...\nfetching id for Silent Night by Bing Crosby ...\nfetching id for Am I The Man by Jackie Wilson ...\nfetching id for Natural Born Lover by Fats Domino ...\nfetching id for Rudolph The Red Nosed Reindeer by The Melodeers ...\n--> [error] This Is My Story by Mickey & Sylvia\nfetching id for Rambling by The Ramblers ...\nfetching id for Gloria's Theme by Adam Wade ...\nfetching id for Send Me The Pillow You Dream On by The Browns Featuring Jim Edward Brown ...\nfetching id for Ramona by The Blue Diamonds ...\nfetching id for Have You Ever Been Lonely (Have You Ever Been Blue) by Teresa Brewer ...\nfetching id for Someday You'll Want Me To Want You by Brook Benton ...\nfetching id for Child Of God by Bobby Darin ...\nfetching id for Georgia On My Mind by Ray Charles ...\nfetching id for I Want To Be Wanted by Brenda Lee ...\nfetching id for Don't Go To Strangers by Etta Jones ...\nfetching id for Sleep by Little Willie John ...\nfetching id for Blue Angel by Roy Orbison ...\nfetching id for The Bells by James Brown ...\nfetching id for To Each His Own by The Platters ...\nfetching id for I Missed Me by Jim Reeves ...\nfetching id for Ruby Duby Du by Tobin Mathews & Co. 
...\n--> [error] Rudolph The Red Nosed Reindeer by The Melodeers\nfetching id for Blue Christmas by The Browns Featuring Jim Edward Brown ...\nfetching id for You Talk Too Much by Joe Jones ...\nfetching id for Artificial Flowers by Bobby Darin ...\nfetching id for Alabam by Pat Boone ...\nfetching id for Togetherness by Frankie Avalon ...\nfetching id for My Dearest Darling by Etta James ...\nfetching id for Dear John by Pat Boone ...\n--> [error] Alabam by Pat Boone\nfetching id for Wait For Me by The Playmates ...\n--> [error] Dear John by Pat Boone\nfetching id for Groovy Tonight by Bobby Rydell ...\n--> [error] Wait For Me by The Playmates\nfetching id for Ruby Duby Du From Key Witness by Charles Wolcott ...\n--> [error] Groovy Tonight by Bobby Rydell\nfetching id for Alabam by Cowboy Copas ...\nfetching id for Gee by Jan & Dean ...\nfetching id for Hardhearted Hannah by Ray Charles ...\n--> [error] Ruby Duby Du From Key Witness by Charles Wolcott\nfetching id for The Green Leaves Of Summer by The Brothers Four ...\nfetching id for (You Better) Know What You're Doin' by Lloyd Price and His Orchestra ...\n--> [error] Hardhearted Hannah by Ray Charles\nfetching id for The Big Time Spender (Parts I & II) by Cornbread & Biscuits ...\nfetching id for Ballad Of The Alamo by Bud & Travis ...\n--> [error] The Big Time Spender (Parts I & II) by Cornbread & Biscuits\nfetching id for Devil Or Angel by Bobby Vee ...\nfetching id for Don't Be Cruel by Bill Black's Combo ...\nfetching id for Peter Gunn by Duane Eddy His Twangy Guitar And The Rebels ...\n--> [error] Ballad Of The Alamo by Bud & Travis\nfetching id for My Heart Has A Mind Of Its Own by Connie Francis ...\nfetching id for Summer's Gone by Paul Anka ...\nfetching id for Let's Think About Living by Bob Luman ...\nfetching id for Love Walked In by Dinah Washington ...\nfetching id for The Sundowners by Billy Vaughn And His Orchestra ...\n--> [error] Peter Gunn by Duane Eddy His Twangy Guitar And The Rebels\nfetching id for Cry Cry Cry by Bobby Bland ...\nfetching id for Theme From The Apartment by Ferrante & Teicher ...\nfetching id for Serenata by Sarah Vaughan ...\nfetching id for You Are My Sunshine by Johnny And The Hurricanes ...\nfetching id for Sweet Dreams by Don Gibson ...\nfetching id for Psycho by Bobby Hendricks ...\nfetching id for Diamonds And Pearls by The Paradons ...\nfetching id for Little Miss Blue by Dion ...\nfetching id for Chain Gang by Sam Cooke ...\nfetching id for Whole Lot Of Shakin' Going On by Conway Twitty ...\n--> [error] The Sundowners by Billy Vaughn And His Orchestra\nfetching id for Kiddio by Brook Benton ...\nfetching id for Tonights The Night by The Shirelles ...\nfetching id for A Million To One by Jimmy Charles and The Revelletts ...\n--> [error] Whole Lot Of Shakin' Going On by Conway Twitty\nfetching id for Night Theme by The Mark II ...\nfetching id for Have Mercy Baby by The Bobbettes ...\n--> [error] Night Theme by The Mark II\nfetching id for Whole Lotta Shakin' Goin' On by Chubby Checker ...\nfetching id for Stranger From Durango by Richie Allen ...\n--> [error] Have Mercy Baby by The Bobbettes\nfetching id for Theme From The Sundowners by Felix Slatkin Orchestra and Chorus ...\n--> [error] Stranger From Durango by Richie Allen\nfetching id for (Theme from) \"The Dark At The Top Of The Stairs\" by Ernie Freeman ...\n--> [error] Theme From The Sundowners by Felix Slatkin Orchestra and Chorus\nfetching id for It's Now Or Never by Elvis Presley With The Jordanaires ...\nfetching id for So Sad 
(To Watch Good Love Go Bad) by The Everly Brothers ...\nfetching id for Mr. Custer by Larry Verne ...\nfetching id for A Fool In Love by Ike & Tina Turner ...\nfetching id for Somebody To Love by Bobby Darin ...\nfetching id for Shimmy Like Kate by The Olympics ...\nfetching id for Side Car Cycle by Charlie Ryan and the Timberline Riders ...\n--> [error] (Theme from) \"The Dark At The Top Of The Stairs\" by Ernie Freeman\nfetching id for Anymore by Teresa Brewer ...\nfetching id for Push Push by Austin Taylor ...\nfetching id for One Of The Lucky Ones by Anita Bryant ...\n--> [error] Push Push by Austin Taylor\nfetching id for Fallen Angel by Webb Pierce ...\n--> [error] One Of The Lucky Ones by Anita Bryant\nfetching id for (You've Got To) Move Two Mountains by Marv Johnson ...\nfetching id for Three Nights A Week by Fats Domino ...\nfetching id for Twistin' U.S.A. by Danny & The Juniors ...\nfetching id for Walk -- Don't Run by The Ventures ...\nfetching id for I Wish I'd Never Been Born by Patti Page ...\n--> [error] Fallen Angel by Webb Pierce\nfetching id for Run Samson Run by Neil Sedaka ...\nfetching id for That's How Much by Brian Hyland ...\nfetching id for Pineapple Princess by Annette With The Afterbeats ...\n--> [error] I Wish I'd Never Been Born by Patti Page\nfetching id for Lucille by The Everly Brothers ...\nfetching id for Finger Poppin' Time by Hank Ballard And The Midnighters ...\nfetching id for If She Should Come To You (La Montana) by Anthony Newley ...\nfetching id for Patsy by Jack Scott ...\nfetching id for (I Do The) Shimmy Shimmy by Bobby Freeman ...\nfetching id for Senza Mamma (With No One) by Connie Francis ...\nfetching id for Midnight Lace by Ray Ellis ...\nfetching id for Everglades by The Kingston Trio ...\nfetching id for Kissin' And Twistin' by Fabian ...\n--> [error] Midnight Lace by Ray Ellis\nfetching id for (Theme From) The Sundowners by Mantovani & His Orchestra ...\n--> [error] Kissin' And Twistin' by Fabian\nfetching id for Midnight Lace - Part I by Ray Conniff His Orchestra And Chorus ...\n--> [error] (Theme From) The Sundowners by Mantovani & His Orchestra\nfetching id for A Thousand Miles Away by The Heartbeats ...\nfetching id for My Love For You by Johnny Mathis ...\nfetching id for The Same One by Brook Benton ...\nfetching id for Just A Little by Brenda Lee ...\nfetching id for You Mean Everything To Me by Neil Sedaka ...\nfetching id for Let's Have A Party by Wanda Jackson ...\nfetching id for Dreamin' by Johnny Burnette ...\nfetching id for Humdinger by Freddy Cannon ...\nfetching id for The Twist by Hank Ballard And The Midnighters ...\nfetching id for I'm Sorry by Brenda Lee ...\nfetching id for The Girl With The Story In Her Eyes by Safaris with The Phantom's Band ...\nfetching id for Temptation by Roger Williams ...\nfetching id for Irresistable You by Bobby Peterson ...\n--> [error] Midnight Lace - Part I by Ray Conniff His Orchestra And Chorus\nfetching id for Volare by Bobby Rydell ...\nfetching id for I'm Not Afraid by Ricky Nelson ...\nfetching id for Mission Bell by Donnie Brooks ...\nfetching id for (The Clickity Clack Song) Four Little Heels by Brian Hyland ...\n--> [error] Irresistable You by Bobby Peterson\nfetching id for My Hero by The Blue Notes ...\nfetching id for Shoppin' For Clothes by The Coasters ...\nfetching id for Dance With Me Georgie by The Bobbettes ...\n--> [error] (The Clickity Clack Song) Four Little Heels by Brian Hyland\nfetching id for Only The Lonely (Know How I Feel) by Roy Orbison ...\nfetching id for 
Midnight Lace by David Carroll And His Orchestra ...\n--> [error] Dance With Me Georgie by The Bobbettes\nfetching id for Isn't It Amazing by The Crests featuring Johnny Mastro ...\nfetching id for Honest I Do by The Innocents ...\nfetching id for Malagueña by Connie Francis ...\nfetching id for Yes Sir, That's My Baby by Ricky Nelson ...\nfetching id for Hello Young Lovers by Paul Anka ...\n--> [error] Midnight Lace by David Carroll And His Orchestra\nfetching id for Ta Ta by Clyde McPhatter ...\nfetching id for If I Can't Have You by Etta & Harvey ...\nfetching id for You're Looking Good by Dee Clark ...\nfetching id for Hush-Hush by Jimmy Reed ...\nfetching id for Time Machine by Dante and the Evergreens ...\n--> [error] Hello Young Lovers by Paul Anka\nfetching id for It's Not The End Of Everything by Tommy Edwards ...\nfetching id for Don't Let Love Pass Me By by Frankie Avalon ...\nfetching id for You Talk Too Much by Frankie Ford ...\n--> [error] Don't Let Love Pass Me By by Frankie Avalon\nfetching id for Please Help Me, I'm Falling by Hank Locklin ...\nfetching id for Harmony by Billy Bland ...\nfetching id for The Last One To Know by The Fleetwoods ...\nfetching id for Yogi by The Ivy Three ...\nfetching id for Itsy Bitsy Teenie Weenie Yellow Polkadot Bikini by Brian Hyland ...\n--> [error] You Talk Too Much by Frankie Ford\nfetching id for In My Little Corner Of The World by Anita Bryant ...\nfetching id for Over The Rainbow by The Demensions ...\nfetching id for Hot Rod Lincoln by Johnny Bond ...\nfetching id for I Love You In The Same Old Way by Paul Anka ...\nfetching id for Come Back by Jimmy Clanton ...\nfetching id for Rocking Goose by Johnny And The Hurricanes ...\nfetching id for Put Your Arms Around Me Honey by Fats Domino ...\nfetching id for Wait by Jimmy Clanton ...\nfetching id for A Kookie Little Paradise by Jo Ann Campbell ...\n--> [error] Itsy Bitsy Teenie Weenie Yellow Polkadot Bikini by Brian Hyland\nfetching id for Big Boy Pete by The Olympics ...\nfetching id for (You Were Made For) All My Love by Jackie Wilson ...\nfetching id for Hot Rod Lincoln by Charlie Ryan and the Timberline Riders ...\n--> [error] A Kookie Little Paradise by Jo Ann Campbell\nfetching id for Image Of A Girl by Safaris with The Phantom's Band ...\nfetching id for A Mess Of Blues by Elvis Presley With The Jordanaires ...\nfetching id for Let The Good Times Roll by Shirley & Lee ...\nfetching id for My Shoes Keep Walking Back To You by Guy Mitchell ...\nfetching id for I Walk The Line by Jaye P. Morgan ...\nfetching id for (I Can't Help You) I'm Falling Too by Skeeter Davis ...\nfetching id for Is You Is Or Is You Ain't My Baby by Buster Brown ...\nfetching id for Five Brothers by Marty Robbins ...\nfetching id for Nice 'N' Easy by Frank Sinatra ...\nfetching id for Over The Mountain; Across The Sea by Johnnie & Joe ...\nfetching id for Just Call Me (And I'll Understand) by Lloyd Price and His Orchestra ...\n--> [error] I Walk The Line by Jaye P. 
Morgan\nfetching id for And Now by Della Reese ...\nfetching id for The Lovin' Touch by Mark Dinning ...\nfetching id for Tonight's The Night by The Chiffons ...\nfetching id for Journey Of Love by The Crests featuring Johnny Mastro ...\nfetching id for A Woman, A Lover, A Friend by Jackie Wilson ...\nfetching id for It Only Happened Yesterday by Jack Scott ...\nfetching id for Brontosaurus Stomp by The Piltdown Men ...\nfetching id for This Old Heart by James Brown And The Famous Flames ...\nfetching id for Kommotion by Duane Eddy ...\nfetching id for No by Dodie Stevens ...\nfetching id for Alvin For President by David Seville And The Chipmunks ...\n--> [error] The Lovin' Touch by Mark Dinning\nfetching id for The Wreck Of The \"John B\" by Jimmie Rodgers ...\nfetching id for Walking To New Orleans by Fats Domino ...\nfetching id for Mule Skinner Blues by The Fendermen ...\nfetching id for Lisa by Jeanne Black ...\n--> [error] Alvin For President by David Seville And The Chipmunks\nfetching id for Feel So Fine by Johnny Preston ...\nfetching id for My Love by Nat King Cole-Stan Kenton ...\n--> [error] Lisa by Jeanne Black\nfetching id for Shortnin' Bread by Paul Chaplain and his Emeralds ...\nfetching id for Red Sails In The Sunset by The Platters Featuring Tony Williams ...\nfetching id for We Go Together by Jan & Dean ...\nfetching id for Kookie Little Paradise by The Tree Swingers ...\n--> [error] My Love by Nat King Cole-Stan Kenton\nfetching id for This Bitter Earth by Dinah Washington ...\nfetching id for Tell Laura I Love Her by Ray Peterson ...\nfetching id for Look For A Star by Garry Miles ...\n--> [error] Kookie Little Paradise by The Tree Swingers\nfetching id for How High The Moon (Part 1) by Ella Fitzgerald ...\nfetching id for Since I Met You Baby by Bobby Vee ...\nfetching id for Many A Wonderful Moment by Rosemary Clooney ...\n--> [error] Look For A Star by Garry Miles\nfetching id for The Old Oaken Bucket by Tommy Sands ...\n--> [error] Many A Wonderful Moment by Rosemary Clooney\nfetching id for Nobody Knows You When You're Down And Out by Nina Simone ...\nfetching id for A Teenager Feels It Too by Denny Reed ...\nfetching id for Trouble In Paradise by The Crests ...\nfetching id for Alley-Oop by Hollywood Argyles ...\nfetching id for Everybody's Somebody's Fool by Connie Francis ...\nfetching id for Question by Lloyd Price and His Orchestra ...\n--> [error] The Old Oaken Bucket by Tommy Sands\nfetching id for Don't Come Knockin' by Fats Domino ...\nfetching id for Look For A Star by Billy Vaughn And His Orchestra ...\nfetching id for That's All You Gotta Do by Brenda Lee ...\nfetching id for One Of Us (Will Weep Tonight) by Patti Page ...\n--> [error] Look For A Star by Billy Vaughn And His Orchestra\nfetching id for Delia Gone by Pat Boone ...\n--> [error] One Of Us (Will Weep Tonight) by Patti Page\nfetching id for Is There Any Chance by Marty Robbins ...\nfetching id for Candy Sweet by Pat Boone ...\n--> [error] Delia Gone by Pat Boone\nfetching id for Shortnin' Bread by The Bell Notes ...\nfetching id for Beachcomber by Bobby Darin ...\nfetching id for In The Still Of The Night by Dion & The Belmonts ...\nfetching id for Where Are You by Frankie Avalon ...\nfetching id for Look For A Star by Deane Hawley ...\nfetching id for Look For A Star - Part I by Garry Mills ...\n--> [error] Candy Sweet by Pat Boone\nfetching id for One Boy by Joanie Sommers ...\nfetching id for Little Bitty Pretty One by Frankie Lymon ...\nfetching id for Josephine by Bill Black's Combo 
...\nfetching id for The Brigade Of Broken Hearts by Paul Evans ...\n--> [error] Look For A Star - Part I by Garry Mills\nfetching id for Please Help Me, I'm Falling by Rusty Draper ...\n--> [error] Please Help Me, I'm Falling by Rusty Draper\nfetching id for Because They're Young by Duane Eddy And The Rebels ...\n--> [error] The Brigade Of Broken Hearts by Paul Evans\nfetching id for Bongo Bongo Bongo by Preston Epps ...\nfetching id for Revival by Johnny And The Hurricanes ...\nfetching id for Far, Far Away by Don Gibson ...\nfetching id for Vaquero (Cowboy) by The Fireballs ...\nfetching id for When Will I Be Loved by The Everly Brothers ...\nfetching id for Bad Man Blunder by The Kingston Trio ...\nfetching id for Is A Blue Bird Blue by Conway Twitty ...\n--> [error] Because They're Young by Duane Eddy And The Rebels\nfetching id for Alley-Oop by Dante and the Evergreens ...\nfetching id for I Shot Mr. Lee by The Bobbettes ...\nfetching id for There's Something On Your Mind (Part 2) by Bobby Marchan ...\nfetching id for Heartbreak (It's Hurtin' Me) by Little Willie John ...\nfetching id for Heartbreak (It's Hurtin' Me) by Jon Thomas and Orchestra ...\n--> [error] Is A Blue Bird Blue by Conway Twitty\nfetching id for Mio Amore by The Flamingos ...\nfetching id for I'm Gettin' Better by Jim Reeves ...\nfetching id for I Know One by Jim Reeves ...\nfetching id for Blue Velvet by The Statues ...\n--> [error] Heartbreak (It's Hurtin' Me) by Jon Thomas and Orchestra\nfetching id for Happy Shades Of Blue by Freddie Cannon ...\n--> [error] Blue Velvet by The Statues\nfetching id for I Really Don't Want To Know by Tommy Edwards ...\nfetching id for Wake Me, Shake Me by The Coasters ...\nfetching id for Hey Little One by Dorsey Burnette ...\nfetching id for A Rockin' Good Way (To Mess Around And Fall In Love) by Dinah Washington & Brook Benton ...\nfetching id for My Home Town by Paul Anka ...\nfetching id for Sticks And Stones by Ray Charles and his Orchestra ...\n--> [error] Happy Shades Of Blue by Freddie Cannon\nfetching id for Runaround by The Fleetwoods ...\nfetching id for My Tani by The Brothers Four ...\nfetching id for Clap Your Hands by The Beau-Marks ...\nfetching id for Night Train by The Viscounts ...\n--> [error] Sticks And Stones by Ray Charles and his Orchestra\nfetching id for Wonderful World by Sam Cooke ...\nfetching id for Do You Mind? by Anthony Newley ...\nfetching id for Do You Mind? 
by Andy Williams ...\nfetching id for The Last Dance by The McGuire Sisters ...\n--> [error] Night Train by The Viscounts\nfetching id for Cathy's Clown by The Everly Brothers ...\nfetching id for Burning Bridges by Jack Scott ...\nfetching id for All I Could Do Was Cry by Etta James ...\nfetching id for Love You So by Ron Holden with The Thunderbirds ...\n--> [error] The Last Dance by The McGuire Sisters\nfetching id for Swingin' Down The Lane by Jerry Wallace ...\nfetching id for Cool Water by Jack Scott ...\nfetching id for I've Been Loved Before by Shirley and Lee ...\nfetching id for That's When I Cried by Jimmy Jones ...\nfetching id for Be Bop A-Lula by The Everly Brothers ...\nfetching id for She's Mine by Conway Twitty ...\nfetching id for Jealous Of You (Tango Della Gelosia) by Connie Francis ...\nfetching id for Paper Roses by Anita Bryant ...\n--> [error] I've Been Loved Before by Shirley and Lee\nfetching id for Pennies From Heaven by The Skyliners ...\nfetching id for Mack The Knife by Ella Fitzgerald ...\nfetching id for Happy-Go-Lucky-Me by Paul Evans ...\n--> [error] Paper Roses by Anita Bryant\nfetching id for Won't You Come Home Bill Bailey by Bobby Darin ...\nfetching id for I Can't Help It by Adam Wade ...\nfetching id for Too Young To Go Steady by Connie Stevens ...\nfetching id for Lonely Weekends by Charlie Rich ...\nfetching id for There's A Star Spangled Banner Waving #2 (The Ballad Of Francis Powers) by Red River Dave ...\n--> [error] I Can't Help It by Adam Wade\nfetching id for Theme From Adventures In Paradise by Jerry Byrd ...\n--> [error] There's A Star Spangled Banner Waving #2 (The Ballad Of Francis Powers) by Red River Dave\nfetching id for Something Happened by Paul Anka ...\n--> [error] Theme From Adventures In Paradise by Jerry Byrd\nfetching id for Good Timin' by Jimmy Jones ...\nfetching id for Swingin' School by Bobby Rydell ...\n--> [error] Something Happened by Paul Anka\nfetching id for Johnny Freedom by Johnny Horton ...\nfetching id for Second Honeymoon by Johnny Cash ...\nfetching id for Train Of Love by Annette With The Afterbeats ...\nfetching id for Down Yonder by Johnny And The Hurricanes ...\nfetching id for Down The Street To 301 by Johnny Cash And The Tennessee Two ...\nfetching id for Theme From \"The Unforgiven\" (The Need For Love) by Don Costa And His Orchestra And Chorus ...\n--> [error] Down The Street To 301 by Johnny Cash And The Tennessee Two\nfetching id for Whip It On Me by Jessie Hill ...\nfetching id for Honky-Tonk Girl by Johnny Cash ...\nfetching id for She's Just A Whole Lot Like You by Hank Thompson ...\nfetching id for Young Emotions by Ricky Nelson ...\nfetching id for Ding-A-Ling by Bobby Rydell ...\nfetching id for Another Sleepless Night by Jimmy Clanton ...\nfetching id for All The Love I've Got by Marv Johnson ...\nfetching id for Banjo Boy by Jan And Kjeld ...\nfetching id for Doggin' Around by Jackie Wilson ...\nfetching id for I'll Be There by Bobby Darin ...\nfetching id for Stuck On You by Elvis Presley With The Jordanaires ...\n--> [error] Theme From \"The Unforgiven\" (The Need For Love) by Don Costa And His Orchestra And Chorus\nfetching id for Lonely Winds by The Drifters ...\nfetching id for Spring Rain by Pat Boone ...\nfetching id for He'll Have To Stay by Jeanne Black ...\nfetching id for Sixteen Reasons by Connie Stevens ...\nfetching id for Night by Jackie Wilson ...\nfetching id for Theme For Young Lovers by Percy Faith And His Orchestra ...\nfetching id for Ooh Poo Pah Doo - Part II by Jessie 
Hill ...\nfetching id for Walking The Floor Over You by Pat Boone ...\n--> [error] Spring Rain by Pat Boone\nfetching id for Jump Over by Freddy Cannon ...\nfetching id for Dutchman's Gold by Walter Brennan With Billy Vaughn and his Orchestra ...\n--> [error] Walking The Floor Over You by Pat Boone\nfetching id for Ain't Gonna Be That Way by Marv Johnson ...\nfetching id for Cherry Pie by Skip And Flip ...\nfetching id for Greenfields by The Brothers Four ...\nfetching id for Sink The Bismark by Johnny Horton ...\nfetching id for Cradle Of Love by Johnny Preston ...\nfetching id for National City by Joiner, Arkansas Junior High School Band ...\nfetching id for Exclusively Yours by Carl Dobkins, Jr. ...\nfetching id for Mountain Of Love by Harold Dorman ...\nfetching id for You Were Born To Be Loved by Billy Bland ...\n--> [error] Exclusively Yours by Carl Dobkins, Jr.\nfetching id for Stairway To Heaven by Neil Sedaka ...\nfetching id for Pink Chiffon by Mitchell Torok ...\nfetching id for Let The Little Girl Dance by Billy Bland ...\nfetching id for Comin' Down With Love by Mel Gadson ...\n--> [error] You Were Born To Be Loved by Billy Bland\nfetching id for Banjo Boy by Dorothy Collins ...\n--> [error] Comin' Down With Love by Mel Gadson\nfetching id for You've Got The Power by James Brown And The Famous Flames ...\n--> [error] Banjo Boy by Dorothy Collins\nfetching id for Found Love by Jimmy Reed ...\nfetching id for Mr. Lucky by Henry Mancini ...\nfetching id for Think by James Brown And The Famous Flames ...\nfetching id for Mister Lonely by The Videls ...\nfetching id for The Old Lamplighter by The Browns Featuring Jim Edward Brown ...\nfetching id for When You Wish Upon A Star by Dion & The Belmonts ...\nfetching id for Barbara by The Temptations ...\n--> [error] You've Got The Power by James Brown And The Famous Flames\nfetching id for Alley-Oop by The Dyna-Sores ...\nfetching id for Ebb Tide by The Platters Featuring Tony Williams ...\nfetching id for The Madison by Al Brown's Tunetoppers Featuring Cookie Brown ...\n--> [error] Barbara by The Temptations\nfetching id for La Montana (If She Should Come To You) by Frank DeVol And His Rainbow Strings ...\n--> [error] The Madison by Al Brown's Tunetoppers Featuring Cookie Brown\nfetching id for Nobody Loves Me Like You by The Flamingos ...\nfetching id for Always It's You by The Everly Brothers ...\nfetching id for Got A Girl by The Four Preps ...\nfetching id for Tuxedo Junction by Frankie Avalon ...\nfetching id for Banjo Boy by Art Mooney And His Orchestra ...\n--> [error] La Montana (If She Should Come To You) by Frank DeVol And His Rainbow Strings\nfetching id for Oh, Little One by Jack Scott ...\nfetching id for Fame And Fortune by Elvis Presley With The Jordanaires ...\nfetching id for The Way Of A Clown by Teddy Randazzo ...\nfetching id for A Cottage For Sale by Little Willie John ...\nfetching id for Shadows Of Love by LaVern Baker ...\nfetching id for Mojo Workout (Dance) by Larry Bright ...\n--> [error] Banjo Boy by Art Mooney And His Orchestra\nfetching id for Biology by Danny Valentino ...\n--> [error] Mojo Workout (Dance) by Larry Bright\nfetching id for I'll Be Seeing You by The Five Satins ...\n--> [error] Biology by Danny Valentino\nfetching id for Right By My Side by Ricky Nelson ...\nfetching id for Step By Step by The Crests ...\nfetching id for White Silver Sands by Bill Black's Combo ...\nfetching id for The Madison Time - Part I by Ray Bryant Combo ...\nfetching id for The Ties That Bind by Brook Benton 
...\nfetching id for Just A Closer Walk With Thee by Jimmie Rodgers ...\nfetching id for For Love by Lloyd Price and His Orchestra ...\nfetching id for Tell Me That You Love Me by Fats Domino ...\nfetching id for No If's - No And's by Lloyd Price and His Orchestra ...\n--> [error] The Madison Time - Part I by Ray Bryant Combo\nfetching id for What Am I Living For by Conway Twitty ...\nfetching id for River, Stay 'Way From My Door by Frank Sinatra ...\nfetching id for La Montana (If She Should Come To You) by Roger Williams ...\nfetching id for The Yen Yet Song by Gary Cane And His Friends ...\n--> [error] La Montana (If She Should Come To You) by Roger Williams\nfetching id for I Love The Way You Love by Marv Johnson ...\nfetching id for He'll Have To Go by Jim Reeves ...\nfetching id for Footsteps by Steve Lawrence ...\nfetching id for Apple Green by June Valli ...\n--> [error] The Yen Yet Song by Gary Cane And His Friends\nfetching id for Sweet Nothin's by Brenda Lee ...\nfetching id for City Lights by Debbie Reynolds ...\nfetching id for The Theme From \"A Summer Place\" by Percy Faith And His Orchestra ...\nfetching id for Angela Jones by Johnny Ferguson ...\n--> [error] Apple Green by June Valli\nfetching id for Down The Aisle by Ike Clanton ...\nfetching id for The Urge by Freddy Cannon ...\nfetching id for Puppy Love by Paul Anka ...\nfetching id for Earth Angel by Johnny Tillotson ...\nfetching id for Money (That's what I want) by Barrett Strong ...\nfetching id for Fannie Mae by Buster Brown ...\nfetching id for Hither And Thither And Yon by Brook Benton ...\nfetching id for You Don't Know Me by Lenny Welch ...\nfetching id for A Star Is Born (A Love Has Died) by Mark Dinning ...\n--> [error] Angela Jones by Johnny Ferguson\nfetching id for Last Chance by Collay & the Satellites ...\n--> [error] A Star Is Born (A Love Has Died) by Mark Dinning\nfetching id for Beautiful Obsession by Sir Chauncey and his exciting strings ...\nfetching id for Wheel Of Fortune by LaVern Baker ...\nfetching id for Mama by Connie Francis ...\nfetching id for Clementine by Bobby Darin ...\nfetching id for Big Iron by Marty Robbins ...\nfetching id for Don't Throw Away All Those Teardrops by Frankie Avalon ...\nfetching id for Pledging My Love by Johnny Tillotson ...\nfetching id for Wild One by Bobby Rydell ...\nfetching id for Just One Time by Don Gibson ...\nfetching id for Besame Mucho (Part I) by The Coasters ...\nfetching id for Is It Wrong (For Loving You) by Webb Pierce ...\nfetching id for Before I Grow Too Old by Fats Domino ...\nfetching id for Easy Lovin' by Wade Flemons ...\nfetching id for Jenny Lou by Sonny James ...\nfetching id for Put Your Arms Around Me Honey by Ray Smith ...\nfetching id for Baby What You Want Me To Do by Jimmy Reed ...\nfetching id for Someone Loves You, Joe by The Singing Belles ...\n--> [error] Beautiful Obsession by Sir Chauncey and his exciting strings\nfetching id for Summer Set by Monty Kelly And His Orchestra ...\n--> [error] Someone Loves You, Joe by The Singing Belles\nfetching id for (Welcome) New Lovers by Pat Boone ...\nfetching id for Teddy by Connie Francis ...\nfetching id for O Dio Mio by Annette ...\nfetching id for Teen-Ex by The Browns Featuring Jim Edward Brown ...\nfetching id for Am I That Easy To Forget by Debbie Reynolds ...\nfetching id for Beatnik Fly by Johnny And The Hurricanes ...\nfetching id for Little Bitty Girl by Bobby Rydell ...\nfetching id for (There Was A) Tall Oak Tree by Dorsey Burnette ...\nfetching id for Starbright by Johnny 
Mathis ...\nfetching id for Harbor Lights by The Platters ...\nfetching id for Wake Me When It's Over by Andy Williams ...\n--> [error] Summer Set by Monty Kelly And His Orchestra\nfetching id for Think Me A Kiss by Clyde McPhatter ...\nfetching id for Baby (You've Got What It Takes) by Dinah Washington & Brook Benton ...\nfetching id for Two Thousand, Two Hundred, Twenty-Three Miles by Patti Page ...\n--> [error] Wake Me When It's Over by Andy Williams\nfetching id for It Could Happen To You by Dinah Washington ...\nfetching id for El Matador by The Kingston Trio ...\nfetching id for Ruby by Adam Wade ...\nfetching id for Someday (You'll Want Me to Want You) by Della Reese ...\nfetching id for This Magic Moment by The Drifters ...\nfetching id for Caravan by Santo & Johnny ...\nfetching id for Don't Deceive Me by Ruth Brown ...\nfetching id for Rockin' Red Wing by Sammy Masters ...\nfetching id for (Doin' The) Lovers Leap by Webb Pierce ...\n--> [error] Two Thousand, Two Hundred, Twenty-Three Miles by Patti Page\nfetching id for Shazam! by Duane Eddy His Twangy Guitar And The Rebels ...\n--> [error] (Doin' The) Lovers Leap by Webb Pierce\nfetching id for Handy Man by Jimmy Jones ...\nfetching id for Down By The Riverside by Les Compagnons De La Chanson ...\n--> [error] Shazam! by Duane Eddy His Twangy Guitar And The Rebels\nfetching id for Forever by The Little Dippers ...\nfetching id for Lady Luck by Lloyd Price and His Orchestra ...\n--> [error] Down By The Riverside by Les Compagnons De La Chanson\nfetching id for China Doll by The Ames Brothers ...\nfetching id for What In The World's Come Over You by Jack Scott ...\nfetching id for Paradise by Sammy Turner ...\nfetching id for At My Front Door by Dee Clark ...\nfetching id for Teenage Sonata by Sam Cooke ...\nfetching id for My Empty Room by Little Anthony And The Imperials ...\nfetching id for Lawdy Miss Clawdy by Gary Stites ...\n--> [error] China Doll by The Ames Brothers\nfetching id for Midnite Special by Paul Evans ...\n--> [error] Lawdy Miss Clawdy by Gary Stites\nfetching id for Beyond The Sea by Bobby Darin ...\nfetching id for Chattanooga Choo Choo by Ernie Fields & Orch. ...\nfetching id for Let It Be Me by The Everly Brothers ...\nfetching id for Why Do I Love You So by Johnny Tillotson ...\nfetching id for Delaware by Perry Como ...\nfetching id for Teen Angel by Mark Dinning ...\nfetching id for Running Bear by Johnny Preston ...\nfetching id for How Deep Is The Ocean by Miss Toni Fisher ...\n--> [error] Midnite Special by Paul Evans\nfetching id for Rockin' Little Angel by Ray Smith ...\nfetching id for About This Thing Called Love by Fabian ...\n--> [error] How Deep Is The Ocean by Miss Toni Fisher\nfetching id for String Along by Fabian ...\n--> [error] About This Thing Called Love by Fabian\nfetching id for Don't Fence Me In by Tommy Edwards ...\nfetching id for Why I'm Walkin' by Stonewall Jackson ...\nfetching id for Outside My Window by The Fleetwoods ...\nfetching id for Adam And Eve by Paul Anka ...\nfetching id for Where Or When by Dion & The Belmonts ...\nfetching id for What Do You Want? 
by Bobby Vee ...\nfetching id for Never Let Me Go by Lloyd Price and His Orchestra ...\n--> [error] String Along by Fabian\nfetching id for House Of Bamboo by Earl Grant ...\nfetching id for Anyway The Wind Blows by Doris Day ...\nfetching id for The Same Old Me by Guy Mitchell ...\nfetching id for Chattanooga Shoe Shine Boy by Freddy Cannon ...\nfetching id for Tracy's Theme by Spencer Ross ...\n--> [error] Anyway The Wind Blows by Doris Day\nfetching id for Country Boy by Fats Domino ...\nfetching id for El Paso by Marty Robbins ...\nfetching id for Down By The Station by The Four Preps ...\nfetching id for Road Runner by Bo Diddley ...\nfetching id for Lonely Blue Boy by Conway Twitty ...\nfetching id for Bulldog by The Fireballs ...\nfetching id for Too Much Tequila by The Champs ...\nfetching id for Lucky Devil by Carl Dobkins, Jr. ...\n--> [error] Tracy's Theme by Spencer Ross\nfetching id for Crazy Arms by Bob Beckham ...\n--> [error] Lucky Devil by Carl Dobkins, Jr.\nfetching id for Just A Little Bit by Rosco Gordon ...\nfetching id for Jambalaya by Bobby Comstock And The Counts ...\n--> [error] Crazy Arms by Bob Beckham\nfetching id for (Baby) Hully Gully by The Olympics ...\nfetching id for You Got What It Takes by Marv Johnson ...\nfetching id for Bad Boy by Marty Wilde ...\nfetching id for I Need You Now by 100 Strings and Jono (Choir of 40 Voices) ...\n--> [error] Jambalaya by Bobby Comstock And The Counts\nfetching id for Eternally by Sarah Vaughan ...\nfetching id for Alvin's Orchestra by David Seville And The Chipmunks ...\n--> [error] I Need You Now by 100 Strings and Jono (Choir of 40 Voices)\nfetching id for Shimmy, Shimmy, Ko-Ko-Bop by Little Anthony And The Imperials ...\nfetching id for Pretty Blue Eyes by Steve Lawrence ...\nfetching id for Straight A's In Love by Johnny Cash And The Tennessee Two ...\n--> [error] Alvin's Orchestra by David Seville And The Chipmunks\nfetching id for Too Pooped To Pop (\"Casey\") by Chuck Berry ...\nfetching id for Words by Pat Boone ...\n--> [error] Straight A's In Love by Johnny Cash And The Tennessee Two\nfetching id for Let It Rock by Chuck Berry ...\nfetching id for Go, Jimmy, Go by Jimmy Clanton ...\nfetching id for Sleepy Lagoon by The Platters ...\nfetching id for Time And The River by Nat King Cole ...\nfetching id for On The Beach by Frank Chacksfield And His Orch. ...\nfetching id for Clementine by Jan & Dean ...\n--> [error] Words by Pat Boone\nfetching id for Suddenly by Nickey DeMatteo ...\nfetching id for T.L.C. Tender Love And Care by Jimmie Rodgers ...\nfetching id for Whatcha' Gonna Do by Nat King Cole ...\nfetching id for A Closer Walk by Pete Fountain ...\nfetching id for Waltzing Matilda by Jimmie Rodgers ...\nfetching id for Why by Frankie Avalon ...\nfetching id for The Big Hurt by Miss Toni Fisher ...\nfetching id for Darling Lorraine by The Knockouts ...\nfetching id for The Village Of St. 
Bernadette by Andy Williams ...\nfetching id for Werewolf by The Frantics ...\nfetching id for Just Give Me A Ring by Clyde McPhatter ...\n--> [error] Suddenly by Nickey DeMatteo\nfetching id for You're My Baby by Sarah Vaughan ...\nfetching id for Little Coco Palm by Jerry Wallace ...\nfetching id for Teensville by Chet Atkins ...\n--> [error] Just Give Me A Ring by Clyde McPhatter\nfetching id for I Was Such A Fool (To Fall In Love With You) by The Flamingos ...\nfetching id for Way Down Yonder In New Orleans by Freddie Cannon ...\n--> [error] Teensville by Chet Atkins\nfetching id for Sandy by Larry Hall ...\nfetching id for Time After Time by Frankie Ford ...\nfetching id for That Old Feeling by Kitty Kallen ...\nfetching id for It's Time To Cry by Paul Anka ...\nfetching id for Secret Of Love by Elton Anderson With Sid Lawrence Combo ...\nfetching id for (Do The) Mashed Potatoes (Part 1) by Nat Kendrick And The Swans ...\nfetching id for Up Town by Roy Orbison ...\n--> [error] Way Down Yonder In New Orleans by Freddie Cannon\nfetching id for Among My Souvenirs by Connie Francis ...\nfetching id for The Old Payola Roll Blues (Side I) by Stan Freberg ...\nfetching id for Peace Of Mind by Teresa Brewer ...\nfetching id for Bonnie Came Back by Duane Eddy His Twangy Guitar And The Rebels ...\n--> [error] Up Town by Roy Orbison\nfetching id for First Name Initial by Annette With The Afterbeats ...\n--> [error] Bonnie Came Back by Duane Eddy His Twangy Guitar And The Rebels\nfetching id for Amapola by Jacky Noguez And His Orchestra ...\n--> [error] First Name Initial by Annette With The Afterbeats\nfetching id for Tell Her For Me by Adam Wade ...\nfetching id for If I Had A Girl by Rod Lauren ...\nfetching id for Little Things Mean A Lot by Joni James ...\nfetching id for Mumblin' Mosie by The Johnny Otis Show ...\n--> [error] Amapola by Jacky Noguez And His Orchestra\nfetching id for The Happy Muleteer by Ivo Robic ...\nfetching id for Honey Hush by Joe Turner ...\nfetching id for Not One Minute More by Della Reese ...\nfetching id for Hound Dog Man by Fabian ...\nfetching id for No Love Have I by Webb Pierce ...\n--> [error] Mumblin' Mosie by The Johnny Otis Show\nfetching id for What's Happening by Wade Flemons ...\n--> [error] No Love Have I by Webb Pierce\nfetching id for Let Them Talk by Little Willie John ...\nfetching id for Baciare Baciare (Kissing Kissing) by Dorothy Collins ...\nfetching id for Heartaches By The Number by Guy Mitchell ...\nfetching id for Smokie - Part 2 by Bill Black's Combo ...\nfetching id for How About That by Dee Clark ...\nfetching id for Mack The Knife by Bobby Darin ...\nfetching id for I Know What God Is by Perry Como ...\nfetching id for Forever by Billy Walker ...\nfetching id for Skokiaan (South African Song) by Bill Haley And His Comets ...\nfetching id for Honey Love by Narvel Felts ...\nfetching id for I Can't Say Goodbye by The Fireflies Featuring Ritchie Adams ...\nfetching id for Cry Me A River by Janice Harper ...\n--> [error] Smokie - Part 2 by Bill Black's Combo\nfetching id for This Friendly World by Fabian ...\nfetching id for Don't Let The Sun Catch You Cryin' by Ray Charles ...\nfetching id for If You Need Me by Fats Domino ...\nfetching id for Just Come Home by Hugo & Luigi ...\n--> [error] Cry Me A River by Janice Harper\nfetching id for Run Red Run by The Coasters ...\nfetching id for Mary Don't You Weep by Stonewall Jackson ...\nfetching id for A Year Ago Tonight by The Crests ...\nfetching id for One Mint Julep by Chet Atkins ...\n--> 
[error] Just Come Home by Hugo & Luigi\nfetching id for I Don't Know What It Is by The Bluenotes ...\n--> [error] One Mint Julep by Chet Atkins\nfetching id for Tear Drop by Santo & Johnny ...\nfetching id for I Wanna Be Loved by Ricky Nelson ...\nfetching id for Let's Try Again by Clyde McPhatter ...\nfetching id for Oh! Carol by Neil Sedaka ...\nfetching id for I'll Take Care Of You by Bobby Bland ...\nfetching id for Talk That Talk by Jackie Wilson ...\nfetching id for What About Us by The Coasters ...\nfetching id for The Whiffenpoof Song by Bob Crewe ...\n--> [error] I Don't Know What It Is by The Bluenotes\nfetching id for My Little Marine by Jamie Horton ...\n--> [error] The Whiffenpoof Song by Bob Crewe\nfetching id for We Got Love by Bobby Rydell ...\nfetching id for Scarlet Ribbons (For Her Hair) by The Browns ...\nfetching id for So Many Ways by Brook Benton ...\nfetching id for I Forgot More Than You'll Ever Know by Sonny James ...\nfetching id for Believe Me by Royal Teens ...\n--> [error] My Little Marine by Jamie Horton\nfetching id for Since I Made You Cry by The Rivieras ...\nfetching id for Mediterranean Moon by The Rays ...\n--> [error] Believe Me by Royal Teens\nfetching id for Mighty Good by Ricky Nelson ...\nfetching id for In The Mood by Ernie Fields & Orch. ...\nfetching id for Come Into My Heart by Lloyd Price and His Orchestra ...\n--> [error] Mediterranean Moon by The Rays\nfetching id for God Bless America by Connie Francis ...\nfetching id for Uh! Oh! Part 2 by The Nutty Squirrels ...\nfetching id for Danny Boy by Conway Twitty ...\nfetching id for Be My Guest by Fats Domino ...\nfetching id for Let The Good Times Roll by Ray Charles ...\nfetching id for Misty by Johnny Mathis ...\nfetching id for Livin' Dangerously by The McGuire Sisters ...\n--> [error] Uh! Oh! Part 2 by The Nutty Squirrels\nfetching id for Reveille Rock by Johnny And The Hurricanes ...\nfetching id for Swingin' On A Rainbow by Frankie Avalon ...\nfetching id for This Time Of The Year by Brook Benton ...\nfetching id for Marina by Rocco Granata and the International Quintet ...\n--> [error] Livin' Dangerously by The McGuire Sisters\nfetching id for Dance With Me by The Drifters ...\nfetching id for Climb Ev'ry Mountain by Tony Bennett ...\nfetching id for Always by Sammy Turner ...\nfetching id for Marina by Willy Alberti ...\nfetching id for Riverboat by Faron Young ...\nfetching id for Don't You Know by Della Reese ...\nfetching id for Teenage Hayride by Tender Slim ...\nfetching id for Mr. Blue by The Fleetwoods ...\nfetching id for The Clouds by The Spacemen ...\nfetching id for (New In) The Ways Of Love by Tommy Edwards ...\nfetching id for (If You Cry) True Love, True Love by The Drifters ...\nfetching id for Wont'cha Come Home by Lloyd Price and His Orchestra ...\n--> [error] Teenage Hayride by Tender Slim\nfetching id for Promise Me A Rose (A Slight Detail) by Anita Bryant ...\nfetching id for (Seven Little Girls) Sitting In The Back Seat by Paul Evans and the Curls ...\n--> [error] Promise Me A Rose (A Slight Detail) by Anita Bryant\nfetching id for Uh! Oh! Part 1 by The Nutty Squirrels ...\nfetching id for I'm Movin' On by Ray Charles and his Orchestra ...\n--> [error] Uh! Oh! Part 1 by The Nutty Squirrels\nfetching id for The Sound Of Music by Patti Page ...\nfetching id for High School U.S.A. by Tommy Facenda ...\nfetching id for The Happy Reindeer by Dancer, Prancer And Nervous ...\n--> [error] High School U.S.A. 
by Tommy Facenda\nfetching id for Smokie-Part 2 by Bill Doggett ...\nfetching id for Just As Much As Ever by Bob Beckham ...\n--> [error] The Happy Reindeer by Dancer, Prancer And Nervous\nfetching id for Primrose Lane by Jerry Wallace With The Jewels ...\n--> [error] Just As Much As Ever by Bob Beckham\nfetching id for Love Potion No. 9 by The Clovers ...\nfetching id for The Little Drummer Boy by Johnny Cash ...\nfetching id for Do-Re-Mi by Mitch Miller ...\nfetching id for Deck Of Cards by Wink Martindale ...\nfetching id for Happy Anniversary by Jane Morgan ...\nfetching id for One More Chance by Rod Bernard ...\nEsecuzione completata in 15323.7393 secondi\n"
],
[
"# creo backup del billboard dataset\ndf_billboard_bak = df_billboard.copy()",
"_____no_output_____"
],
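[
"# Hedged sanity check (editor's addition, not part of the recorded run): before the id\n# column is inserted below, confirm that `output` holds one entry per billboard track\n# with the Spotify id (or None) in position 0; this is an assumption based on how the\n# next cell uses it, not something stated explicitly above.\nassert len(output) == len(df_billboard), 'output is not aligned with df_billboard'\nprint('lookup rows: %d, billboard rows: %d' % (len(output), len(df_billboard)))",
"_____no_output_____"
],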
[
"# inserisco gli id ottenuti in una nuova colonna nel df_billboard\nids = np.array(output)[:,0]\ndf_billboard.insert(0, 'id', ids)",
"_____no_output_____"
],
[
"# calcolo percentuale di canzoni trovate\nfound_id = df_billboard.id.count()\nx = (found_id / df_billboard.title.count()) * 100\nprint(\"Found ids = %d%%\" % x)",
"Found ids = 82%\n"
],
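[
"# Hedged alternative (editor's note): the same percentage in a single pandas expression;\n# notna() marks the rows where an id was found. This assumes every row has a title, so\n# that title.count() above equals the total number of rows.\nprint('Found ids = %d%%' % (df_billboard.id.notna().mean() * 100))",
"_____no_output_____"
],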
[
"# esporto su google drive\nfrom google.colab import drive\n\n# mounts the google drive to Colab Notebook\ndrive.mount('/content/drive',force_remount=True)\n\ndf_billboard.to_csv('/content/drive/My Drive/Colab Notebooks/datasets/billboard+ids_3.csv')",
"Mounted at /content/drive\n"
]
],
[
[
"##Recupero audio features del Billboard dataset",
"_____no_output_____"
]
],
[
[
"# reimporto dataset billboard (con ids) + dataset principale\n\"\"\"\ndrive.CreateFile({'id':'1fZzuYu-HXKP9HUeio-FL9P4eNygOQ0qq'}).GetContentFile('billboard+ids_0.csv') \ndf_billboard = pd.read_csv(\"billboard+ids_0.csv\").drop('Unnamed: 0',axis=1)\n\ndrive.CreateFile({'id':'1eOqgPk_izGXKIT5y6KfqPkmKWqBonVc0'}).GetContentFile('dataset2_X_billboard.csv')\ndf_songs = pd.read_csv(\"dataset2_X_billboard.csv\").drop('Unnamed: 0',axis=1)\n\"\"\"",
"_____no_output_____"
],
[
"# elimino valori nulli (= id non trovati)\ndf_billboard = df_billboard.dropna()\n\n# creo lista con gli id del dataset billboard\nids = list(df_billboard.id.array)",
"_____no_output_____"
],
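[
"# Hedged check (editor's addition): report how many rows the dropna() call removed,\n# assuming the backup taken earlier (df_billboard_bak) is still in memory and the\n# commented-out re-import above was not used.\ndropped = len(df_billboard_bak) - len(df_billboard)\nprint('%d rows without a Spotify id were dropped' % dropped)",
"_____no_output_____"
],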
[
"# creo lista degli id che non sono presenti nel dataset principale\ntime_0 = time.perf_counter()\nids_new = [id for id in ids if id not in list(df_songs.id.array)]\nprint_exec_time(time_0)",
"Esecuzione completata in 1374.0148 secondi\n"
],
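[
"# Hedged speed-up sketch (editor's note): the comprehension above re-evaluates\n# list(df_songs.id.array) for every id, which is why it takes roughly 23 minutes;\n# a precomputed set gives O(1) membership tests. The result should be identical,\n# but this is left as a suggestion rather than a change to the recorded run.\nknown_ids = set(df_songs.id.array)\nids_new = [track_id for track_id in ids if track_id not in known_ids]",
"_____no_output_____"
],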
[
"time_0 = time.perf_counter()\n\nwith concurrent.futures.ProcessPoolExecutor() as executor:\n results = executor.map(get_features, ids_new)\n\n output = []\n for result in results:\n output.append(result)\n\nprint_exec_time(time_0)",
"fetching features for id: 3PyQV3cDjV5tEJGpYYH2K1\nfetching features for id: 7qHGRefOGiaPqrG4IEckcv\nfetching features for id: 1PSfY3lSwBD888PZW0s5JH\nfetching features for id: 1er2tyXb4iw23SfZxb1FW1\nfetching features for id: 6C9MYPkHrESo8n3KnapaR9\nfetching features for id: 2kLrFBgVs9BsyNQ6EDc5LH\nfetching features for id: 4Ht8wlFBxdNiQQSdWTBOik\nfetching features for id: 4Dxm37KSWN6xXTn98ddrGp\nfetching features for id: 4DvqAY9mzeTjQUNvsROwji\nfetching features for id: 6nd9sBgQj3ABs1ZmNrStoR\nfetching features for id: 26R4HdOOnj4zYGl8lbYW9j\nfetching features for id: 6IjvCFBNprUHhlamLey2Tb\nfetching features for id: 7K40sZ0ZV6vGjzSso2nFyr\nfetching features for id: 3NcbI0iX8VQyR9D3qsmNjl\nfetching features for id: 3HZeaD9anHDrTM1AW5PY7y\nfetching features for id: 4K0PEpyhPdemgNj1CRPGOj\nfetching features for id: 5raayWOkxcm1pt5p59xN8k\nfetching features for id: 723SZaMK90mxb59BZj3Oig\nfetching features for id: 3YBzEkdpzpp0NomRIl56da\nfetching features for id: 1sEImexIhca9b7pUcpObsC\nfetching features for id: 2F4g7wn3slLJrDg9SY79gM\nfetching features for id: 1ZtPpbHgnj6JLIv1Rpuc3x\nfetching features for id: 2nRk3PqhhekI963sCvoRrm\nfetching features for id: 7agk25vHxeKe626vePNxqz\nfetching features for id: 4XHuEHpj3YJIA5NGt7iyCh\nfetching features for id: 6jxi8P16mztEUWNKTWp9k6\nfetching features for id: 6a0k23wrj492S3BQm3iHMK\nfetching features for id: 3FUGhktHS6iEAReiDwXuqm\nfetching features for id: 7tu5axZXIdRYPdzmRFcogZ\nfetching features for id: 0FBxYLStk8H7oTcyx8iW0P\nfetching features for id: 5TyyswOrL22rJAJvvuCLVA\nfetching features for id: 5SX9wQDM4Zbf87SDQoldLH\nfetching features for id: 0a32BQVIZxqDYiSHtvpAkb\nfetching features for id: 1NyYgY2eqKQWJJrnp5kgPW\nfetching features for id: 1XM8rG5yEi8wTeTSHJUoom\nfetching features for id: 4QZONvX5YwXrvpZQxgysBh\nfetching features for id: 0bnCQbMzi1aPhG2i1XnVO4\nfetching features for id: 18rfUVDkibMedZIFNjE8IE\nfetching features for id: 49TAefltwdGl085fK0EcUV\nfetching features for id: 6sJCTxB1HY8W3GvCvhu5GP\nfetching features for id: 0bPO6CAxcjhiSwTlwrjLtQ\nfetching features for id: 276DTCILFYlkQlafEXvnV7\nfetching features for id: 0qxxN0dtn5GZ0wNEboFekI\nfetching features for id: 2m8GdIN3YhG5JNxHFHAXOQ\nfetching features for id: 7lWrbXLcsWuRYtS8dqQgNX\nfetching features for id: 6Nkt6UsJbLQpNeyW0pcoSr\nfetching features for id: 4Nx3qqb1GPGkbSljU6K13g\nfetching features for id: 0ds9FJOCQjZa4sgTRgXujQ\nfetching features for id: 6Udetniaf2njLUgs7Y03Iv\nfetching features for id: 7jpdW7yH8Q1zdWL4W96jxf\nfetching features for id: 6fsAxEHvbVPU8pVPnBXaX2\nfetching features for id: 4d8MN3egxWp2lqC7J7VKlj\nfetching features for id: 72JpCXt7QiZxC9sW7XNXmP\nfetching features for id: 6tvGUktKAUgCMPl4hyY4un\nfetching features for id: 0e7Yx4UFoJ7jzx8l7XotMK\nfetching features for id: 4WGm5D8wrFPRwaRnyWdaf8\nfetching features for id: 4q7sSG5Ki7S9DqrjPztywV\nfetching features for id: 40w47r4QO4Gwfcg5kdqb7h\nfetching features for id: 1H28BobfKOhI3odo2ZIWY0\nfetching features for id: 4tElChKuTE2XTiyKwLWwxq\nfetching features for id: 26qDFDKrveLljfIX5ft061\nfetching features for id: 49dCQGqLHLqlgTcnxqN5h6\nfetching features for id: 1WFZTMBQabuD1Thl7OPBlk\nfetching features for id: 5gzvMkVATJt5XmrVq3HzUT\nfetching features for id: 6IbJJnUdfLx8tIihXgnVUE\nfetching features for id: 63DZKZIbIxSf8XUfqIB6wC\nfetching features for id: 4wwd1GByeADC5G0awAXPsz\nfetching features for id: 09jpVE2px743apfFt7vJif\nfetching features for id: 5K1xCoqZoqNkQQndzjMKJX\nfetching features for id: 3Ndvx2mxs9Z8QiMjP7NuYX\nfetching features for id: 
26KwdOu3ALLhOaIRFlG48G\n[... output truncated: the same "fetching features for id: <track id>" log line repeats once for each remaining track id ...]
1cHIHTExDFGCg66uRwPeF4\nfetching features for id: 3x7suPNmMkXGLBLMi0cFEg\nfetching features for id: 7KYm266nz6EmqxfQpLUYdE\nfetching features for id: 3qVrXbnbOeRqWOATjFb6Oa\nfetching features for id: 0liRKxQQmW2b9wDVwtBFAR\nfetching features for id: 6MWZvL0YOKvr3mB2iR4eT9\nfetching features for id: 7crgweiPg38lEwViU8EPPI\nfetching features for id: 4YehhvNVDkI2DNwrsR5JJc\nfetching features for id: 4oeOG0zp2WYyL2ZQKD1gbR\nfetching features for id: 42khap6heGqCfJCJje287E\nfetching features for id: 5Uci1t39DsLmb8TjzdPdHI\nfetching features for id: 7CnIx5LpNWIUuz4H4u6xgS\nfetching features for id: 4AODrv2ts40UxwukxEPY71\nfetching features for id: 4Tt2lvnrYwql1HvXYgXZmj\nfetching features for id: 6bX5K0pwOdQAGUA8YT9TLq\nfetching features for id: 11MPqDv9WZhq4Ma6cT2SJV\nfetching features for id: 2OQhFe5jjTnMWaaDRZxz31\nfetching features for id: 6dNVdzsCLEcKhqLxoB3Qc3\nfetching features for id: 6o8FFvTMZFCCf9fB4bhXWn\nfetching features for id: 4iK3b3u7XdnC677gp9AaG9\nfetching features for id: 1ETpePXk6Epk0vT0sx51cs\nfetching features for id: 2d1Qvf7lYynPE0UpZhz2ke\nfetching features for id: 3QlrXlXuaBLMYGmlroFRTV\nfetching features for id: 4KQL1NUeDs5P2TrlyPDmqh\nfetching features for id: 0YpzbiqE1OG5kzVgP3Nkln\nfetching features for id: 34l5f31YtT0WFj1mY4QswG\nfetching features for id: 0FESeUvkf85Xj65Yv08Ovf\nfetching features for id: 4CglcaCdtisnk0sq4LBFDD\nfetching features for id: 19Zq3eSNhf951AwGn6XnNQ\nfetching features for id: 5pzoNtKCAXQQnrdupb5Tt3\nfetching features for id: 2j1GzI1DwpEado5Rjw0wxk\nfetching features for id: 0o4pKAWsrM8YKfkCC5Xjxy\nfetching features for id: 0oJo99vN0V5kFqxpLhiPXa\nfetching features for id: 4bkVDSiPvKuUflJTRKkOoO\nfetching features for id: 4qxmUFf9zE803gfmaHBmuQ\nfetching features for id: 4zgVGCEzik0fdeYK5cQXUY\nfetching features for id: 7FaYqYa8EBuDx84rFDO9vW\nfetching features for id: 2XdDxK8mLhaOdXPBpMxOkV\nfetching features for id: 7pBQDWVN58i618PhJpyhb0\nfetching features for id: 4E4bW7wqx3nADox2kEVlvq\nfetching features for id: 5MUM1P0T47W393eK02eCI5\nfetching features for id: 1ibxkpxrMnq9CgBLSvFs0q\nfetching features for id: 4ISir5Vt1QWXPU5Yy6an2W\nfetching features for id: 2PVIDKZOTFRi8KOkuXB0Br\nfetching features for id: 5A8dh7K5qdUKd0Tm2EIVsI\nfetching features for id: 42wIlJxGshiaqqJlWtYBnn\nfetching features for id: 0Naqxxo2gB6JniDzZGbOHv\nfetching features for id: 3BPN4dpI5tyFYfRKNXL8Df\nfetching features for id: 1V8AAdohEteFlvjsNwHsP9\nfetching features for id: 5DkgAX62bM7gwYpOGtOGfK\nfetching features for id: 32DkyRGUJoW6tlpUrRsOcr\nfetching features for id: 3JAJE71YkaRAiTDVnOW2AA\nfetching features for id: 4B3ULO9EbS1epBNIF0gwNy\nfetching features for id: 3unSsfih6sEp103NgfYPke\nfetching features for id: 7L7DgjhqnYYuIfgyhKTcxs\nfetching features for id: 5rdLwV3jCeV8fTQRCFJeip\nfetching features for id: 087wJuMxOm1YM7hhT2h8cy\nfetching features for id: 6LVQN95MVvXSvDL5cgcHsJ\nfetching features for id: 2QG1xxuz6QDTiC7QmQgWMt\nfetching features for id: 6gyszE5aRg1bBcbh5Dq9gP\nfetching features for id: 06DIV8OvK1jeGmV6NhyY0f\nfetching features for id: 6g5NZg2VXJobSX3TmXq9nQ\nfetching features for id: 5HtHlrt5pSGt3diFKBaJkg\nfetching features for id: 7khsvqWYjms5UASgLb8bft\nfetching features for id: 4yrTrpb8liAIFuA6Qlvqum\nfetching features for id: 2JVPFUQ17Hap4JxMbUGfb1\nfetching features for id: 0J2BwhULmcrOkRmQxcZQWN\nfetching features for id: 2r7yNV824MtgwOlMAiBHiG\nfetching features for id: 4snR6myh4KgWiZVdK5925N\nfetching features for id: 2La69C5wSuMVXUG5k1YurV\nfetching features for id: 4WohEROr1v5VlrkgmnCHFS\nfetching features for id: 
0bJrWYVfCpObiOpeN0TUOW\nfetching features for id: 1z1WImpyDPcQIKdwseBjN0\nfetching features for id: 5HveO08yOVzUGTs48bYzU9\nfetching features for id: 5eV8oYrOrgh4ZSs2UXFqqs\nfetching features for id: 6cyuQP6SxQkYiF3zLKQL8Y\nfetching features for id: 3PWGOe61M4iHWPkhOo0yoT\nfetching features for id: 56tIxOOatOzEc5TGXaKc8V\nfetching features for id: 55h9R6rKBtBH4XY6ozo7ZM\nfetching features for id: 45F49WgoqhwZCq1qtXi4KF\nfetching features for id: 2k2JskUXczr5haGRV03CZR\nfetching features for id: 3FcLmMZL5VGmAffR0is85y\nfetching features for id: 068Hf6m0UfrAuTHHlCpW2B\nfetching features for id: 5pZmKhBISjmTWzjsp1uMR0\nfetching features for id: 2ZAziniVKKou2xrgbJOMfp\nfetching features for id: 2GKZrOJdhUM0qbYdNCeO65\nfetching features for id: 3jrTfnVy0xQ6mPJVxnuUUL\nfetching features for id: 0Ah3HuQ8uhBjJwshu7tZiQ\nfetching features for id: 73ugGXyBy400PGhs77uelv\nfetching features for id: 45ywLbW1vB67t4koU30J1b\nfetching features for id: 6JqUPWwbkWiQX1ItuOJYqG\nfetching features for id: 7CHZdfFTezbYR1MOlCbWmV\nfetching features for id: 4QRG7p25g5m4vRK8ec4yGe\nfetching features for id: 7yGR8R1HgQJeW6s3KRuyGS\nfetching features for id: 78eqrJFbfW5WRWVeM3KPPg\nfetching features for id: 5qBqBdfTEIWJwAS0Jm2F5R\nfetching features for id: 6u0vJG9SG59dFRYSwuhiKm\nfetching features for id: 7cdRfsOoTmHBOBuhOC4Ezv\nfetching features for id: 3a9JYMtoe0NeZwRXqLFQSr\nfetching features for id: 5AxThlogq1ddfJdhMetYpq\nfetching features for id: 6fZUCxyaLxjO6B2QJyTHQA\nfetching features for id: 2VYvVNJjbGuyiRX2tBPVkm\nfetching features for id: 0nGFjSLODdvEj7npKMU8GG\nfetching features for id: 0NHtbgj4l2RDAYIFLJyVnc\nfetching features for id: 5jlpKLVutiVG9f9CMYkJRF\nfetching features for id: 3Vd7lnnD4xihfGIgqNDo9B\nfetching features for id: 4mkkncTtp9mVQneL3Q3x1W\nfetching features for id: 4ioBCs2b4q0p1zuzumbdg9\nfetching features for id: 4rTszyh3bsd6xy6UpXsFjl\nfetching features for id: 5TWgV7rwXT9xUEne2Q3g7T\nfetching features for id: 6ddXJQ5oKBSVzp2VY90ptO\nfetching features for id: 62k7oaajknbA5be6frZMZf\nfetching features for id: 3y86HtbwgvflMylhzBcF9m\nfetching features for id: 0c9ckWyi7iwVCieZy7jixX\nfetching features for id: 1DqIFLNEGRCGfJpkJzFX0G\nfetching features for id: 45hZUh5U0OluXkyj9omlXI\nfetching features for id: 46VXY6ICU226D25CODh6zr\nfetching features for id: 3LPxVIQ5WXVZxBGhrA7opC\nfetching features for id: 0wX6mflFA4a72SOcAkDYsY\nfetching features for id: 2Bi0JWdxDMaKPanNofJSdL\nfetching features for id: 2nqCncFzp4UKMLMxWCI1JL\nfetching features for id: 5t6q0AC5pTGxCnLSg3qu4i\nfetching features for id: 2VreadsZpksgUYCgEvW046\nfetching features for id: 0puKKRrt0j7O8wDroYOedE\nfetching features for id: 4gjUJOXFg70Uuq12EAC53R\nfetching features for id: 4KYnkrjlSAWwhUzPV9ciT2\nfetching features for id: 5SaZMyjsjVR1YH225iVFyy\nfetching features for id: 1UGRl3kBWlc0aEe2jafsyZ\nfetching features for id: 2zyR4szo2EukeZkCRw54ET\nfetching features for id: 2eOrKHcAxMssTrxVjT8MF6\nfetching features for id: 5HSr7hDQL4B2UpzcoNaABc\nfetching features for id: 6YmBllj632vtXTj2kQGI7N\nfetching features for id: 36kH5d8egSQ8Se62A63xA2\nfetching features for id: 47OvlSd10q62DwR8QawqOq\nfetching features for id: 1gcY3t8AODYm42HsxVPo8R\nfetching features for id: 6rDzwydTebaXai342YicSg\nfetching features for id: 1NXfLEz3TRpf9ctJrOEEjP\nfetching features for id: 1iYrvHaZXNrLolzjR6ZCe5\nfetching features for id: 6YNRrZKHLr1cqpo3cOy8CS\nfetching features for id: 2lZTbu5C5I5P1C7jAXO48P\nfetching features for id: 3RkJsA0JxopALjE3Nkfrm7\nfetching features for id: 4BZo9zSqVl55H3zEuQKbkc\nfetching features for id: 
2RPCzaGjubftL5XB9nmAJ4\nfetching features for id: 03wrQEmLVQsiDemqaZm11Q\nfetching features for id: 6x5BiQwNlbtisITsEHa8Eu\nfetching features for id: 1k6Gsz8nnZThZ59ZsEjw0r\nfetching features for id: 7q0CKuWtZgTwOaNWDGtGVf\nfetching features for id: 6j0OVI9OAeu2HJdkSNVxs7\nfetching features for id: 5hv401ZISKkQ7tOIagk2L0\nfetching features for id: 29Xdknl9fhRsV0oOYyQOKy\nfetching features for id: 3tuup6TfHtbNdqlbF01HrX\nfetching features for id: 188nnxepHwA3d1TeWixUOz\nfetching features for id: 41nHzM661MpSZ2pVIXdZhB\nfetching features for id: 5WQZJeEGbFddlcrnPRcjmh\nfetching features for id: 0br9zl6t3H0BlQM5TzvOQ4\nfetching features for id: 7utH4pVmFAmMpOmhq5YUYL\nfetching features for id: 7u6YcwfKv88jRrhDOVhBa1\nfetching features for id: 0LuxhJENJYd7yc315s1zms\nfetching features for id: 75f3C2aMNaEis1uFS0oZZF\nfetching features for id: 0aZONv71HjoKut86G2ghVd\nfetching features for id: 5uPKoULQx82sU3NZOO9mDa\nfetching features for id: 4heMx0OAwfILu13Lf0VbBM\nfetching features for id: 2sWdpFc82A7NP4DPazBQ6M\nfetching features for id: 5nS5kcvyJG7sqx6hVRtHWv\nfetching features for id: 2OzNYmuerhhV0FVX97UJrb\nfetching features for id: 5mz9pQZZXNpAw9CdQ7Bk8q\nfetching features for id: 6VOcKiMjjto3H6kwxG90lm\nfetching features for id: 4vqZep7dEFi5MOByjUV6sX\nfetching features for id: 4Tk6pf48rjQGOaCKe5LKy4\nfetching features for id: 10yauUCK4imhAhCCzlickn\nfetching features for id: 2Qy2yfjZa4FDZdforumynV\nfetching features for id: 4vXM6nJT5IgfQa9xKI9Da7\nfetching features for id: 6Q5UkaoE4QpsSm4kykFhKc\nfetching features for id: 0D2HP0imuFLuROJEMp7M0P\nfetching features for id: 6tluFvNsaxJ0ExyAiNOvi2\nfetching features for id: 1xnYucUwKO612Ht2r7JqBF\nfetching features for id: 3ByqU21P7nf0vxW5s7fOMr\nfetching features for id: 4JLnUYJHDNLYFRqzHzrawV\nfetching features for id: 6yvH9SyHQnI5cMVj0cavDt\nfetching features for id: 1udKn1oNKYQSQ9OmiIWCMu\nfetching features for id: 1IVWnhqf8tcGu4EKGkp3AP\nfetching features for id: 4on16ARfYTBVeNBk8qQA43\nfetching features for id: 5DoMxGnY3yAOGaf0xreMlE\nfetching features for id: 5h4WnIAIiyhiGqZ4wASNAU\nfetching features for id: 5xiWenMGQxYG9m8qHoVZ37\nfetching features for id: 1DM97JJjBJShdMuWI0B5L9\nfetching features for id: 4XhZ3Bi2fuIRrRGQQ4PcCx\nfetching features for id: 3Opx1MTq68q1qsidkMNGMT\nfetching features for id: 2jRAvJqykySWxGmdD5BCWS\nfetching features for id: 6Jx6jy5B6ATD4wkWAYGR0f\nfetching features for id: 3ztH1smFnnMohQUocY9Jzt\nfetching features for id: 5agp4ORTcHoEqTWNKgElTJ\nfetching features for id: 36ckFm0oicmvX8bWEErIHd\nfetching features for id: 4g2B1iGsws8hpI5c2Rx7Ko\nfetching features for id: 1dC1vc2C0lPBiL7oCAAr2G\nfetching features for id: 5TzrUONJL0qkk8WDJqEGfj\nfetching features for id: 5fw2E2H6yZJWN5H11dDznl\nfetching features for id: 7JCd3x25rM1cc7dxfYoKe5\nfetching features for id: 1UJRy5cOZtsuTPKq5pr7XU\nfetching features for id: 17odnqwMkLKh7ayb8ZEmBD\nfetching features for id: 5cQJMxRQuYI9tVMKzmoBhu\nfetching features for id: 7qY7QChY4O2D3QhwgvNx8R\nfetching features for id: 3IeUG8Sgyjb5ujKYk7lW1g\nfetching features for id: 6ijOZdht2wqGT34yuj8uH1\nfetching features for id: 4pxUZiQifmJAA0l0KSUf1d\nfetching features for id: 0MU9aeHB4uKciIIBCVhIoU\nfetching features for id: 5Vy8PKv7gS5WDR1EUuk0Bf\nfetching features for id: 23CMTOcuR6nPHxG5ol0mB5\nfetching features for id: 2A557Qk0ftZqDSuAvmAxMP\nfetching features for id: 5q8U2NsIc57QWjY7MZi371\nfetching features for id: 1YyoprYT9Oz3QiOqhYHO8i\nfetching features for id: 3wn2aA8h5yl39BxhynNP70\nfetching features for id: 4g6BbL6wLsTD1iQZXGc5fX\nfetching features for id: 
7wuO3DvGbHPGYbcaY0qCqB\nfetching features for id: 1Ex4c1VTqxomrggLvR5Y31\nfetching features for id: 4CKqqKUKsjNTTNF3veSiWi\nfetching features for id: 0820J4H0wwqH506WtDuhKO\nfetching features for id: 3Xe5EdeHYuL8FGiiw1Kwku\nfetching features for id: 2Zmi8n6vDjO4T0miMFDwI1\nfetching features for id: 4gOI04TYKDBc2LDJ0u9K9h\nfetching features for id: 4tTUOHoNICuo2JZOj2hQOj\nfetching features for id: 3sBwq1Bhsd2thzSxC2PQwh\nfetching features for id: 5PaWiOpagb2p6X8PbFBK1G\nfetching features for id: 7rOAjhjlNx0LYsYJthOP89\nfetching features for id: 5GInJChhrHyXmwb4tQFJJG\nfetching features for id: 473hgaSdgEKvL20YdhqVaK\nfetching features for id: 3u650FGOhGmw3EjkLCzSJd\nfetching features for id: 40N0Wizb2BANJZTRZ4oivr\nfetching features for id: 5DDmne2Ia9bfDdXklCOVjl\nfetching features for id: 0QCQBP11w2QNel5LhkYskU\nfetching features for id: 475yfAikvSAt5sQrDFkNGH\nfetching features for id: 1HU2MNYU5CNIsjQgkRXz1k\nfetching features for id: 5FGJkdpDfTsNsEYcbPnPtB\nfetching features for id: 2RARojKOA0SR8VxWuFyQr6\nfetching features for id: 1TeO5FsAnM9F6XOzJyImBG\nfetching features for id: 66sX1HdE1EM1kBYgKU3kbD\nfetching features for id: 4d8fqtVtYYE2wVkrU3mTMO\nfetching features for id: 54eajp10qPgjUCiD2Ds9jk\nfetching features for id: 2lp6dr6gTMQJgI2Ny0kwYw\nfetching features for id: 2ucHU3u0UMQYyo40B9zaIW\nfetching features for id: 5LQwvnlNn84XLcVxWdtgD5\nfetching features for id: 5AIi7YlHwURZe2BNcyU9nh\nfetching features for id: 114mpBe63wliCxik7TKcw6\nfetching features for id: 6h2YEmoJIEuXzfc8b0wwOx\nfetching features for id: 1pL1H11bERuV5QIoFd8Cp4\nfetching features for id: 4HbzefakOqCIkwActU5b3N\nfetching features for id: 3YhHxoJHKmKebNZUzCOOWj\nfetching features for id: 2zCruKarW8FMGbK77z2yh6\nfetching features for id: 2hOUhzloY09AsjJ61IyWbV\nfetching features for id: 54onqcRS41n2Sp6ZwxNTQG\nfetching features for id: 7sXEILFDaRhFK7UQZ9gEfx\nfetching features for id: 6kOGvRl0ie97TShrk210ON\nfetching features for id: 5xsXzVOEA83GlDZ14Q74yh\nfetching features for id: 7hPIEh41LPasckJFXOznmW\nfetching features for id: 71XEFgsJJFGn1MUyG4l39N\nfetching features for id: 7tElOwaw2ONWiPCW8bDErC\nfetching features for id: 0s7JK7h4rlpAYG0pEJC9Ow\nfetching features for id: 3D6iiz9rsoy2PvTJ0Q8OBH\nfetching features for id: 6qJSIEaUw2VDYRzzHcKjqm\nfetching features for id: 53QdLR7B8sP1O4AUvHEjdF\nfetching features for id: 3e0CoSLzOHHtjyYYmNUKaz\nfetching features for id: 2ii9WItOrdWGVVhtcPJxEZ\nfetching features for id: 6MGqxb12oR5TC6lIwQsANT\nfetching features for id: 59RkCJ1LeyY83HvlAcVuYa\nfetching features for id: 3rmpCyBbtHUW2oIMNiMYv1\nfetching features for id: 4vPOtuL3HbN9zYYV5oT79i\nfetching features for id: 3BjitDpDJErAxMGnqDpdMn\nfetching features for id: 6YNDwlmzhWcTHt70Mq9IIE\nfetching features for id: 2HCaIYjkvWSZzaSKUoOh3d\nfetching features for id: 5BwkPXxLhEvIdeqzZHobwQ\nfetching features for id: 1KgyVoQMJcYbX07QC3aIQC\nfetching features for id: 2m4Pj6wQvhiBITHzjYZpsh\nfetching features for id: 6bMF4pCGpD6JQxGcIaspw9\nfetching features for id: 6TvAEPaJCqLnLP4tLzbjFc\nfetching features for id: 4i6CSIdAlGHyz9PLn52Loo\nfetching features for id: 4ErkRZoMu2sErWBHNyWxNR\nfetching features for id: 101tKsNj1azODLsJbZZNiM\nfetching features for id: 1E2FDj8wsyqAsEGMqypswj\nfetching features for id: 7lPSEIzv0GFZj8KEnaFR75\nfetching features for id: 3OfttbJAJNvkI4VwtZD2E7\nfetching features for id: 0SRkuudTEWe2HOloI1Nssq\nfetching features for id: 18TM70njJRFH4Fm4ZuwLd8\nfetching features for id: 1i2U8DwAtHKFN75632DlZH\nfetching features for id: 7BB9AfVb5bjEdpMN8HafdJ\nfetching features for id: 
0xUt2TLACg0gLlDGWsceGd\nfetching features for id: 0yAS1Yhyqu3pZVIEwSxRxn\nfetching features for id: 1o0TEStQmRssKSu41iQnn9\nfetching features for id: 67SVvBhxFzSZiNoCGOfdTe\nfetching features for id: 4MprqMObZb1NiIpZaAGTAz\nfetching features for id: 1Vzr9AOdHw65UHoK1Vuxls\nfetching features for id: 1IsaW2BvYIs0iN0VjeB3j1\nfetching features for id: 67GjAr4MsMZgYuNW0pX1fk\nfetching features for id: 3v2DaVVxtgsO9Q3MnVTMO0\nfetching features for id: 6FfS6LTCO8qaDb4s9Q8l8T\nfetching features for id: 0Lo5YM7ZvIcUnbvnqLMUre\nfetching features for id: 4ryl610VB8Nb7aOKwuA4wQ\nfetching features for id: 3haZcHm3HydDVxm3je3Zmg\nfetching features for id: 5MrBeFD9PY3pCyB9lfB6Ai\nfetching features for id: 5pabbbDXPaxz5phgY8Mnut\nfetching features for id: 4pmiBE8CbMTELLsVQnUENs\nfetching features for id: 43z94VXUDXzqLiRy7wvK3x\nfetching features for id: 4nGIIeemCmkJ2I3hpC3x43\nfetching features for id: 1jYt67ObwzHEKaa8DxrnV2\nfetching features for id: 5QdkhR3MSMQRqbsW14KUXZ\nfetching features for id: 4efIPPwKyU2bCYhzdgudX9\nfetching features for id: 3Zs0dCcOoAkxwnYuKYCWZP\nfetching features for id: 42wmL3XqWPyn9C5clU7VMc\nfetching features for id: 3PmZmkgLfJtKdPjFF24uML\nfetching features for id: 6XNny82mljrVzgv1zdrWzD\nfetching features for id: 7nHLuhcD1kJyinxz2VabgC\nfetching features for id: 1dPUQhlNGEaDm9Qi1vcL7I\nfetching features for id: 7Jm0gQhWuBMzp0TI4ydm3D\nfetching features for id: 4SfgngnC3koo5c4YMMYkcc\nfetching features for id: 5Byd7d0WlL6qgmQb2SEoc4\nfetching features for id: 6rGuOsQMY1UsvyPcpvbE4i\nfetching features for id: 2V1jzWfUm1HdsqSt08fFtt\nfetching features for id: 2fGe1krTxovW7xsgiaHkrN\nfetching features for id: 0fXd3B3PbDr1fQvgZ2ZLfV\nfetching features for id: 6M1WuCPI2AOBiR2NkJ6GML\nfetching features for id: 2w1vbqV6Z3caGfLZdVLJyW\nfetching features for id: 0Kd48ETagbgGKoXqK2Ne3H\nfetching features for id: 7dHlQJqO7UWa9E2Vm9Z5S0\nfetching features for id: 1Ug9kQitfdcgyrWDjvXV6z\nfetching features for id: 0Mx6Q5XnT84BLk3e4ITJGC\nfetching features for id: 3YY9XB322S8n5CIobQsR6h\nfetching features for id: 3T4zow8ial809ciFETZ6wM\nfetching features for id: 5zTsicOFimBo9Bj7g9XSkv\nfetching features for id: 56OdLFVSIGJdtVpHrAKkd4\nfetching features for id: 64NwEdfzwUGorCzC9tYZFA\nfetching features for id: 6sm028EoGuUbVnzC6pDaNx\nfetching features for id: 5bveD2SoLJh6JPDyY9mfDJ\nfetching features for id: 4lTMgNop2AUchpTEH9ZAqF\nfetching features for id: 3rouQsYoHY3HfZX1glHxPv\nfetching features for id: 0jTX2E4WX4BtSnoVRUGZJX\nfetching features for id: 2F4ix8buVKDFhVH4yjOg2s\nfetching features for id: 5a8eQZcut98ho3QDqhDUKi\nfetching features for id: 5UrPacEd3L8dsjwSNPDL2d\nfetching features for id: 72MvrPaYBDVZfLapuVqsts\nfetching features for id: 2cLZdMFit9ecOGvwYfTeoO\nfetching features for id: 6Y609znJunsJnpW0RYf7iR\nfetching features for id: 2bcV8ePGpwBw7HZCGTvvwC\nfetching features for id: 3MeP8rMjztb9MpAlBaO6cU\nfetching features for id: 7g599RRZDUJ4J6kCAh7JbS\nfetching features for id: 0nmSuqq9khDg6piXYgyzOd\nfetching features for id: 5gkeDbOU56XmoNMcecmbYr\nfetching features for id: 1tZuun2k2fmLx5rF5ja0aX\nfetching features for id: 4rbYdWpcjXi02Tm3hvmYlh\nfetching features for id: 6KfVhijRswRpWIuxq7Eqlr\nfetching features for id: 4mmoOJwXcf0nEH0Z2wivAV\nfetching features for id: 2JZbhUQLyXdBtsJxGU339u\nfetching features for id: 2boQ0A6CkYdKhBn250tc4r\nfetching features for id: 3J46tDUbPxjGlfudGIaRrX\nfetching features for id: 2x8J4apESFBOPmGqMpLh0a\nfetching features for id: 5BSIMqJcGvM9D4wPw9CY4Y\nfetching features for id: 5dKpUFw9JC4F322nEYV4c4\nfetching features for id: 
4BkHHw26fIlHEdcUqeCkgv\nfetching features for id: 0VpHIkmXGSIJ8phPcRfQBU\nfetching features for id: 2bTAf7hPTbjiKzvJLqGGbA\nfetching features for id: 71K4AFZLc2hGYPiNncTJ1C\nfetching features for id: 2WX4wOulDgaoWJstc4r34h\nfetching features for id: 70EUOJoFKWuzPCyHl6l6nL\nfetching features for id: 3qC08srhXBJ6DEQ7CwPMB0\nfetching features for id: 52mslz1GDjNg4vmZHZGmHw\nfetching features for id: 4MRhEMGSsBTaBX8IBNCe1b\nfetching features for id: 1Ti8nW5kOkv33Swl3DZveQ\nfetching features for id: 0UCiPhWITzF3yU2rVChCVg\nfetching features for id: 7kxMwCdLI8SOXjLA9tWVyF\nfetching features for id: 3tvqPPpXyIgKrm4PR9HCf0\nfetching features for id: 4VFGjvg01giZD45wwmVUXF\nfetching features for id: 0xawXToxW2wFhSVrkP40iO\nfetching features for id: 7eGLhn6AJNiXJUS7VawiUK\nfetching features for id: 2ZVb4m91QCXD2GFbxC2OwV\nfetching features for id: 6Coo1Eje58ffuqt2JBDMPk\nfetching features for id: 0WhZRne1XMn2ruGRlGJVI2\nfetching features for id: 2aBa2XO23shN0lQjpL0G1K\nfetching features for id: 16HrfeIFAE98VAXV4WdjPg\nfetching features for id: 2cqBR0FuG3DWjwnurXUFdK\nfetching features for id: 2nCMkTfSIjCSN7OjdQK1AC\nfetching features for id: 5BkcuH5kyZC5ZJWfWHmahH\nfetching features for id: 5vKop7Xx5LtKcEqERNPoYV\nfetching features for id: 5gluIvIDxcH1BkHn58QPZO\nfetching features for id: 79T991XVouO4qW6yUIH75z\nfetching features for id: 7arkA3xW7Vg3ipw6VQinmU\nfetching features for id: 1znMBunwCIVomAUacZdkxX\nfetching features for id: 0wFZCKRA6ss5kFigKArHuY\nfetching features for id: 6F6F3EX3FEntkWZKNjvEoL\nfetching features for id: 37ycGnhGIt9nxYntGDXfg6\nfetching features for id: 4cj5wrbKodTgwR8AMqdYUv\nfetching features for id: 65daZIvb8sLV1QtYkziYfp\nfetching features for id: 2U8i316GHJMluI42xqbThK\nfetching features for id: 1D2VIwDVHzh5n7nkGk0vij\nfetching features for id: 4DdICLH2xKTy6d4LFS5WjI\nfetching features for id: 603aCBXRNkrSTiM4uRGkbj\nfetching features for id: 2EN9rD4AlOBPww3ZtFox5y\nfetching features for id: 4MFU8kCLOQD9nV03Gfvrkn\nfetching features for id: 71UG7fxAh0mffJpyC07SiX\nfetching features for id: 7hyuFoept7slNptV126UwW\nfetching features for id: 560ROSxJmpRIIp89O1AVLB\nfetching features for id: 4UPksaq5ggU0ZchtmHhtDd\nfetching features for id: 6QwHXIZxcRn1EH4FM5fRAM\nfetching features for id: 76PjPbGl8IaVqJWV153oJn\nfetching features for id: 7vYA9ET5AUqJt5pBbhKmcB\nfetching features for id: 4Su0s11An3mZPnMiPxD0fn\nfetching features for id: 57GQHVnq4UIAgxU38lJwUC\nfetching features for id: 0UglrUqb22msRsDHqDmVTY\nfetching features for id: 76dD4B8fONKkIXxSfFTM9Z\nfetching features for id: 6rd3jnfFTozqphQfktsdVo\nfetching features for id: 5TrvERHRtiRshORTehfs0w\nfetching features for id: 1ZI40a48FO8d3OTUfAdYwn\nfetching features for id: 4QHapHuNmFvbvK9L3a8jmP\nfetching features for id: 1V97AJ77KW3BnVYCwPwvfF\nfetching features for id: 1PYpdnXFzFO4VM5K8G4PCu\nfetching features for id: 7BNihfYAzfKrZ14N4NziFe\nfetching features for id: 2ITpt5mcvMOI7kIJPqUvbb\nfetching features for id: 12nrPFtvEUVFdbN2cM3oJS\nfetching features for id: 1atDDBTMunTF3Vwn2jpTLV\nfetching features for id: 5JBHz653h64fYqdntGj1rX\nfetching features for id: 4imOI9rBD6RW7SgDjyWp9N\nfetching features for id: 33XiyZ5JzpdfbxW8yv1Qnm\nfetching features for id: 7EsyuIBXcGcEWGdkZMAIJi\nfetching features for id: 63W5utSNaHPHm2hWD4jF7u\nfetching features for id: 0ZQ6ckgerHYM5mOUxTl99Q\nfetching features for id: 0O6rINb71GYZQoea5fxTqf\nfetching features for id: 7GzisMgxLZgCi0Ed3yTw9y\nfetching features for id: 0SmCDPTRx0gqqDGjGHeM2K\nfetching features for id: 58QCFHBuRF8Fvkutf0HYAy\nfetching features for id: 
5O9sIXT7EBkvz6f3yZCuUl\nfetching features for id: 6ydpo7H1hyzjnDhDKm8VqO\nfetching features for id: 1RE85ZBVGofX9JfnhNDaER\nfetching features for id: 03cYkjvuyZ4jACwo9MYEZW\nfetching features for id: 3aXcJDHfMtDRZ9KLuzScrB\nfetching features for id: 2Js48tzQZThEpumKgcgxWl\nfetching features for id: 3SOxNBtEd95JHvdSIGhXnj\nfetching features for id: 6HXiCCertmHIfuVHDdZ8QG\nfetching features for id: 5naGmbqRY9AlitnRgbw0uX\nfetching features for id: 02oIcYlpfTq4p3ssQnyuqG\nfetching features for id: 3pEwXiu1AVstyAluolJnW7\nfetching features for id: 7hxha93ckBwTck8s83Pu9b\nfetching features for id: 31ivuiF6QnKMJBscB3tRmM\nfetching features for id: 4BOvsc4Orv9LWNfSEFbzXH\nfetching features for id: 0KOE1hat4SIer491XKk4Pa\nfetching features for id: 4Gx5o90mo9HYZxE9GoxcP8\nfetching features for id: 61xuOY4bOj3Z75SrUQ2Aqa\nfetching features for id: 2jglHyoBmjsyQW2j0CAuhh\nfetching features for id: 4kVdh89jfzuU3TB1L7Lbt7\nfetching features for id: 5XgCk8ikjfTx02rgxNULEy\nfetching features for id: 2Jy7WpTqsa8yJtrdvfse0o\nfetching features for id: 0lMfJ3HjAXUKL517ePufmc\nfetching features for id: 4VU6RQBoryrzDh9YSFT7rz\nfetching features for id: 1EJQrVf9I93wlkzJAquRWP\nfetching features for id: 1cdkzx4QGDL8V3J8I2iu1V\nfetching features for id: 6HDakUMdK2KcQZZ4TwyAbB\nfetching features for id: 7LJDHtkeSLTpnecBcQRl93\nfetching features for id: 1woFG35C47HlZ8v4hjgmMx\nfetching features for id: 2O5tP6uuOZBhVwDeO4T6Hk\nfetching features for id: 6wXVUSh5lxvulIR1sSWaEz\nfetching features for id: 6IuK4tKidp5nBRnGbHTbzX\nfetching features for id: 7bhoxrOErOoqKUXJ6CA66h\nfetching features for id: 3hYBWJCHTKaYRjFZWR0cfe\nfetching features for id: 7BiRyVxIimvgnlmZg4vSi9\nfetching features for id: 4w0D62o5qwUPtrcTTAHe7d\nfetching features for id: 6P6adjzPbo1Ukb54i9D4rt\nfetching features for id: 545PsoburZUtR9z75I2wjj\nfetching features for id: 6CZA78dY3Xx1B31y0BP7mg\nfetching features for id: 2iw33rWGH2cjSFTkFJRoUf\nfetching features for id: 2BOdbqFnRer3wiYNMaV33W\nfetching features for id: 7skXVFIQNnTNwBhhCYMKbn\nfetching features for id: 0af0cAINmDtjCXGJfb2GAf\nfetching features for id: 5wNgjOGEWjuCd2mj4ynRM9\nfetching features for id: 4AmX2GyjCRyk4WMMmiqUjd\nfetching features for id: 5A8Ohrypi99jnyYICKHvPM\nfetching features for id: 71UPOlaPow5CbwGvLmPmDQ\nfetching features for id: 4HIVG4xTUCk6ruZuXKz6k3\nfetching features for id: 1cD8uv6vzf0zOyK4YtqMsa\nfetching features for id: 21f8rcouzIT9PR9j78f5MR\nfetching features for id: 2bLLKpVglQgnxfRAdmwtQz\nfetching features for id: 2SqgxBocr9hhRXkzkD1HjC\nfetching features for id: 21vO5hxkYIvnbi4S7tEuU9\nfetching features for id: 0RyMaI0EvCiiXBx96vwcgH\nfetching features for id: 4D1IQ6H104u2XEpqQPEg1V\nfetching features for id: 0S9Kq7EHqTfJdy4nZrqaLA\nfetching features for id: 0bYcOiKMwouNJkD00d5mRd\nfetching features for id: 2Ki680uvSg17E1p7VLZvVL\nfetching features for id: 5eB92pGxhuDFZGtfD6tWvj\nfetching features for id: 2373nwcpoD7IXHmqQBmLDl\nfetching features for id: 4xiXXumq1Hn3fHuYqTT6jl\nfetching features for id: 1glf4huG7cQpPYQ3ho7vuY\nfetching features for id: 2Q9gMRzQF3PTuJ3dCVVwmx\nfetching features for id: 1zXpbZZZ9Lb81ZrTqOzW1m\nfetching features for id: 4vSa14GE2qFNtyRp9Twc9k\nfetching features for id: 5joPaFWU96257ya2GauZfU\nfetching features for id: 50bub2xSIii5mka5owOPHH\nfetching features for id: 1tuivE7XrJtvLu7qkMl6Zx\nfetching features for id: 1jDsl8ikPxRAQcZCkRFjsd\nfetching features for id: 2HwrXACzxpTTFRbYVtOoyg\nfetching features for id: 1sb7AGizHorLbZA2mpNP1E\nfetching features for id: 1CfQZztUvK8D4aEJ9OA9IX\nfetching features for id: 
3qVGeXOGIBaLdiDFoqZ7nd\nfetching features for id: 63ad8aVVsWieYeG7oqW4Kb\nfetching features for id: 3djP82UEfvX6YkM2HGg98s\nfetching features for id: 5PRBOsNq3rc59vgLnX217G\nfetching features for id: 1VV3bGtDOniEKmgyqfEAfS\nfetching features for id: 72BDusX5digwIbCTDd8QXH\nfetching features for id: 4T7KB2dRMyQ6G7Ws1LEirm\nfetching features for id: 2EYXCRp63iQdmeKHXN8Rrg\nfetching features for id: 6J0tg6nslInIHwk9NR2DL8\nfetching features for id: 5jEsZaQqkEwJW0yovgo1h5\nfetching features for id: 6JxLKiLtOlPV0XHbYJsvdl\nfetching features for id: 30nTKl0xDGt9EQVrAEGJKY\nfetching features for id: 2Bbz2NbJ0m4jwVizwLYHxC\nfetching features for id: 3ogLyZFlYqXC0ozBjBm6Nw\nfetching features for id: 7hYRSuBF8ZfV5gQlTWVqoX\nfetching features for id: 3VVRXz0SxxyCsC59xemWT7\nfetching features for id: 0hRECUb42yXfkSOS9Un1qf\nfetching features for id: 3XIehPd5pZmHDpjJlAyxrG\nfetching features for id: 5aIvM7Ov8VcKpu6lEr6Zd1\nfetching features for id: 4MFU8kCLOQD9nV03Gfvrkn\nfetching features for id: 2ub5Un3ktZ4L4ii1gx74th\nfetching features for id: 2kOp0KA5cbubEgubr33xq3\nfetching features for id: 7pDmkFLinntkJnSl1TyqDU\nfetching features for id: 0KF0vGxM2aUvJB7N2I5who\nfetching features for id: 4eLVf9XRDtMOe3X5KmQlpI\nfetching features for id: 6LHzCuLzfZQr0V8eZD9TPC\nfetching features for id: 0t22pTUA8MULIomcESJG1S\nfetching features for id: 6TO29gYhF5EHRvYPgit3EI\nfetching features for id: 0MUli897soECfYgzY3le4i\nfetching features for id: 72vR8JFvBT1PLYabJGyC4a\nfetching features for id: 1jcUhX1ocqQdHsQm0mRKrX\nfetching features for id: 00kG3Kz2rxem4XswOmtF3s\nfetching features for id: 0L36sfZ3rOf3BwReJQZBRr\nfetching features for id: 1asba4YipSYUk4WY6A2O1g\nfetching features for id: 5C94uTWvgOeUYbdPTUTGpk\nfetching features for id: 1Gw1tk0DDbGkDgmSmyi4BF\nfetching features for id: 7h2u2fQVAEL3EOuTMjvBLs\nfetching features for id: 6bLyL52pIKd3TGgNDJeQPn\nfetching features for id: 0bYjPXZEUYYhMwmd5KpVK1\nfetching features for id: 0zPpqu1oiqRhtc186hvUbv\nfetching features for id: 0cdd3tfG1tQMVyafEAY4N4\nfetching features for id: 7FYlck8UYCdZkkpvKXfWgu\nfetching features for id: 3LuHexaqSVWyVabEwR3UaO\nfetching features for id: 2AObG8D3AVTKVmdLWWSsSd\nfetching features for id: 5Lst1j9rvgyzaCciTQn0v9\nfetching features for id: 3fEIQIVqk3r5ISmiU1Lcbb\nfetching features for id: 58p1pH8V4ZLfJ1RusdeUIk\nfetching features for id: 53HP1YlOvzYkImudH6FN6K\nfetching features for id: 0R9MMH16IjnOIZXiQ9uvdj\nfetching features for id: 2v4HD5xA217fJLGJSEgj1g\nfetching features for id: 7cUhjPDUH3S6EgkgF2euQd\nfetching features for id: 4j8DrdCiiVRWw1iRfOZcQj\nfetching features for id: 2wLDsBiUgoOXJoxtCDm39a\nfetching features for id: 7H8VmgeUtAzVG9jyEyyMR1\nfetching features for id: 6dEKQOQChSmEKCrcfXdsvR\nfetching features for id: 2DhT1bCyuYmBkV1kGDAgrH\nfetching features for id: 4ZUoxQRuDxESt8m6bSNDlp\nfetching features for id: 4L5XnD2PMJrc9A4Mcp1asi\nfetching features for id: 2znDFdRm8TV6lbzmxnTPJc\nfetching features for id: 5hFanCfwPB0araPLGpEo79\nfetching features for id: 2Co8otif6KH0pIMwzNrIdf\nfetching features for id: 6CJe6vTWtIJB4sa8csqtfv\nfetching features for id: 45tNymiGXb2afxk7opDlmA\nfetching features for id: 2yS6Vq5ErsiCLKsgTfE7Nv\nfetching features for id: 0J3nckOfR9lwUdGdsPnXUX\nfetching features for id: 23eEGOLfI2pFof1rmQNpg5\nfetching features for id: 4QVLwVgNxqTp73Gfg6pS56\nfetching features for id: 39VqKubwf5yfOgAs00OOBi\nfetching features for id: 3U4fKeMEBEPTdDOjAymS40\nfetching features for id: 0y4WLsDFcdhGEiyGrTXwV0\nfetching features for id: 2LlLifYZDCgja2m1uuFZRi\nfetching features for id: 
4rmB4N9wNLiq18OT40mZvd\nfetching features for id: 2EuyCCWC1aJlQKjcIANOUN\nfetching features for id: 0mD2pnYVF8DB3oqtY5KM8h\nfetching features for id: 0gtlgRyGc3dSjMkWQR3DST\nfetching features for id: 03QGOXxk0HWLuSd4sHeZrL\nfetching features for id: 7c4lxAjzuxcRvUdZrKf2Kk\nfetching features for id: 2fdSWm3CHTz5ljdESiV2hT\nfetching features for id: 0aZrrEDEqwCxxlRdXLXKJK\nfetching features for id: 6blxz1Nvtv0u0EznvX0KEa\nfetching features for id: 7iPiOABYfkx6iCTAhtVBbm\nfetching features for id: 0sOJ0OqYFdARjr4lGyYaq5\nfetching features for id: 2LzcpixYEhVKvoQq7wNbCs\nfetching features for id: 2UbUp7nSoua1BTw4zBAqGJ\nfetching features for id: 3PfdTUIPKYym6QAhb9JuZ0\nfetching features for id: 13HjLIRtAmIBafMshw0TRE\nfetching features for id: 6FH8qFGw3k9x9d1S3N5sTN\nfetching features for id: 4IISGbUpoUAXqDdZews1ve\nfetching features for id: 2UoVOxO8lFcHNgLHTQ0ATn\nfetching features for id: 65b5gubkkwqH1J5XuRwIvO\nfetching features for id: 4EuFMPZc9DLVC2Rwg8hXPK\nfetching features for id: 7ijsGsXIs2N44m7EncIyVo\nfetching features for id: 3ifchznIPMVq7aaXKg9IjS\nfetching features for id: 7lDVjmU3Iytoxz7mSdanyd\nfetching features for id: 2jqkVBMchQP8jiuEJQqQt4\nfetching features for id: 2ulPFs59dOCrd6JLkvNIJR\nfetching features for id: 0nqnKtIzXjccQBh9CqF00T\nfetching features for id: 7jon8ItjfPNUkn9cWtkhnT\nfetching features for id: 3yAVLukdNZvGDWLtr7oFGl\nfetching features for id: 7mp74IKoo9oaES5JwBWBWO\nfetching features for id: 1rY2Bku0YzUXkOpI0UKEeO\nfetching features for id: 0qVUFuuNIm5p643GyB9tP8\nfetching features for id: 5lZ3sezVWHTpmdVbHyMDBT\nfetching features for id: 1dYDIWo3LKxziC1a3stxwv\nfetching features for id: 7tpKhVU4RzCK8R6Ry0exhb\nfetching features for id: 4WbDOELVQqZ6I65U4HaZnE\nfetching features for id: 75rN9Bn96K1hBCkjyfRqCh\nfetching features for id: 5GvVxxwieOVIlZaCU9bG0R\nfetching features for id: 2aezcsIssx7sGfFZojBIxp\nfetching features for id: 4FcfemBds5WQvH9jglw85S\nfetching features for id: 4ELSyxd6DQ53eRxYLh1FPq\nfetching features for id: 0YkUuxTaLCYORCcvufbzuh\nfetching features for id: 66CXpOXmoL4OdAp9QOHfiO\nfetching features for id: 77dBhsboEbsoO8heaoHlJ8\nfetching features for id: 55UQqjMp0ehTEPw4LUP77i\nfetching features for id: 0rkG4IEfLd6VAhIgUVDr45\nfetching features for id: 1HpFPHXxeS7DNgOJCIxejA\nfetching features for id: 7I8PL2ZXS5CjmU6QPpX9Oc\nfetching features for id: 7DcXwLr5vbDroOgVKb3PzA\nfetching features for id: 4cyLb9jNC2oS0Eb0GLmMta\nfetching features for id: 5A8GIIn5vD5Jrm1etHXS3d\nfetching features for id: 0taWJ29dLX8811DhVal4UH\nfetching features for id: 1kjJZ1cuWV8cBICGmQJcy4\nfetching features for id: 6n61ehXOOma2LdikJ9A5wj\nfetching features for id: 6Be0IyyMfaE2XdOyeLHHJc\nfetching features for id: 1AZ5G23Kcn9h5Awws5Ekf3\nfetching features for id: 0ERzRJRLPikYbM1PkLIr4w\nfetching features for id: 4GPbOUUbitW3e6aEVccrhf\nfetching features for id: 146vkvsbGH7xQYRKjAYEhG\nfetching features for id: 1ur34rpRdcVfPcecoNZz0w\nfetching features for id: 1zc9G4b4zbRptS59Lg8VZu\nfetching features for id: 4N0VVY1BeWVpllqcwX7mNH\nfetching features for id: 1RdnFVwreBGgMyNn4DbFh8\nfetching features for id: 15w9bLXVzEqArvXiNNvAjl\nfetching features for id: 5CuDTzGY0Aik9gsFfdNSAY\nfetching features for id: 7Dtab9QxjMZYupetS5RBhF\nfetching features for id: 7LAMmINm2WrXyvFAD1iVnD\nfetching features for id: 27UeXVl5WQi3r23TzId3NB\nfetching features for id: 3TpkE4WMC7era1Nq5waLCa\nfetching features for id: 2c8mUfMLoIPHgKT3abqqbP\nfetching features for id: 15ztgB8XHpc0X5BcYKhNoR\nfetching features for id: 0BZut5Sp3KeiC8FWL99two\nfetching features for id: 
2jiRBa5wVKmJyQlR8nGfwK\nfetching features for id: 3ySpsI2dzdyGFHBg9ykLbi\nfetching features for id: 3C4Nmn6XSJPSvOFqMCLC4u\nfetching features for id: 736V9Rm2sj3fb3Ev1vIifz\nfetching features for id: 3F1cnbWEmpv5wsscAXaWsR\nfetching features for id: 64hNKDyAZw9FEUFqH6Im9o\nfetching features for id: 3FkJ5VtmZdIjGyTjK1pHlX\nfetching features for id: 6aCw4ADUIyVQsa08UN6adY\nfetching features for id: 6FXLVvGylQoxxpcA8mWGRD\nfetching features for id: 5EGrkw1itB7rzEZVVXprJ2\nfetching features for id: 6tsjCzciIajxisQtbUkOUa\nfetching features for id: 6kAROIu6CLVjulTqSPYmAP\nfetching features for id: 1tG3zCppK6MN013nd36B3E\nfetching features for id: 6ElbEXlLbjWiRgzS1HpM2J\nfetching features for id: 4lvd9RUYyT5BGG1URmeUJz\nfetching features for id: 5n4jYHakT92Ia0005tNYPn\nfetching features for id: 0Q5rG1IxfF3qXSxifWvFOE\nfetching features for id: 1O44CWBo8UGvxj4QN4CJtb\nfetching features for id: 1WetFltHqdcEaVUJ2EvRc8\nfetching features for id: 5c8oQiQd2bj1rr5qRqEuPZ\nfetching features for id: 1CIhm1lOEztiL32I4EOisf\nfetching features for id: 6m6s3MapXf4go7fmDJv3IP\nfetching features for id: 6pMAYD5AXLfqq6r8ciuvT8\nfetching features for id: 0HmlnR9sAgKSW4NBzzCBAa\nfetching features for id: 1pVFchEqnrpIEtuGrB4srT\nfetching features for id: 0JqEy94wlCGSkbvRnONQY1\nfetching features for id: 4G8asO3AOenqqlTzbQfUkg\nfetching features for id: 1WvcgDbT79xeAa7Mv0klkK\nfetching features for id: 5Hcr7IkfNZ1FNmikY9MNly\nfetching features for id: 3AQzEgGJ0VfJDedxatbgi7\nfetching features for id: 228WP2hQYq8IRuZrp6IqBd\nfetching features for id: 20iH1uibQIUqNFk3wzG7AG\nfetching features for id: 3898C4AbdbptwYet6547e5\nfetching features for id: 7hW7DZ42AY4suLWXymcnXe\nfetching features for id: 2kRjJbZDhbrlkO9wDfAwqt\nfetching features for id: 4ep8BNCyKcSj2zFbeUenSv\nfetching features for id: 4Z7iDFkWNxRfjd9jvLgbbI\nfetching features for id: 5m6N98LZkgqYuuy8RMILdm\nfetching features for id: 4dpufyK2lWWG0Nb71zuEpV\nfetching features for id: 4J0aAZIzCpgFO9ejrNj6Mb\nfetching features for id: 3EfVzkwQkJm26BM6rywtMj\nfetching features for id: 2hS2lJrCKNWJfKZdaxzjS9\nfetching features for id: 0CEIKXrsriBeQWTTE4eJvt\nfetching features for id: 0s7UEtczBGK68zYCi1y6W8\nfetching features for id: 4UcxTnA6C5vCW79PIZ38Vx\nfetching features for id: 6L2woktiAuW35BrcROmhW5\nfetching features for id: 4rqZUbzWuy93LEr5h8Wj2c\nfetching features for id: 4jPdrkQyXbGj3WEAnw4UVT\nfetching features for id: 0WO97JiTY9A9SB4kAelsTq\nfetching features for id: 7uNVqVGLikR52Cj0BOFnQ3\nfetching features for id: 0t0lQGbNepBQLTmk8HQtMW\nfetching features for id: 0WiKz86jAqiP6ZnuAZv8nz\nfetching features for id: 6nWneWHV6S5FhXkVBeMoLE\nfetching features for id: 5fSMV4bZ7qaSIdvUMlxDpR\nfetching features for id: 23kC1wLVajqmlKE0gTVGMI\nfetching features for id: 5pntv0IjUKlK1d0jvwxmHa\nfetching features for id: 6gKqhbNsePszT9pyRtBlKN\nfetching features for id: 7ykdJKyQz2Al7pX9j4Lfjz\nfetching features for id: 4PeVnBSbbBpGMkhnXsicTl\nfetching features for id: 63tX6UDIPVPu5nwCmV44K8\nfetching features for id: 0NWZ93QeTeS5iVRnMxlAuc\nfetching features for id: 1i9KZctiFjJwuD2LFTThqN\nfetching features for id: 4RgH6QNzimOOL0SWdMhpzY\nfetching features for id: 0zpgTi6oebwADr9jFgPcCk\nfetching features for id: 0kTsnqvga2TQwcItM5Gtb0\nfetching features for id: 06oW9gZWV5nfoaAoT0MsdD\nfetching features for id: 0wGKdgfF9X8KCdZjfsvNui\nfetching features for id: 4RlxTX3OM1ISl9DIRctFw4\nfetching features for id: 306COoMcHZSFcXJmfn5bLs\nfetching features for id: 7hXngqMDRqkAFrEIBy3ewq\nfetching features for id: 08JqHYdrxzYA1mDEgZhpYV\nfetching features for id: 
3ln9brkScMKIE9MsPirzLb\nfetching features for id: 1ko9hCONG4gae7DQnsUiZV\nfetching features for id: 78fBKpVQLaBQheVqYf991b\nfetching features for id: 01QicLin8hNjIcUDuE0nRP\nfetching features for id: 2ago9Xm0T613VHZpXiqLRh\nfetching features for id: 6Mm5Iy2IghUBMbwsrQg2hM\nfetching features for id: 1g6b5h7Zyo54Nub1aHlHbf\nfetching features for id: 0AsLUbhdZkGn5ZIN3X81n2\nfetching features for id: 1lPWPX4aVUbMHBz0XSvUP6\nfetching features for id: 5vxTKdtuZdPVMsjHpKdAQH\nfetching features for id: 3qRqQmEgDHYaM5rT1Zp4kZ\nfetching features for id: 3cFPZOvrPdyjmZSoXh09Db\nfetching features for id: 40yJppEKT0Yud2RUsXb9tk\nfetching features for id: 3sHGi1Ldr0JfoeZMoV9sNO\nfetching features for id: 7uJB5WRcQS5QgLrSRQpp4A\nfetching features for id: 4jR1ubOUrzx7GOmHpRCxRa\nfetching features for id: 06HBSxmvh5kXGUX3dS7RhJ\nfetching features for id: 7iSG8g8HVyGz8pgMD5cG6B\nfetching features for id: 5D3wNnaoeXETuEDg6r3Wdm\nfetching features for id: 2mwiLZNriYnykYQrIe5uK0\nfetching features for id: 2UcS1cuqNEioUR6hYJMaeY\nfetching features for id: 4rufn1lwE4IUQJWmrOWxLp\nfetching features for id: 4uKyLqJHDjT6mdA22ofxuM\nfetching features for id: 0Q01jmXxG9gEZi6qSdX7Ss\nfetching features for id: 5Yi1Fh3yymr9lwFYFCjtk0\nfetching features for id: 2SunrmrLd22GYqNI9kRSPP\nfetching features for id: 00UewKyVT1E2BawrGAwoxw\nfetching features for id: 3GO13GvPS4aai7ZN2Fnaur\nfetching features for id: 2t8n9fRJQHW7yaUOOw1g17\nfetching features for id: 5tt4gWT4Y3Vwwjqa7VNeIJ\nfetching features for id: 0kfbQsz5j5ifJmtIn9LCeH\nfetching features for id: 0EP7T1h041oiEfkNCqbST2\nfetching features for id: 1q4tN454Sqk0g3OBFvXxym\nfetching features for id: 1FpuUMrFc7yQ8lVUolmFiw\nfetching features for id: 0ryQlPz1nO3MFn4N7ij4pO\nfetching features for id: 47842etv4tqCpMXo3iQ9MQ\nfetching features for id: 58i8TipAFPTXapgSOyj8AK\nfetching features for id: 1S5ZByB4AaumR96wpHMRcV\nfetching features for id: 2BPEPkeifa5LoOg2Cq9bkx\nfetching features for id: 4GwosqCFTn7RKfyPgy3kWW\nfetching features for id: 700L8fTVK149vkRtfcHvk6\nfetching features for id: 3wS2DFx6AUsk6XgociYWwq\nfetching features for id: 18emjfeYAJGmddLuGudbzl\nfetching features for id: 77HCco7fzc834OoV3XNzW2\nfetching features for id: 1uCqMjz7ABl7dPwhzPEzIe\nfetching features for id: 43tXphvUlXym54Z0cg1rjd\nfetching features for id: 3EQBI1Fiv7z3QfqTeb9Tqk\nfetching features for id: 5uPKJXCJse6seJX3DegLDN\nfetching features for id: 1i2LC3Qc4rqTyyKQloxb0T\nfetching features for id: 3YdXeDgkVDM3vZm5wxw2AI\nfetching features for id: 0IenVuQhHGEXycoBh1TQZz\nfetching features for id: 0oJkptRc7sRzp6Fy3lSAib\nfetching features for id: 3DgIaxMZ7gwc4EEIOtor6S\nfetching features for id: 7orZfo1mFmGZRCI6j4ln2O\nfetching features for id: 6NWuPF6vDIL0N5TTRRlb9x\nfetching features for id: 1gPcCNd9BjCcBWPSucnyJH\nfetching features for id: 5FRbCNMJX9Xe6FUnVaa7mS\nfetching features for id: 4OncTbCLRRozq30FZfx2Tc\nfetching features for id: 3gBkaIy0nE5mDiYeC5iL97\nfetching features for id: 7c7rF8Ojpkzulj6lpdZ63O\nfetching features for id: 3OHz1Hr64jByifpOS61exg\nfetching features for id: 1SuXJr2CYkE54fU1Fm6aEL\nfetching features for id: 53437y5LWc8gDEGlgUrsqd\nfetching features for id: 4jZrtPRNq9zlNbbCxTbBlH\nfetching features for id: 5BzMo1c7fhTeULjaTa6Nta\nfetching features for id: 6csem6tEhhkajgNUhIDFB1\nfetching features for id: 7y4pnfDktzZZVwWEa50PDU\nfetching features for id: 2hpumOs1fXt7nGj1OkG7TR\nfetching features for id: 1nGj8dPY6iV3wppAaCs77v\nfetching features for id: 0j36IBRfwm1Vvwz2q7Ve29\nfetching features for id: 6f8RdBMG6A7EMfVF7uOEqO\nfetching features for id: 
6g0ndHejSrzJ0CX43RqSq5\nfetching features for id: 5PZjtF0B9w34ivVdkX9UUM\nfetching features for id: 1v8oUOlXSyCnu47G3kjILN\nfetching features for id: 1f1ytbKtwlVmWELCg8rqjB\nfetching features for id: 2uaXefSuDDG0a2QjFKTA9o\nfetching features for id: 3Vqnf7ti0QVdLC2bT4dIw0\nfetching features for id: 27NMby3j7tjrSvexwfCgio\nfetching features for id: 30sO6VEV1pUt7jFk4ieA90\nfetching features for id: 2A1nMRmn1Z9FVSPjsX4hrl\nfetching features for id: 6DbFaoM3IQzZpZyiNuZJca\nfetching features for id: 7z72UdscXF0hhNUUYWLqUN\nfetching features for id: 2e5EMfbehH4bjvFCg5806Y\nfetching features for id: 1FKzZ9Z2mlXlxOp3FDteKr\nfetching features for id: 7tAwfqQWDA1dLKiLvn8EgH\nfetching features for id: 3LY9qYimjPrLSaoOxmliaV\nfetching features for id: 1QKoN8bfkLOr5RTIX0iQIV\nfetching features for id: 78H6bsD1Gz1NwpBYt9duDW\nfetching features for id: 0oR4XiH7xD3rEaJcQSeS2c\nfetching features for id: 7GCqhk5tefqvhZpCrsimd1\nfetching features for id: 1htnmqG4C26nDLr7Hx50aR\nfetching features for id: 3eo1eOOEkNU7Xst5lKimfF\nfetching features for id: 31yNB9p7R6CapMvmjcbj48\nfetching features for id: 0XsYuXWmbojk5SUHXqPXmP\nfetching features for id: 3d3DdH3ZL7W8wRDVY0ST5C\nfetching features for id: 11p8gGHiZ3zOJbU3CfFTiN\nfetching features for id: 6wP6XDZH9HiMs6ejL2R6nj\nfetching features for id: 4I6OrFObPlJNl1bP2KZTjA\nfetching features for id: 10CQhAezi1GXuUotRAjXqB\nfetching features for id: 6y37UUytUqxROAfwlCpXqG\nfetching features for id: 0rxUdgmRRjzISbhIOi0Di7\nfetching features for id: 0Xi2XCaLKGy9J92rq0AGps\nfetching features for id: 6pNzVVFcbpK1V9hSNxx5in\nfetching features for id: 5s9DL4WYuEuu5vQWln7FMc\nfetching features for id: 5c3ve0JNd9Qa3EcSFol2pU\nfetching features for id: 4QBAiXG5GqENph08jFmVmt\nfetching features for id: 6zDxzDBoxZp2u3JqjYBB2U\nfetching features for id: 4TkywTLvVmJH41jxRRXcDP\nfetching features for id: 5pQdkHkwQfLMkUE5UaTN2A\nfetching features for id: 41HVXlT0SYKg1kZU9znTBS\nfetching features for id: 02hIV5OSVZBRia0T3tqr5w\nfetching features for id: 0plC6iWLWj47wldCTc3oWs\nfetching features for id: 4UcxTnA6C5vCW79PIZ38Vx\nfetching features for id: 5umbw3dTR8vGjhsFLjZg9r\nfetching features for id: 6kR2BCzArduYNuJdtezM8L\nfetching features for id: 5EVWX6bTzP8YKsXex3mee6\nfetching features for id: 4hCdeVQKc1yPBnZ9D9B3Re\nfetching features for id: 39R6fdnQnRFNkEXIRrZLyK\nfetching features for id: 0YgdjmDh0OEGyFsMBHjEoH\nfetching features for id: 4U3NxdMXeHIGph2gjDyQqK\nfetching features for id: 0cb1KmRxv50ynSYpChPzds\nfetching features for id: 6ch6E0na67OZxbafYsHM2p\nfetching features for id: 04JFw8lYhqwFn1dXbdGfc7\nfetching features for id: 0uxA9vWnVtKxGizcfhufN8\nfetching features for id: 30xvqP3gbar6jQsLZrdJ2B\nfetching features for id: 22DvwrGNMvpikviGn3wSxu\nfetching features for id: 1FYsHsqLlTiUxcDqQqhy2H\nfetching features for id: 3h25PQrDynfVyIukbN5ceN\nfetching features for id: 7I3ILUr69yANI7eWOTrE4S\nfetching features for id: 3xagLqC8BB0lAJukr00TKV\nfetching features for id: 27FhNKYP8B0F3teHdcD5ob\nfetching features for id: 78Uz5O2o00RpnNNlvOMl7K\nfetching features for id: 01IpNtCBRCrPj91rgARPl3\nfetching features for id: 3nBzI7PjzcD0emsy8l6tfa\nfetching features for id: 47961iZtJPz2x1kCCAbhHl\nfetching features for id: 4ycahhaQdjoN61KNFxvGA9\nfetching features for id: 7E9VSMXD5RuAqLHIA6TQuT\nfetching features for id: 1iTu0QWGXjKNOxxna7tyl1\nfetching features for id: 1vOkVhorNtzJhRsDB5Iubg\nfetching features for id: 0rRER2rlfTJ3LqFqv5Knpj\nfetching features for id: 1qOqhSno0LFwPgXlcK3umM\nfetching features for id: 30gCedymqDf867sbH3CGyp\nfetching features for id: 
34oA8HWchuqvJReQNV8VTa\nfetching features for id: 7ae7k5vJV8BGAe4xVp7dng\nfetching features for id: 0vntIQmeOMe5ybYfwghVxV\nfetching features for id: 3au2b80BEdztCcqk2wj0hG\nfetching features for id: 6NssDwHnosForAnJZyM6p3\nfetching features for id: 3M3sDDsHBXu7TkdyahTj9D\nfetching features for id: 4K5OFH9lQKVebny27xmgW4\nfetching features for id: 6U760jf1bo0toem4F9WF2S\nfetching features for id: 0d7vdZtSkKwUt5H4gxoOGz\nfetching features for id: 3wNim33XFde28o2O84JzFK\nfetching features for id: 21VxoynyT854WQSMCNyOOS\nfetching features for id: 2S5vzLXND2ZQXhGQQjChVf\nfetching features for id: 2s45jzEVXjE0Y0Fu6CjG5u\nfetching features for id: 0PdZBmBUw8vNqjCROMY50H\nfetching features for id: 2BwqYkl2WmhBC6ecGfN4YR\nfetching features for id: 0SOIRMizDwZQeNS7M5MZaF\nfetching features for id: 1UDfT4MPWSIDMc7HoVAOFo\nfetching features for id: 3yOYPyxa1KAhda8OoygJpT\nfetching features for id: 6K8EH4jCNjadapChPeiGAW\nfetching features for id: 2hTFLJSd8QaXbxOcltHfaQ\nfetching features for id: 4RpzG7AtTknnq1yGNSmo3w\nfetching features for id: 6vPS75nWOKkuH5WTLD8hDc\nfetching features for id: 2uNamFXOtxDyCO8cLGSUuK\nfetching features for id: 1DpFGVHWfpVrzy7k5H2yTe\nfetching features for id: 7gAn0WTuoRXL6QkFTqJiRO\nfetching features for id: 6r9eDOWuIHId6TAdOrZOGj\nfetching features for id: 51P4cUN0BKFM5tKmF8xpRQ\nfetching features for id: 6NYAr0Lx5iWQj7lDwh6jbO\nfetching features for id: 724k0eWcby90V6kIVbozGq\nfetching features for id: 1OT98aV87tkMKUefcKOCiZ\nfetching features for id: 54BnkUuJaiwdboFozwW6j2\nfetching features for id: 0K4019RRyJNxDgsbm0lUXe\nfetching features for id: 6EclIlaDBBDzQPUYLJpDXr\nfetching features for id: 0wh3HztmNQzf37e4mnol78\nfetching features for id: 2BCjoFA0nJlXBLRf3164bN\nfetching features for id: 6XRkqwxyWLyEGIN8pnMqAi\nfetching features for id: 76ELJgkoU0CdwByF4hrYkc\nfetching features for id: 0eAvTDkEwjRMdD8gdMqB18\nfetching features for id: 0N76QYbwzg6WLNNjJiDhgX\nfetching features for id: 7HxJgi0wLX58NrhrcViwjg\nfetching features for id: 5Raqu2XetM9zEQeNi1tPR8\nfetching features for id: 1VjxCSzzRYTpzKkutZWnhv\nfetching features for id: 1dXcUx5gQ8rkHctSsRnZuH\nfetching features for id: 1tSk1iVz7FI9BVAY7XRc0T\nfetching features for id: 2NbqctVEPkPB1p6ONwzSBA\nfetching features for id: 5DDGzWe3cSR5rJ2WfycEQI\nfetching features for id: 3XYBPq4BVWwRMc0vsmDMxg\nfetching features for id: 50lNYns1QEjDKi8hl6p1c1\nfetching features for id: 6PpxbyZHlyQdrNlpT7wRik\nfetching features for id: 1UMzgGjcCec8QIbA8VuxBB\nfetching features for id: 4uK6BtqhRoEaZopGJ6FCef\nfetching features for id: 5UcgtnKxuy5f8u1Amshm9n\nfetching features for id: 5j6Gl9NY0wZfNd3lNk4Ulf\nfetching features for id: 171cWdP8aG0KMv3OIuNZ49\nfetching features for id: 7kDwUS4LRsSShoPkaoHcfu\nfetching features for id: 39woPeTxfi48gt4qJqXIoV\nfetching features for id: 5lFQmqmOVEH6XlOsZOk9TT\nfetching features for id: 4M6QTzDNkmUuz7nOSAehgo\nfetching features for id: 1FXX4oUbO6eJEszwR7yQa8\nfetching features for id: 4DVSn9CXSXehYTkHXZl0Sy\nfetching features for id: 5HvpTcuCZ1oC1cClE83owm\nfetching features for id: 6mS99AQ7ZeG7uvrc3dqUyc\nfetching features for id: 0gkxAsG2Lng71KBT31TZNd\nfetching features for id: 3xnPDI1J1KTlqF6Tiqc2Oe\nfetching features for id: 79UEgHE04gFuv6S6Vqi07N\nfetching features for id: 7rxev6ErpRE5VYaamjs4T3\nfetching features for id: 6iFYsQckCCtGsabW0rXEPn\nfetching features for id: 71DgyAgkGnisVjQmAC0IFf\nfetching features for id: 1Uy1yJyKugigHiP4omXQbZ\nfetching features for id: 33dpZRUyyjAvGkS8xDJkkO\nfetching features for id: 7Gk1QKi2BAZCnrYlrYEDjC\nfetching features for id: 
fetching features for id: 1msykqPE0qoZig4nb9khI0
[... repeated "fetching features for id: <track id>" log lines truncated ...]
0BJHzqk1agz54Q3YzvcRez\nfetching features for id: 5ssXU6u9HqUW87W2gglY7F\nfetching features for id: 1iu7A7vOv1andAcWxcIPPF\nfetching features for id: 24fo7tdfOXJ6mlRl8PMup8\nfetching features for id: 3KC4PSP9TGhz79XCPYTNoS\nfetching features for id: 1M24NmdiAvtOCxkqyfVFdJ\nfetching features for id: 3GKYmZUBoj3mD2SBUmnAHB\nfetching features for id: 2kV7S1nOw4Jfo8rapygdmd\nfetching features for id: 3E5ndyOfO6vFDEIE42HA8o\nfetching features for id: 5HVBoRtqnH8ucSm5KFdGYG\nfetching features for id: 1T5SRm5reEOzwZHrQ6dYFR\nfetching features for id: 2LEgF8sPiwfaxVyAFWHtzK\nfetching features for id: 2F4Th5TUzfOIGH3AgJprbj\nfetching features for id: 3MMSrLLxgZ63dljiQeD5Hq\nfetching features for id: 10cjs3lGQ2lDfD1mKkduhE\nfetching features for id: 3pEwXiu1AVstyAluolJnW7\nfetching features for id: 6i9vakbAAPjn7qEUv3BNos\nfetching features for id: 3mc4hySxLwW78CIR1R89dg\nfetching features for id: 2jfyf0Yspx4fQfP3FtDr50\nfetching features for id: 1ndoZF7C8Ada2SSNxBEsMj\nfetching features for id: 1rH9Pgv6D3qvZcRjlTNyUS\nfetching features for id: 6yeJW5uiuqb3rbUduZ5UBG\nfetching features for id: 5ycXdeVKHGBiGLpdkUtl15\nfetching features for id: 5zvOXJrzzUlvXwyuwZ0toZ\nfetching features for id: 2UJENcUk6T8kICFcuFtWsA\nfetching features for id: 7oLItNFdvAwu8wwMohOvZH\nfetching features for id: 5O11gVu4Sm3B7yjQ2rELoY\nfetching features for id: 1BogZ2vCN97dA7dYvTYeou\nfetching features for id: 2OZel3ZBLmTT1bZBWOZ8fS\nfetching features for id: 3QMloujG5pu38kQxiQxVxE\nfetching features for id: 43fPIcXcSaEAOaHywu4nsW\nfetching features for id: 6Gwqyk0JyoMo0oVHwX7eKG\nfetching features for id: 6nCRUa8xPE59AyOfZBr3Qt\nfetching features for id: 2MfIhWJD9lTXKTolmYdXCP\nfetching features for id: 3UCHPG6VkGQzVXm69rd6f6\nfetching features for id: 1hnTt2qyVYnxUfmCEdDUSb\nfetching features for id: 5UNgEkAGHeLHvAVNdi4okj\nfetching features for id: 1IV8T9xWxJc0tSEFJit25F\nfetching features for id: 4uxsv9PjV3Yeyn51RdWvGJ\nfetching features for id: 3GCrbnKJ2f9276Kx4KD5Y7\nfetching features for id: 3ConsOVpS2L6rejAUcljVS\nfetching features for id: 5pV0BHdDqxKnBrxPxjnS10\nfetching features for id: 4hFhb8YNgPc94Fq0NANpIB\nfetching features for id: 0fFY4JvHfSkChtwlGNtE38\nfetching features for id: 3zeTHjRg21dMPBzAUW3Vve\nfetching features for id: 4BOp0Ddc2K357ok52GC4zn\nfetching features for id: 5OA3zJIbQytfulfE9tXEX1\nfetching features for id: 1srKo8pRCDoPMIuLycsaQ2\nfetching features for id: 6KceGfo54oVMtf7mc2m9DM\nfetching features for id: 6PdbcsYRKKA8FEo91qPRqF\nfetching features for id: 17tMfES3QZ4eu5gmxaDKsx\nfetching features for id: 2mfo3AK0aZzTGcXD0LnLqx\nfetching features for id: 4W6VKRCjQw8tFRU8pu3ExW\nfetching features for id: 3yaDHPIsjORc43AdGbYvrG\nfetching features for id: 1mMSH14XSSjzLvNuf62azx\nfetching features for id: 2Zl4RbIpuFSLLdjDznEVU2\nfetching features for id: 2bvKFIpuoWJprEiMdu0TU8\nfetching features for id: 3QqafgZ1FJVKOA0Nl0yKxm\nfetching features for id: 4U3S2BgQeX25uW0fEPDSSt\nfetching features for id: 2oiLl1u7qg6LZZA595Ok4d\nfetching features for id: 6voBht3aVPwQWiwSP2fEAe\nEsecuzione completata in 1191.7362 secondi\n"
]
],
[
[
"#Inserisco audio features nel dataset Billboard",
"_____no_output_____"
]
],
[
[
"num_datapoints = np.array(output).shape[0]\n\noutput = np.array(output).reshape((num_datapoints,14))",
"_____no_output_____"
],
[
"# creo backup df_billboard\ndf_billboard_bak = df_billboard.copy()\n\n# filtro dataset billboard tenendo solo id nell'array 'ids_new', quindi quelli che non sono presenti nel dataset principale\ndf_billboard = df_billboard[df_billboard.id.isin(ids_new)]",
"_____no_output_____"
],
[
"to_insert = ['danceability',\n 'energy',\n 'key',\n 'loudness',\n 'mode',\n 'speechiness',\n 'acousticness', \n 'instrumentalness',\n 'liveness',\n 'valence',\n 'tempo',\n 'duration_ms',\n 'release_date',\n 'explicit']\n\nfor i, col in enumerate(output.T):\n df_billboard.insert(4, to_insert[i], col)",
"_____no_output_____"
],
[
"# converto colonna 'release_date' in tipo datetime\ndf_billboard.release_date = pd.to_datetime(df_billboard.release_date,format=\"%Y-%m-%d\",exact=False)",
"_____no_output_____"
],
[
"# inserisco colonna 'year'\nyear = df_billboard['release_date'].apply(lambda x: int(x.year))\ndf_billboard.insert(6, 'year', year)",
"_____no_output_____"
],
[
"# inserisco colonna 'popularity' --> nb: inizializzo a 0 perchè verrà rimossa\ndf_billboard.insert(17, 'popularity', np.zeros(df_billboard.shape[0]))",
"_____no_output_____"
],
[
"# inserisco colonna 'hit'\nhit = np.ones(df_billboard.shape[0])\ndf_billboard.insert(3, 'hit', hit)\ndf_billboard.hit = df_billboard.hit.apply(int)",
"_____no_output_____"
],
[
"df_billboard.head()",
"_____no_output_____"
]
],
[
[
"#Esporto",
"_____no_output_____"
]
],
[
[
"# esporto in google drive\nfrom google.colab import drive\n\n# mounts the google drive to Colab Notebook\ndrive.mount('/content/drive',force_remount=True)\n\ndf_billboard.to_csv('/content/drive/My Drive/Colab Notebooks/datasets/billboard+features_3.csv')",
"Mounted at /content/drive\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bf22b085e803595b054cdc425d911ea8b88215 | 7,959 | ipynb | Jupyter Notebook | notebook/Sliding_Window_Chloe.ipynb | Chloe-Girard/Foci_counting | 9899f8da202ab3898ac8874d0532cf4bdc6cb885 | [
"MIT"
] | null | null | null | notebook/Sliding_Window_Chloe.ipynb | Chloe-Girard/Foci_counting | 9899f8da202ab3898ac8874d0532cf4bdc6cb885 | [
"MIT"
] | null | null | null | notebook/Sliding_Window_Chloe.ipynb | Chloe-Girard/Foci_counting | 9899f8da202ab3898ac8874d0532cf4bdc6cb885 | [
"MIT"
] | null | null | null | 23.340176 | 172 | 0.564769 | [
[
[
"# Notebook use to check the result of the classifier, how well can you detect the nucleus .\n\nYou can click `shift` + `enter` to run one cell, you can also click run in top menu.\nTo run all the cells, you can click `kernel` and `Restart and run all` in the top menu.",
"_____no_output_____"
]
],
[
[
"# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n%reload_ext autoreload",
"_____no_output_____"
],
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = 8,8\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'",
"_____no_output_____"
],
[
"import numpy as np\nimport javabridge\nimport bioformats\nfrom itkwidgets import view\nfrom sklearn.externals import joblib",
"_____no_output_____"
],
[
"# Ignore warnings in notebook\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"### The following path should direct to the folder \"utils\", on Window env it should have slash \" / \" and not backslash \" \\ \" . ",
"_____no_output_____"
]
],
[
[
"# Create a temporary python PATH to the module that we are using for the analysis\nimport sys\nsys.path.insert(0, \"/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/ChromosomeDetectionChloe/utils\")",
"_____no_output_____"
],
[
"from chromosome_dsb import *",
"_____no_output_____"
]
],
[
[
"# Loading a typical image using bioformats",
"_____no_output_____"
]
],
[
[
"javabridge.start_vm(class_path=bioformats.JARS)",
"_____no_output_____"
]
],
[
[
"### In the path variable you should enter the path to your image of interest:",
"_____no_output_____"
]
],
[
[
"path = '/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/data_chloe/cku-exo1_002/2017-04-12_RAD51-HTP3_cku80-exo1_002_visit_13_D3D_ALX.dv'",
"_____no_output_____"
]
],
[
[
"## in the following cell in \"channel\" enter the the channel (starting from 0) where you will find the nucleus",
"_____no_output_____"
]
],
[
[
"img = load_data.load_bioformats(path, channel = 3, no_meta_direct = True)",
"_____no_output_____"
],
[
"img.shape",
"_____no_output_____"
],
[
"#view(visualization.convert_view(img))",
"_____no_output_____"
]
],
[
[
"# Sliding Window",
"_____no_output_____"
],
[
"### First need to load the classifier (clf) and scaler.",
"_____no_output_____"
]
],
[
[
"clf = joblib.load(\"/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/clf_scaler/clf\")\nscaler = joblib.load(\"/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/clf_scaler/scaler\")",
"_____no_output_____"
],
[
"import time",
"_____no_output_____"
],
[
"tp1 = time.time()\nresult = search.rolling_window(img, clf, scaler)\ntp2 = time.time()",
"_____no_output_____"
],
[
"print(\"It took {}sec to find the chromosomes in 1 Zstack\".format(int(tp2-tp1)))",
"_____no_output_____"
]
],
[
[
"### Optionally you can create a Heat map with the probability at every pixel that there is a nucleus",
"_____no_output_____"
]
],
[
[
"#heat_map = visualization.heatmap(result)",
"_____no_output_____"
],
[
"#view(visualization.convert_view(heat_map))",
"_____no_output_____"
]
],
[
[
"### Max projection and check how the result looks like",
"_____no_output_____"
]
],
[
[
"proj = np.amax(img, axis=0)",
"_____no_output_____"
]
],
[
[
"### When boxes are overlapping, only keep the highest probability one.\nHere you can adjust `probaThresh` and `overlaThresh`, if you find better parameters, you can change them in the function `batch.batch` in the `chromosome_dsb` folder.",
"_____no_output_____"
]
],
[
[
"box = search.non_max_suppression(result, probaThresh=0.8, overlapThresh=0.3)",
"_____no_output_____"
],
[
"import matplotlib.patches as patches\nfig, ax = plt.subplots(1, 1, figsize=(10, 10))\nax.imshow(proj, vmax = 100000)\nfor rec in box: \n rect = patches.Rectangle((rec[0],rec[1]),70,70,linewidth=3,edgecolor='y',facecolor='none')\n # Add the patch to the Axes\n ax.add_patch(rect)\nplt.axis('off')\n#plt.savefig('/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/data_chloe/fig.png', bbox_inches=\"tight\", pad_inches=0)",
"_____no_output_____"
]
],
[
[
"# Save the result",
"_____no_output_____"
]
],
[
[
"#path = \"/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/13/\"",
"_____no_output_____"
],
[
"#load_data.save_file(path, \"bbox_3D\", box, model=False)\n#load_data.save_file(path, \"bbox_3D\", binary, model=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0bf2770adf68873b4292cf1ecf938df342173b3 | 11,707 | ipynb | Jupyter Notebook | python-tuts/1-intermediate/06 - Generator Based Co-routines/02 - Generator States.ipynb | timgates42/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 5a815b1429c9b3be3c4e192239488c141deeb00f | [
"Apache-2.0"
] | 3,266 | 2017-08-06T16:51:46.000Z | 2022-03-30T07:34:24.000Z | python-tuts/1-intermediate/06 - Generator Based Co-routines/02 - Generator States.ipynb | timgates42/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 5a815b1429c9b3be3c4e192239488c141deeb00f | [
"Apache-2.0"
] | 150 | 2017-08-28T14:59:36.000Z | 2022-03-11T23:21:35.000Z | python-tuts/1-intermediate/06 - Generator Based Co-routines/02 - Generator States.ipynb | timgates42/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials | 5a815b1429c9b3be3c4e192239488c141deeb00f | [
"Apache-2.0"
] | 1,449 | 2017-08-06T17:40:59.000Z | 2022-03-31T12:03:24.000Z | 21.324226 | 484 | 0.512087 | [
[
[
"### Generator States",
"_____no_output_____"
],
[
"Let's look at a simple generator function:",
"_____no_output_____"
]
],
[
[
"def gen(s):\n for c in s:\n yield c",
"_____no_output_____"
]
],
[
[
"We create an generator object by calling the generator function:",
"_____no_output_____"
]
],
[
[
"g = gen('abc')",
"_____no_output_____"
]
],
[
[
"At this point the generator object is **created**, but we have not actually started running it. To do so, we call `next()`, which then starts running the function body until the first `yield` is encountered:",
"_____no_output_____"
]
],
[
[
"next(g)",
"_____no_output_____"
]
],
[
[
"Now the generator is **suspended**, waiting for us to call next again:",
"_____no_output_____"
]
],
[
[
"next(g)",
"_____no_output_____"
]
],
[
[
"Every time we call `next`, the generator function runs, or is in a **running** state until the next yield is encountered, or no more results are yielded and the function actually returns:",
"_____no_output_____"
]
],
[
[
"next(g)",
"_____no_output_____"
],
[
"next(g)",
"_____no_output_____"
]
],
[
[
"Once we exhaust the generator, we get a `StopIteration` exception, and we can think of the generator as being **closed**.",
"_____no_output_____"
],
[
"As we can see, a generator can be in one of four states:\n\n* created\n* running\n* suspended\n* closed",
"_____no_output_____"
],
[
"We can actually request the state of a generator programmatically by using the `inspect` module's `getgeneratorstate()` function:",
"_____no_output_____"
]
],
[
[
"from inspect import getgeneratorstate",
"_____no_output_____"
],
[
"g = gen('abc')",
"_____no_output_____"
],
[
"getgeneratorstate(g)",
"_____no_output_____"
]
],
[
[
"We can start running the generator by calling `next`:",
"_____no_output_____"
]
],
[
[
"next(g)",
"_____no_output_____"
]
],
[
[
"And the state is now:",
"_____no_output_____"
]
],
[
[
"getgeneratorstate(g)",
"_____no_output_____"
]
],
[
[
"Once we exhaust the generator:",
"_____no_output_____"
]
],
[
[
"next(g), next(g), next(g)",
"_____no_output_____"
]
],
[
[
"The generator is now in a closed state:",
"_____no_output_____"
]
],
[
[
"getgeneratorstate(g)",
"_____no_output_____"
]
],
[
[
"Now we haven't seen the running state - to do that we just need to print the state from inside the generator - but to do that we need to have a reference to the generator object itself. This is not that easy to do, so I'm going to cheat and assume that the generator object will be referenced by a global variable `global_gen`:",
"_____no_output_____"
]
],
[
[
"def gen(s):\n for c in s:\n print(getgeneratorstate(global_gen))\n yield c",
"_____no_output_____"
],
[
"global_gen = gen('abc')",
"_____no_output_____"
],
[
"next(global_gen)",
"GEN_RUNNING\n"
]
],
[
[
"So a generator can be in these four very distinct states.\n\nWhen the generator is created, it is not in a running or suspended state - it is simply in a **created** state.\n\nWe have to kick-off, or prime, the generator by calling `next` on it.\n\nAfter the generator has yielded a value, it it is in **suspended** state.\n\nFinally, once the generator **returns** (not yields), i.e. the StopIteration is raised, the generator is **closed**.",
"_____no_output_____"
],
[
"Finally it is really important to understand that when a `yield` is encountered, the generator is suspended **exactly** at that point, but not before it has evaluated the expression to the right of the yield statement so it can produce that value in the return value of the `next()` function.\n\nTo see this, let's write a simple function and a generator function as follows:",
"_____no_output_____"
]
],
[
[
"def square(i):\n print(f'squaring {i}')\n return i ** 2",
"_____no_output_____"
],
[
"def squares(n):\n for i in range(n):\n yield square(i)\n print ('right after yield')",
"_____no_output_____"
],
[
"sq = squares(5)",
"_____no_output_____"
],
[
"next(sq)",
"squaring 0\n"
]
],
[
[
"As you can see `square(i)` was evaluated, **then** the value was yielded, and the genrator was suspended exactly at the point the `yield` statement was encountered:",
"_____no_output_____"
]
],
[
[
"next(sq)",
"right after yield\nsquaring 1\n"
]
],
[
[
"As you can see, only now does the `right after yield` string get printed from our generator.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0bf2a230d84b344c2c9e79d105f4400a721ab78 | 3,715 | ipynb | Jupyter Notebook | research/.ipynb_checkpoints/Untitled1-checkpoint.ipynb | herminjjc/machineLearningPython | 8a46700ad15346c12df16a0c8674cf9bd331d97f | [
"MIT"
] | null | null | null | research/.ipynb_checkpoints/Untitled1-checkpoint.ipynb | herminjjc/machineLearningPython | 8a46700ad15346c12df16a0c8674cf9bd331d97f | [
"MIT"
] | null | null | null | research/.ipynb_checkpoints/Untitled1-checkpoint.ipynb | herminjjc/machineLearningPython | 8a46700ad15346c12df16a0c8674cf9bd331d97f | [
"MIT"
] | null | null | null | 33.169643 | 343 | 0.59031 | [
[
[
"import json # will be needed for saving preprocessing details\nimport numpy as np # for data manipulation\nimport pandas as pd # for data manipulation\nfrom sklearn.model_selection import train_test_split # will be used for data split\nfrom sklearn.preprocessing import LabelEncoder # for preprocessing\nfrom sklearn.ensemble import RandomForestClassifier # for training the algorithm\nfrom sklearn.ensemble import ExtraTreesClassifier # for training the algorithm\nimport joblib # for saving algorithm and preprocessing objects\n\n\n# load dataset\ndf = pd.read_csv('https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv', skipinitialspace=True)\nx_cols = [c for c in df.columns if c != 'income']\n# set input matrix and target column\nX = df[x_cols]\ny = df['income']\n# show first rows of data\ndf.head()\n\n\n# data split train / test\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=1234)\n\n\n# fill missing values\ntrain_mode = dict(X_train.mode().iloc[0])\nX_train = X_train.fillna(train_mode)\nprint(train_mode)\n\n# convert categoricals\nencoders = {}\nfor column in ['workclass', 'education', 'marital-status',\n 'occupation', 'relationship', 'race',\n 'sex','native-country']:\n categorical_convert = LabelEncoder()\n X_train[column] = categorical_convert.fit_transform(X_train[column])\n encoders[column] = categorical_convert\n \n# train the Random Forest algorithm\nrf = RandomForestClassifier(n_estimators = 100)\nrf = rf.fit(X_train, y_train)\n\n# train the Extra Trees algorithm\net = ExtraTreesClassifier(n_estimators = 100)\net = et.fit(X_train, y_train)\n\n# save preprocessing objects and RF algorithm\njoblib.dump(train_mode, \"./train_mode.joblib\", compress=True)\njoblib.dump(encoders, \"./encoders.joblib\", compress=True)\njoblib.dump(rf, \"./random_forest.joblib\", compress=True)\njoblib.dump(et, \"./extra_trees.joblib\", compress=True)\n",
"{'age': 31.0, 'workclass': 'Private', 'fnlwgt': 121124, 'education': 'HS-grad', 'education-num': 9.0, 'marital-status': 'Married-civ-spouse', 'occupation': 'Prof-specialty', 'relationship': 'Husband', 'race': 'White', 'sex': 'Male', 'capital-gain': 0.0, 'capital-loss': 0.0, 'hours-per-week': 40.0, 'native-country': 'United-States'}\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0bf2c36431a53c194586faf7c67199817d94e13 | 5,304 | ipynb | Jupyter Notebook | gui.py.ipynb | chetanshivanand/Traffic-sign-detection-using-cnn | df0719a0189d69e078bee1008505300251dbbd06 | [
"MIT"
] | null | null | null | gui.py.ipynb | chetanshivanand/Traffic-sign-detection-using-cnn | df0719a0189d69e078bee1008505300251dbbd06 | [
"MIT"
] | null | null | null | gui.py.ipynb | chetanshivanand/Traffic-sign-detection-using-cnn | df0719a0189d69e078bee1008505300251dbbd06 | [
"MIT"
] | null | null | null | 39.58209 | 896 | 0.583899 | [
[
[
"import tkinter as tk\nfrom tkinter import filedialog\nfrom tkinter import *\nfrom PIL import ImageTk, Image\n\n# Load your model\n\nmodel = load_model('Saved_model.h5') # Path to your model\n\n# Initialise GUI\ntop=tk.Tk()\n# Window dimensions (800x600)\ntop.geometry('800x600')\n# Window title\ntop.title('Traffic sign classification')\n# Window background color\ntop.configure(background='#CDCDCD')\n# Window label\nlabel=Label(top,background='#CDCDCD', font=('arial',15,'bold'))\n# Sign image\nsign_image = Label(top)\n\n\n# Function to classify image\ndef classify(file_path):\n global label_packed\n # Open the image file path\n image = Image.open(file_path)\n # Resize the image\n image = image.resize((30,30))\n # Inserts a new axis that will appear at the axis position in the expanded array shape\n image = np.expand_dims(image, axis=0)\n # Convert to numpy array\n image = np.array(image)\n # Make prediction\n pred = model.predict_classes([image])[0]\n sign = classes[pred]\n print(sign)\n label.configure(foreground='#011638', text=sign) \n \n# Function to show the \"classify\" button\ndef show_classify_button(file_path):\n # Create the button\n classify_b=Button(top,text=\"Classify Image\",command=lambda: classify(file_path),padx=10,pady=5)\n # Configure button colors\n classify_b.configure(background='#364156', foreground='white',font=('arial',10,'bold'))\n # Configure button place (location)\n classify_b.place(relx=0.79,rely=0.46)\n \n# Function to upload image\ndef upload_image():\n try:\n # Path of the image\n file_path=filedialog.askopenfilename()\n # Open file path\n uploaded=Image.open(file_path)\n uploaded.thumbnail(((top.winfo_width()/2.25),(top.winfo_height()/2.25)))\n im=ImageTk.PhotoImage(uploaded)\n sign_image.configure(image=im)\n sign_image.image=im\n label.configure(text='')\n show_classify_button(file_path)\n except:\n pass\n \n# Create \"Upload\" button\nupload=Button(top,text=\"Upload an image\",command=upload_image,padx=10,pady=5)\n# \"Upload\" button colors and font\nupload.configure(background='#364156', foreground='white',font=('arial',10,'bold'))\n# Button location\nupload.pack(side=BOTTOM,pady=50)\nsign_image.pack(side=BOTTOM,expand=True)\nlabel.pack(side=BOTTOM,expand=True)\n# Window title text\nheading = Label(top, text=\"Know Your Traffic Sign\",pady=20, font=('arial',20,'bold'))\n# Window colors\nheading.configure(background='#CDCDCD',foreground='#364156')\nheading.pack()\ntop.mainloop()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0bf388174019a7aedd74b4f858b30ef809a98f2 | 15,391 | ipynb | Jupyter Notebook | notebooks/medical-federated-learning-program/data-owners-full/02-data-owners-setup-domain.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | null | null | null | notebooks/medical-federated-learning-program/data-owners-full/02-data-owners-setup-domain.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | null | null | null | notebooks/medical-federated-learning-program/data-owners-full/02-data-owners-setup-domain.ipynb | Noob-can-Compile/PySyft | 156cf93489b16dd0205b0058d4d23d56b3a91ab8 | [
"Apache-2.0"
] | null | null | null | 26.264505 | 302 | 0.577545 | [
[
[
"# Notebook 2: Setup Domain",
"_____no_output_____"
],
[
"<img src=\"img/tab_start.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"Now that we have sshed into our virtual machine (as described in the 👈🏿 notebook [01-data-owners-login.ipynb](01-data-owners-login.ipynb)), let's move on to provision our Domain node.\n\n**Note:** These steps are designed to work on a Ubuntu 20.04 VM however the steps for other linux versions or other OSes are very similar.",
"_____no_output_____"
],
[
"## Dependencies",
"_____no_output_____"
],
[
"PyGrid Domains require the following software dependencies to run:",
"_____no_output_____"
],
[
"- Docker (kubernetes is also available)\n- Python 3.7+\n- Git",
"_____no_output_____"
],
[
"## HAGrid CLI tool",
"_____no_output_____"
],
[
"We have a python command-line tool called `hagrid` which is capable of creating VMs as well as provisioning them.\nHowever unfortunately a fresh Ubuntu 20.04 box does not include `pip` to install HAGrid.",
"_____no_output_____"
],
[
"## Step 1: Installing HAGrid",
"_____no_output_____"
],
[
"First, lets change to the `om` user.",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```bash\nsudo su - om\n```",
"_____no_output_____"
],
[
"A fresh install of Ubuntu 20.04 does not come with `pip` installed, so lets quickly install!",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\nsudo apt update && sudo apt install python3-pip\n```",
"_____no_output_____"
],
[
"Once we have pip we can install HAGrid with `pip`.",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\npip install -U hagrid\n```",
"_____no_output_____"
],
[
"<img src=\"img/tab_info.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"The first time you try to run HAGrid you might get an error `hagrid: command not found`, this usually means that the directory pip installed the HAGrid `console_scripts` is not in your path yet because you just installed pip. On Linux you can simply source the .profile file to update your paths:",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\n. ~/.profile\n```",
"_____no_output_____"
],
[
"## Step 2: Test HAGrid",
"_____no_output_____"
],
[
"Once HAGrid has installed you can simply type `hagrid` on the terminal to check if it is working.",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```bash\nhagrid\n```\n\nYou should see the following table.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"HAGrid checks if all dependencies required for provisioning a Domain or Network node are installed.\n\n**Note**: We can see that *Docker* is already installed to speed up the demo. However HAGrid can install *Docker* for you when we provision with the `localhost` target.",
"_____no_output_____"
],
[
"## Step 3: Provisioning the Domain Node",
"_____no_output_____"
],
[
"You can now use HAGrid to provision the Domain node. Note this can be done outside the box or inside the box or even on your local machine, however the commands vary slightly.",
"_____no_output_____"
],
[
"<img src=\"img/tab_info.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"The HAGrid launch command follows the following format:\n```\n hagrid launch <node_type> <node_name> to <target_host>\n```\n**node_type**: In our case the default is implicitly a `domain` <br />\n**target_host**: Since we are already logged into the VM we use `docker` <br />\n**node_name**: is the name of the Domain and is an optional argument. If you don't specify a unique <node_name>, then HAGrid generates one automatically <br />\n**--tag=latest**: this flag ensures we use the `latest` pre-built containers from `dockerhub` <br />\n**--tail=false**: this flag launches everything in the background <br />\n\n**NOTE**: You can run almost any `hagrid launch` command with `--cmd=true` and it will do a dry run, print commands instead of running them.",
"_____no_output_____"
],
[
"Since we're already logged into the VM and just want to provision our domain node, we will choose target to be `docker`.",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\nhagrid launch to docker:80 --tag=latest --tail=false\n```",
"_____no_output_____"
],
[
"When HAGrid is finished you should see all containers printing `Started` and the command prompt again.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"## Step 4: Check if the Domain is up",
"_____no_output_____"
],
[
"The containers take a few moments to start up. To check if things are running we can:\n- ask HAGrid\n- check containers with ctop",
"_____no_output_____"
],
[
"### Ask HAGrid",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\nhagrid check --wait\n```",
"_____no_output_____"
],
[
"<img src=\"img/tab_info.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"When you first run this the API endpoint may not be finished starting, with the `--wait` flag hagrid will keep checking until they are all green.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"### Step 5: Check in your Browser",
"_____no_output_____"
],
[
"<img src=\"img/tab_run.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
]
],
[
[
"# autodetect the host_ip\nfrom utils import auto_detect_domain_host_ip\n\nDOMAIN_HOST_IP = auto_detect_domain_host_ip()",
"_____no_output_____"
]
],
[
[
"<img src=\"img/tab_do.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
]
],
[
[
"print(\"Your Domain's Web Portal should now be ready:\\n\")\nprint(\"👇🏽 Click here to see PyGridUI\")\nprint(f\"http://{DOMAIN_HOST_IP}\")",
"_____no_output_____"
]
],
[
[
"To login into the your domain you will need the following credentials:\n\n- email address: We will use the email ([email protected]) set on domain creation\n- password: We will use the password (changethis) set on domain creation",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"<img src=\"img/tab_finish.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"🙌🏽 Notebook Complete!\n\n🖐 Raise your hand in Zoom\n\n👉🏽 Then, click to continue to Notebook 3: [03-data-owners-upload-dataset.ipynb](03-data-owners-upload-dataset.ipynb)",
"_____no_output_____"
],
[
"<img src=\"img/tab_optional.png\" alt=\"tab\" style=\"width: 100px; margin:0;\" />",
"_____no_output_____"
],
[
"### Inspect Containers",
"_____no_output_____"
],
[
"If you wish to view the individual containers and their logs there a great utility called `ctop` which allows you to work with docker containers on the command line easily.\n\nWe have pre-installed it for this demo so you can take it for a spin.",
"_____no_output_____"
],
[
"<img src=\"img/tab_copy_run.png\" alt=\"tab\" style=\"width: 123px; margin:0;\" />",
"_____no_output_____"
],
[
"```shell\nsudo ctop\n```",
"_____no_output_____"
],
[
"You can use the arrow keys, enter and letter shortcuts to navigate around. You need to just press `q` (small letter q) to quit or exit from `ctop` session.\n\nFor information on `ctop` you can visit its [Github](https://github.com/bcicen/ctop) repo.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0bf48651653e1db4c330db45cfc7b6fd27ad386 | 21,296 | ipynb | Jupyter Notebook | news_analyze.ipynb | Eg-nce/NLP-with-Reddit-r-news-page | 226c7d34c04fec09ed139fb063a7f14ac28d2ef5 | [
"MIT"
] | null | null | null | news_analyze.ipynb | Eg-nce/NLP-with-Reddit-r-news-page | 226c7d34c04fec09ed139fb063a7f14ac28d2ef5 | [
"MIT"
] | null | null | null | news_analyze.ipynb | Eg-nce/NLP-with-Reddit-r-news-page | 226c7d34c04fec09ed139fb063a7f14ac28d2ef5 | [
"MIT"
] | null | null | null | 39.291513 | 6,232 | 0.647164 | [
[
[
"# Importing libraries",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport random\nimport numpy as np\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.utils import to_categorical\nfrom tensorflow.keras import regularizers\nimport matplotlib.pyplot as plt\nfrom itertools import combinations \nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import OneHotEncoder\nimport re\nfrom sklearn.model_selection import train_test_split\nfrom gensim.models import Word2Vec\nimport tensorflow as tf\nimport csv\nfrom keras.layers import Dense, Dropout, LSTM, Bidirectional\nimport matplotlib.pyplot as plt\nimport json\nfrom operator import itemgetter \n",
"_____no_output_____"
]
],
[
[
"# Data loading and cleaning ",
"_____no_output_____"
]
],
[
[
"Reddit_df = pd.read_csv(\"/home/ege/selenium/r_news.csv\" )\ncontractions = pd.read_csv(\"/home/ege/Desktop/kaggle/NLP/archive/contractions.csv\")\nReddit_df.dropna( inplace = True)\nReddit_df = Reddit_df[['r/news']]\nReddit_df.drop_duplicates(inplace = True)",
"_____no_output_____"
],
[
"Reddit_df.head()",
"_____no_output_____"
],
[
"with open('/home/ege/Desktop/kaggle/NLP/word_index.json', 'r') as fp:\n tokens = json.load(fp)\n",
"_____no_output_____"
]
],
[
[
"# Defining preprocessing functions",
"_____no_output_____"
]
],
[
[
"def df_to_dict(data):\n dictionary = dict()\n col_names = data.columns\n for _ in range(data.shape[0]):\n dictionary[data[col_names[0]].iloc[_]] = data[col_names[1]].iloc[_]\n return dictionary ",
"_____no_output_____"
],
[
"def lower(data):\n columns = data.columns\n for col in columns:\n data[col] = data[col].apply(str)\n data[col] = data[col].str.lower()\n return data ",
"_____no_output_____"
],
[
"# Defining regex patterns.\nurlPattern = r\"((http://)[^ ]*|(https://)[^ ]*|(www\\.)[^ ]*)\"\nuserPattern = '@[^\\s]+'\nhashtagPattern = '#[^\\s]+'\nalphaPattern = \"[^a-z0-9<>]\"\nsequencePattern = r\"(.)\\1\\1+\"\nseqReplacePattern = r\"\\1\\1\"\n\n# Defining regex for emojis\nsmileemoji = r\"[8:=;]['`\\-]?[)d]+\"\nsademoji = r\"[8:=;]['`\\-]?\\(+\"\nneutralemoji = r\"[8:=;]['`\\-]?[\\/|l*]\"\nlolemoji = r\"[8:=;]['`\\-]?p+\"\n\ndef preprocess_data(news):\n\n # Replace all URls with '<url>'\n news = re.sub(urlPattern,'<url>',news )\n # Replace @USERNAME to '<user>'.\n news = re.sub(userPattern,'<user>', news)\n \n # Replace 3 or more consecutive letters by 2 letter.\n news = re.sub(sequencePattern, seqReplacePattern, news)\n\n # Replace all emojis.\n news = re.sub(r'<3', '<heart>', news)\n news = re.sub(smileemoji, '<smile>', news)\n news = re.sub(sademoji, '<sadface>', news)\n news = re.sub(neutralemoji, '<neutralface>', news)\n news = re.sub(lolemoji, '<lolface>', news)\n\n for contraction, replacement in contractions_dict.items():\n news = news.replace(contraction, replacement)\n\n # Remove non-alphanumeric and symbols\n news = re.sub(alphaPattern, ' ', news)\n\n # Adding space on either side of '/' to seperate words (After replacing URLS).\n news = re.sub(r'/', ' / ', news)\n return news",
"_____no_output_____"
],
[
"def split_func(df):\n data = df.values.tolist()\n clean_list =[]\n for sentences in data:\n clean_list.append(sentences[0].split())\n return clean_list ",
"_____no_output_____"
],
[
"def tokenizer_based_on_json(data, tokens):\n sentences = 0\n word = 0\n while True:\n try:\n data[sentences][word] = tokens[split_list[sentences][word]]\n word += 1\n if word == len(data[sentences]):\n sentences += 1\n word = 0\n if sentences == len(data):\n break\n except:\n data[sentences][word] = \"<oov>\"\n return data",
"_____no_output_____"
]
],
[
[
"# Data preprocessing",
"_____no_output_____"
]
],
[
[
"Reddit_df = lower(Reddit_df)\ncontractions_dict = df_to_dict(contractions)",
"_____no_output_____"
],
[
"Reddit_df['r/news'] = Reddit_df['r/news'].apply(preprocess_data)",
"_____no_output_____"
],
[
"split_list = split_func(Reddit_df)",
"_____no_output_____"
],
[
"tokenized_list = tokenizer_based_on_json(split_list , tokens)",
"_____no_output_____"
],
[
"news_padded = pad_sequences(tokenized_list , maxlen= max([len(x) for x in tokenized_list]) , \n padding= 'post', truncating= 'post')",
"_____no_output_____"
]
],
[
[
"# Importing saved tensorflow model",
"_____no_output_____"
]
],
[
[
"model = tf.keras.models.load_model(\"/home/ege/saved_model/my_model\")",
"_____no_output_____"
]
],
[
[
"# Predictions",
"_____no_output_____"
]
],
[
[
"predictions = model.predict(news_padded)",
"WARNING:tensorflow:Model was constructed with shape (None, 368) for input Tensor(\"embedding_input_4:0\", shape=(None, 368), dtype=float32), but it was called on an input with incompatible shape (None, 28).\n"
],
[
"predictions_list = []\nfor pred in predictions:\n if pred > 0.7:\n predictions_list.append(1)\n else:\n predictions_list.append(0)\n ",
"_____no_output_____"
]
],
[
[
"# Exploring predictions",
"_____no_output_____"
]
],
[
[
"print(\"negative news : \" , predictions_list.count(0) , \"positive news \" , predictions_list.count(1))",
"negative news : 230 positive news 28\n"
],
[
"fig = plt.figure()\nax = fig.add_axes([0,0,1,1])\nlabels = ['negative news', 'positive news']\nnumbers = [ predictions_list.count(0) , predictions_list.count(1)]\nax.bar(labels,numbers ,color = 'rg')\nplt.show()",
"<ipython-input-166-e1562d8ba15d>:5: MatplotlibDeprecationWarning: Using a string of single character colors as a color sequence is deprecated. Use an explicit list instead.\n ax.bar(labels,numbers ,color = 'rg')\n"
],
[
"Poz_index = [i for i, x in enumerate(predictions_list) if x == 1]\nNeg_index = [i for i, x in enumerate(predictions_list) if x == 0]",
"_____no_output_____"
],
[
"positive_news = []\nnegative_news = []\nfor poz in Poz_index:\n positive_news.append(Reddit_df['r/news'].iloc[poz])\nfor neg in Neg_index:\n negative_news.append(Reddit_df['r/news'].iloc[neg]) ",
"_____no_output_____"
],
[
"number_of_sentences = 7\nprint(\"-- Some negative news labeled by model -- \" , \"\\n\\n\")\nfor sen in range(number_of_sentences):\n print(sen+1 , negative_news[random.randrange(0,len(negative_news))],\".\")\n ",
"-- Some negative news labeled by model -- \n\n\n1 ontario confirms canada s 1st known cases of uk coronavirus variant .\n2 highly suspicious fire at black church in massachusetts being investigated as arson .\n3 some overflowing la hospitals resort to putting patients in gift shops conference rooms .\n4 city of edmonton sues former employees accused of stealing 1 6m in false invoicing scheme .\n5 russia gives kremlin critic navalny an ultimatum return immediately or face jail .\n6 shelly beach rescue fisherman saves kayaker helicopter sent to scene .\n7 seafarer exemption from new covid restrictions recommended by european commission .\n"
],
[
"number_of_sentences = 7\nprint(\"-- Some positive news labeled by model -- \", \"\\n\\n\")\nfor sen in range(number_of_sentences):\n print( sen+1 , positive_news[random.randrange(0,len(positive_news))],\".\")\n ",
"-- Some positive news labeled by model -- \n\n\n1 l a coronavirus update county surpasses 7 00 covid 19 hospitalizations for first time .\n2 beverly hills eatery reportedly planning indoor speakeasy nye party draws police interest .\n3 azerbaijani energy ministry acwa power company sign agreement on wind farm project .\n4 amazon to acquire wondery in podcast push .\n5 brazil vice president tests positive for coronavirus .\n6 pentagon sends 7 00 gallons of eggnog and 21 00 pounds of ham to u s troops around the world .\n7 archaeologists uncover ancient street food shop in pompeii .\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf5285a252faf530dfbd8ecfe9748be15795ec | 15,068 | ipynb | Jupyter Notebook | intermediate-lessons/geospatial-data/gd-1.ipynb | mohsenumn/lessons | a85aa31a76f1da422bfd7fd01eb6b40c82dcc989 | [
"BSD-3-Clause"
] | null | null | null | intermediate-lessons/geospatial-data/gd-1.ipynb | mohsenumn/lessons | a85aa31a76f1da422bfd7fd01eb6b40c82dcc989 | [
"BSD-3-Clause"
] | null | null | null | intermediate-lessons/geospatial-data/gd-1.ipynb | mohsenumn/lessons | a85aa31a76f1da422bfd7fd01eb6b40c82dcc989 | [
"BSD-3-Clause"
] | null | null | null | 34.013544 | 576 | 0.608707 | [
[
[
"# Intermediate Lesson on Geospatial Data \n\n## Data, Information, Knowledge and Wisdom\n\n<strong>Lesson Developers:</strong> Jayakrishnan Ajayakumar, Shana Crosson, Mohsen Ahmadkhani\n\n#### Part 1 of 5",
"_____no_output_____"
]
],
[
[
"# This code cell starts the necessary setup for Hour of CI lesson notebooks.\n# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.\n# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.\n# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience\n# This is an initialization cell\n# It is not displayed because the Slide Type is 'Skip'\n\nfrom IPython.display import HTML, IFrame, Javascript, display\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\nfrom ipywidgets import Layout\n\nimport getpass # This library allows us to get the username (User agent string)\n\n# import package for hourofci project\nimport sys\nsys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)\n# sys.path.append('supplementary')\nimport hourofci\ntry:\n import os\n os.chdir('supplementary')\nexcept:\n pass\n\n# load javascript to initialize/hide cells, get user agent string, and hide output indicator\n# hide code by introducing a toggle button \"Toggle raw code\"\nHTML(''' \n <script type=\"text/javascript\" src=\\\"../../supplementary/js/custom.js\\\"></script>\n \n <style>\n .output_prompt{opacity:0;}\n </style>\n \n <input id=\"toggle_code\" type=\"button\" value=\"Toggle raw code\">\n''')",
"_____no_output_____"
]
],
[
[
"## Reminder\n<a href=\"#/slide-2-0\" class=\"navigate-right\" style=\"background-color:blue;color:white;padding:8px;margin:2px;font-weight:bold;\">Continue with the lesson</a>\n\n<br>\n</br>\n<font size=\"+1\">\n\nBy continuing with this lesson you are granting your permission to take part in this research study for the Hour of Cyberinfrastructure: Developing Cyber Literacy for GIScience project. In this study, you will be learning about cyberinfrastructure and related concepts using a web-based platform that will take approximately one hour per lesson. Participation in this study is voluntary.\n\nParticipants in this research must be 18 years or older. If you are under the age of 18 then please exit this webpage or navigate to another website such as the Hour of Code at https://hourofcode.com, which is designed for K-12 students.\n\nIf you are not interested in participating please exit the browser or navigate to this website: http://www.umn.edu. Your participation is voluntary and you are free to stop the lesson at any time.\n\nFor the full description please navigate to this website: <a href=\"../../gateway-lesson/gateway/gateway-1.ipynb\">Gateway Lesson Research Study Permission</a>.\n\n</font>",
"_____no_output_____"
],
[
"## Do you believe in Global Warming???\nWhat if I ask you this question and throw some numbers at you?!\n",
"_____no_output_____"
]
],
[
[
"from ipywidgets import Button, HBox, VBox,widgets,Layout\nfrom IPython.display import display\nimport pandas as pd\ntable1 = pd.read_csv('databases/antartica_mass.csv').sample(frac = 1)\ntable1['0'] = pd.to_datetime(table1['0'])\ntable2 = pd.read_csv('databases/global_temperature.csv').sample(frac = 1)\ntable2['0'] = pd.to_datetime(table2['0'],format='%Y')\ntable3 = pd.read_csv('databases/carbon_dioxide.csv').sample(frac = 1)\ntable3['2'] = pd.to_datetime(table3['2'])\ntable1_disp = widgets.Output()\ntable2_disp = widgets.Output()\ntable3_disp = widgets.Output()\nwith table1_disp:\n display(table1)\nwith table2_disp:\n display(table2)\nwith table3_disp:\n display(table3)\nout=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])\nout",
"_____no_output_____"
]
],
[
[
"These are just symbols and numbers (of course we can identify the date as we have seen that pattern before) and doesn't convey anything.\n\nThis is what we essentially call **Data (or raw Data)**.",
"_____no_output_____"
],
[
"## What is Data?",
"_____no_output_____"
],
[
">**Data is a collection of facts** in a **raw or unorganized form** such as **numbers or characters**.\n\nWithout **context** data has no value!!\n\nNow what if we are provided with the **information** about **what** these symbols represent, **who** collected the data, **where** is this data collected from and **when** was the data collected.",
"_____no_output_____"
],
[
"## What is Information (Data+Context)?",
"_____no_output_____"
],
[
">**Information** is a **collection of data** that is **arranged and ordered in a consistent way**. Data in the form of information becomes **more useful because storage and retrieval are easy**.\n\nFor our sample datasets, what if we know about the **\"what, who, where, and when\"** questions. For example, if we are provided with the information that these datasets represent the change in Antartic Ice mass in giga tonnes, the temperature anomaly across the globe in celsius, and the carbon dioxide content in the atmosphere as parts per million, we can try to deduce patterns from the data. ",
"_____no_output_____"
]
],
[
[
"table1.columns = [\"Time\", \"Antartic_Mass(Gt)\"]\ntable2.columns = [\"Time\", \"Temperature_Anomaly(C)\"]\ntable3.columns = [\"Time\", \"Carbon_Dioxide(PPM)\"]\ntable1_disp = widgets.Output()\ntable2_disp = widgets.Output()\ntable3_disp = widgets.Output()\nwith table1_disp:\n display(table1)\nwith table2_disp:\n display(table2)\nwith table3_disp:\n display(table3)\nout=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])\nout",
"_____no_output_____"
]
],
[
[
"We can do more **processing** on the data and can convert it into structured forms. For example we can **sort** these datasets to look for temporal changes",
"_____no_output_____"
]
],
[
[
"table1_disp = widgets.Output()\ntable2_disp = widgets.Output()\ntable3_disp = widgets.Output()\nwith table1_disp:\n display(table1.sort_values(by='Time'))\nwith table2_disp:\n display(table2.sort_values(by='Time'))\nwith table3_disp:\n display(table3.sort_values(by='Time'))\nout=HBox([VBox([table1_disp],layout = Layout(margin='0 100px 0 0')),VBox([table2_disp],layout = Layout(margin='0 100px 0 0')),VBox([table3_disp])])\nout",
"_____no_output_____"
]
],
[
[
"After sorting we can see that, with Time, there is a depletion in Antratic mass and increase in temperature anomaly as well as carbon dioxide content in the atmosphere.\n\nWe can also **visualize** these datasets (as a picture is worth 1000 words!!) to bolster our arguments",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfig, axes = plt.subplots(1,3)\nfig.set_figheight(8)\nfig.set_figwidth(20)\ntable1.sort_values(by='Time').plot(x='Time',y='Antartic_Mass(Gt)',ax=axes[0]);\ntable2.sort_values(by='Time').plot(x='Time',y='Temperature_Anomaly(C)',ax=axes[1]);\ntable3.sort_values(by='Time').plot(x='Time',y='Carbon_Dioxide(PPM)',ax=axes[2]);\nplt.show()",
"_____no_output_____"
]
],
[
[
"By asking relevant questions about ‘who’, ‘what’, ‘when’, ‘where’, etc., we can derive valuable information from the data and make it more useful for us.",
"_____no_output_____"
],
[
"## What is Knowledge? (Patterns from Information)",
"_____no_output_____"
],
[
">**Knowledge** is the **appropriate collection of information**, such that it's **intent is to be useful**.\n\nKnowledge deals with the question of **\"How\"**.\n\n**\"How\"** is the **information, derived from the collected data, relevant to our goals?**\n\n**\"How\"** are the **pieces of this information connected to other pieces** to add more meaning and value?\n\nSo how do we find this connection between our pieces of information. For example now we have the information that with time there is a decrease in Antartic Ice mass and a corresponding increase in Temperature Anomaly and the Carbon Dioxide content in the atmosphere. Can we prove that there is a relationship? This is where the simulation and model building skills come into play. Machine Learning (which has been a buzz word for long time) is used to answer such questions from large sets of data.\n",
"_____no_output_____"
],
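[
"To make the \"How are the pieces connected?\" question concrete, here is a minimal sketch (an illustration only; it assumes `table1`, `table2`, and `table3` from the earlier cells are in scope, that their `Time` columns are numeric, and that pandas is available):\n\n```python\nimport pandas as pd\n\n# Align the three series on their nearest Time values, then inspect pairwise correlations\nmerged = table1.sort_values('Time')\nfor other in (table2, table3):\n    merged = pd.merge_asof(merged, other.sort_values('Time'), on='Time')\nprint(merged.corr())\n```",
"_____no_output_____"
],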
[
"## What is Wisdom?(Acting up on Knowledge)",
"_____no_output_____"
],
[
">**Wisdom** is the **ability to select the best way to reach the desired outcome based on knowledge**.\n\nSo it is a very subjective concept. In our example, we now have the knowledge (from developing climate models) that increases in atmospheric carbon dioxide are responsible for about two-thirds of the total energy imbalance that is causing Earth's temperature to rise, and we also know that a rise in temperature leads to melting of ice mass, which is a big threat to Earth's biodiversity. So what are we going to do about that? What is the **best way** to do it? **Wisdom** here is **acting upon this knowledge** regarding carbon emissions and finding ways to reduce them.",
"_____no_output_____"
],
[
"## The DIKW pyramid\nWe can essentially represent these concepts in a Pyramid with Data at the bottom and Wisdom at the top. \n ",
"_____no_output_____"
],
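[
"As a small illustration, such a pyramid can be sketched with matplotlib (the layout below is an assumption; only the four DIKW levels come from the text above):\n\n```python\nimport matplotlib.pyplot as plt\n\nlevels = ['Data', 'Information', 'Knowledge', 'Wisdom']  # bottom to top\nfig, ax = plt.subplots(figsize=(5, 4))\nfor i, label in enumerate(levels):\n    half_width = (len(levels) - i) / (2.0 * len(levels))  # narrower towards the top\n    ax.fill_between([0.5 - half_width, 0.5 + half_width], i, i + 1, color=plt.cm.Blues(0.3 + 0.15 * i))\n    ax.text(0.5, i + 0.5, label, ha='center', va='center')\nax.axis('off')\nplt.show()\n```",
"_____no_output_____"
],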
[
"So where does **Database** fits into this Pyrmaid (or model)?\n\nWe are going to look at that in the upcoming chapters",
"_____no_output_____"
],
[
"#### Resources\nhttps://www.ontotext.com/knowledgehub/fundamentals/dikw-pyramid/\nhttps://www.systems-thinking.org/dikw/dikw.htm\nhttps://www.csestack.org/dikw-pyramid-model-difference-between-data-information/\nhttps://developer.ibm.com/articles/ba-data-becomes-knowledge-1/\nhttps://www.certguidance.com/explaining-dikw-hierarchy/\nhttps://www.spreadingscience.com/our-approach/diffusion-of-innovations-in-a-community/1-the-dikw-model-of-innovation/\nhttps://climate.nasa.gov/vital-signs/ice-sheets/\nhttps://climate.nasa.gov/vital-signs/global-temperature/\nhttps://climate.nasa.gov/vital-signs/carbon-dioxide/",
"_____no_output_____"
],
[
"### Click on the link below to move on!\n<br>\n\n<font size=\"+1\"><a style=\"background-color:blue;color:white;padding:12px;margin:10px;font-weight:bold;\" href=\"gd-2.ipynb\">Click here to go to the next notebook.</a></font>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0bf5c7ed29f5fb75069faac6df9c81e1136d6b6 | 120,714 | ipynb | Jupyter Notebook | tfFlowers_demo.ipynb | tfrizza/DALL-E-tf | 266eb5a2e70bbbff741f041e239cb4a3e81c034e | [
"MIT"
] | null | null | null | tfFlowers_demo.ipynb | tfrizza/DALL-E-tf | 266eb5a2e70bbbff741f041e239cb4a3e81c034e | [
"MIT"
] | null | null | null | tfFlowers_demo.ipynb | tfrizza/DALL-E-tf | 266eb5a2e70bbbff741f041e239cb4a3e81c034e | [
"MIT"
] | null | null | null | 68.2 | 5,185 | 0.493928 | [
[
[
"<a href=\"https://colab.research.google.com/github/tfrizza/DALL-E-tf/blob/main/tfFlowers_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"%pip install -q tensorflow_addons",
"\u001b[?25l\r\u001b[K |▌ | 10kB 20.4MB/s eta 0:00:01\r\u001b[K |█ | 20kB 14.1MB/s eta 0:00:01\r\u001b[K |█▍ | 30kB 9.7MB/s eta 0:00:01\r\u001b[K |█▉ | 40kB 8.5MB/s eta 0:00:01\r\u001b[K |██▎ | 51kB 5.1MB/s eta 0:00:01\r\u001b[K |██▉ | 61kB 5.7MB/s eta 0:00:01\r\u001b[K |███▎ | 71kB 5.9MB/s eta 0:00:01\r\u001b[K |███▊ | 81kB 6.3MB/s eta 0:00:01\r\u001b[K |████▏ | 92kB 5.9MB/s eta 0:00:01\r\u001b[K |████▋ | 102kB 6.4MB/s eta 0:00:01\r\u001b[K |█████▏ | 112kB 6.4MB/s eta 0:00:01\r\u001b[K |█████▋ | 122kB 6.4MB/s eta 0:00:01\r\u001b[K |██████ | 133kB 6.4MB/s eta 0:00:01\r\u001b[K |██████▌ | 143kB 6.4MB/s eta 0:00:01\r\u001b[K |███████ | 153kB 6.4MB/s eta 0:00:01\r\u001b[K |███████▌ | 163kB 6.4MB/s eta 0:00:01\r\u001b[K |████████ | 174kB 6.4MB/s eta 0:00:01\r\u001b[K |████████▍ | 184kB 6.4MB/s eta 0:00:01\r\u001b[K |████████▉ | 194kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████▎ | 204kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████▉ | 215kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████▎ | 225kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████▊ | 235kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████▏ | 245kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████▋ | 256kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████▏ | 266kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████▋ | 276kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████ | 286kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████▌ | 296kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████ | 307kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████▍ | 317kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████ | 327kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████▍ | 337kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 348kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████▎ | 358kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████▊ | 368kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████▎ | 378kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████▊ | 389kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████▏ | 399kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████▋ | 409kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████ | 419kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████▋ | 430kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████ | 440kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████▌ | 450kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████ | 460kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████▍ | 471kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 481kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████▍ | 491kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████▉ | 501kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████▎ | 512kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████▊ | 522kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████▎ | 532kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████▊ | 542kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████▏ | 552kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 563kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████ | 573kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████▋ | 583kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 593kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████▌ | 604kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 614kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████▍ | 624kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████▉ | 634kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 645kB 6.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▉ | 655kB 6.4MB/s eta 
0:00:01\r\u001b[K |██████████████████████████████▎ | 665kB 6.4MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▊ | 675kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▏| 686kB 6.4MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 696kB 6.4MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 706kB 6.4MB/s \n\u001b[?25h"
],
[
"!git clone https://github.com/tfrizza/DALL-E-tf.git",
"Cloning into 'DALL-E-tf'...\nremote: Enumerating objects: 75, done.\u001b[K\nremote: Counting objects: 100% (75/75), done.\u001b[K\nremote: Compressing objects: 100% (57/57), done.\u001b[K\nremote: Total 75 (delta 39), reused 38 (delta 16), pack-reused 0\u001b[K\nUnpacking objects: 100% (75/75), done.\n"
],
[
"%cd DALL-E-tf",
"/content/DALL-E-tf\n"
],
[
"import tensorflow as tf\nfrom tensorflow.keras import Model, mixed_precision\nfrom tensorflow.keras.losses import Loss, MeanSquaredError, MeanAbsoluteError, MSE, MAE\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.optimizers.schedules import PolynomialDecay\nfrom tensorflow.keras.callbacks import Callback\n\nimport tensorflow_probability as tfp\nfrom tensorflow_probability import distributions as tfd\nfrom tensorflow_probability import layers as tfpl\n\nimport tensorflow_datasets as tfds\nfrom tensorflow_addons.optimizers import LAMB, AdamW\n\nfrom dall_e_tf.encoder import dvae_encoder\nfrom dall_e_tf.decoder import dvae_decoder\nfrom dall_e_tf.vae import dVAE\nfrom dall_e_tf.losses import LatentLoss\nfrom dall_e_tf.utils import plot_reconstructions\n\nimport numpy as np\nimport attr\nfrom functools import partial\n\nmixed_precision.set_global_policy('float32')\nAUTOTUNE = tf.data.AUTOTUNE",
"_____no_output_____"
],
[
"try:\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection\n print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])\nexcept ValueError:\n raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')\n\ntf.config.experimental_connect_to_cluster(tpu)\ntf.tpu.experimental.initialize_tpu_system(tpu)\nstrategy = tf.distribute.TPUStrategy(tpu)\nNUM_DEVICES = strategy.num_replicas_in_sync\nprint(\"REPLICAS: \", NUM_DEVICES)",
"Running on TPU ['10.51.112.18:8470']\nWARNING:tensorflow:TPU system grpc://10.51.112.18:8470 has already been initialized. Reinitializing the TPU can cause previously created variables on TPU to be lost.\n"
],
[
"def crop(image):\n y_nonzero, x_nonzero, _ = tf.experimental.numpy.nonzero(image)\n return image[tf.reduce_min(y_nonzero):tf.reduce_max(y_nonzero), tf.reduce_min(x_nonzero):tf.reduce_max(x_nonzero)]\n\ndef preprocess(data, h=128, w=128):\n img = crop(data['image'])\n img = tf.image.resize(img, size=(h,w), antialias=False)\n img /= 255\n return img\n\nGLOBAL_BATCH_SIZE = 16 * NUM_DEVICES\n\ntrain_dataset = tfds.load('tf_flowers', \n split='train', \n shuffle_files=True,\n try_gcs=True\n )\ntrain_dataset = train_dataset.map(preprocess, num_parallel_calls=AUTOTUNE)\\\n .batch(GLOBAL_BATCH_SIZE)\\\n .prefetch(buffer_size=AUTOTUNE)\n\ntrain_dist_dataset = strategy.experimental_distribute_dataset(train_dataset)\ntrain_dataset",
"_____no_output_____"
],
[
"class LatentLoss(Loss):\n\n def call(self, dummy_ground_truth, outputs):\n del dummy_ground_truth\n z_e, z_q = tf.split(outputs, 2, axis=-1)\n vq_loss = tf.reduce_mean(tf.square(tf.stop_gradient(z_e) - z_q))\n commit_loss = tf.reduce_mean(tf.square(z_e - tf.stop_gradient(z_q)))\n return vq_loss + 1.0 * commit_loss",
"_____no_output_____"
],
[
"from tensorflow.python.keras import backend as K\nfrom tensorflow.python.framework import ops\n\n# class TemperatureScheduler(Callback):\n# def __init__(self, schedule, layer_name='gumbel-softmax', verbose=0):\n# super(TemperatureScheduler, self).__init__()\n# self.schedule = schedule\n# self.layer_name = layer_name\n# self.verbose = verbose\n\n# def on_epoch_begin(self, epoch, logs=None):\n# layer = self.model.get_layer(self.layer_name)\n# if not hasattr(layer, '_most_recently_built_distribution'):\n# raise ValueError('Layer must have a \"_most_recently_built_distribution\" attribute.')\n# distrib = layer._most_recently_built_distribution\n# if not hasattr(distrib, 'temperature'):\n# raise ValueError('Distribution must have a \"temperature\" attribute.')\n# # T = float(K.get_value(distrib.temperature))\n# # T = distrib.temperature\n# T = self.schedule(epoch)\n# if not isinstance(T, (ops.Tensor, float, np.float32, np.float64)):\n# raise ValueError('The output of the \"schedule\" function '\n# 'should be float.')\n# if isinstance(T, ops.Tensor) and not T.dtype.is_floating:\n# raise ValueError('The dtype of Tensor should be float')\n# K.set_value(distrib.temperature, K.get_value(T))\n# if self.verbose > 0:\n# print('\\nEpoch %05d: TemperatureScheduler reducing temperature '\n# 'rate to %s.' % (epoch + 1, T))\n\n# def on_epoch_end(self, epoch, logs=None):\n# logs = logs or {}\n# T = self.model.get_layer(self.layer_name)._most_recently_built_distribution.temperature\n# logs['temperature'] = K.get_value(T)\n\n\nclass TemperatureScheduler(Callback):\n def __init__(self, schedule, verbose=False):\n super(TemperatureScheduler, self).__init__()\n self.schedule = schedule\n self.verbose = verbose\n\n def on_epoch_begin(self, epoch, logs=None):\n temperature = self.schedule(epoch)\n # self.model.temperature = temperature\n if self.verbose:\n print(f'Setting temperature to {self.model.temperature}')",
"_____no_output_____"
],
[
"vocab_size = 8192//2\nn_hid = 256//2\n\nclass dVAE(Model):\n def __init__(self, enc, dec, initial_temp=1.0, temp_decay=0.9):\n super(dVAE, self).__init__()\n self.enc = enc\n self.dec = dec\n self.temperature = initial_temp\n self.temp_decay = temp_decay\n\n self.gumbel_softmax = tfpl.DistributionLambda(\n lambda x: tfd.RelaxedOneHotCategorical(temperature=x[0], logits=x[1]) # Gumbel-softmax\n , name='gumbel-softmax'\n )\n\n def call(self, inputs, training=False):\n x, temperature = inputs\n z_e = self.enc(x)\n z_q = self.gumbel_softmax([temperature, z_e])\n\n z_hard = tf.math.argmax(z_e, axis=-1) # non-differentiable\n z_hard = tf.one_hot(z_hard, enc.output.shape[-1], dtype=z_q.dtype)\n\n z = z_q + tf.stop_gradient(z_hard - z_q) # straight-through Gumbel-softmax\n x_rec = self.dec(z)\n latents = tf.stack([z_hard, z_q], -1, name='latent')\n return x_rec, latents, temperature\n \n def train_step(self, x):\n with tf.GradientTape() as tape:\n x_pred, latents, T = self([x, self.temperature], training=True) # Forward pass\n # Compute the loss value\n # (the loss function is configured in `compile()`)\n loss = self.compiled_loss(x, x_pred, regularization_losses=self.losses)\n\n gradients = tape.gradient(loss, self.trainable_variables)\n self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))\n # Update metrics (includes the metric that tracks the loss)\n self.compiled_metrics.update_state(x, x_pred)\n results = {m.name: m.result() for m in self.metrics}\n results['temperature'] = self.temperature\n return results\n\nwith strategy.scope():\n enc = dvae_encoder(group_count=2, n_hid=n_hid, n_blk_per_group=2, input_channels=3, vocab_size=vocab_size, activation='swish')\n dec = dvae_decoder(group_count=2, n_init=n_hid//2, n_hid=n_hid, n_blk_per_group=2, output_channels=3, vocab_size=vocab_size, activation='swish')\n\n vae = dVAE(enc, dec)\n\n epochs = 200\n temp_schedule = PolynomialDecay(0.9, epochs, 1/16, 8) # quadratic decay\n lr_schedule = PolynomialDecay(1e-3, epochs, 1e-4, 0.1) # sqrt decay\n optimizer = AdamW(weight_decay=1e-4, learning_rate=lr_schedule)\n\n def psnr(x1, x2):\n return tf.image.psnr(x1, x2, max_val=1.0)\n vae.compile(loss=['mse', None], optimizer=optimizer, metrics=[psnr])\n\n# vae.build(input_shape=(128,128,128,3))\n# vae.summary(line_length=200)",
"_____no_output_____"
],
[
"vae.fit(train_dataset,\n # validation_data=x_test,\n epochs=1000,\n callbacks=[TemperatureScheduler(temp_schedule, True)]\n )",
"Epoch 1/1000\nSetting temperature to 1.0\n 6/29 [=====>........................] - ETA: 6s - loss: 0.0956 - psnr: 10.5565 - temperature: 1.0000WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0025s vs `on_train_batch_end` time: 3.2613s). Check your callbacks.\n"
],
[
"train_batch = next(iter(train_dataset))\nplot_reconstructions(vae(train_batch[:10])[0], train_batch[:10])",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf699ac2b38e5a83aa8634a7b11793e779b137 | 176,575 | ipynb | Jupyter Notebook | SGD-from0.ipynb | poodarchu/gluon_step_by_step | 5c98a057f1ef0b30dfbe47fa7b6bc7e667e0bb3b | [
"MIT"
] | 1 | 2018-04-03T07:03:01.000Z | 2018-04-03T07:03:01.000Z | SGD-from0.ipynb | poodarchu/gluon_step_by_step | 5c98a057f1ef0b30dfbe47fa7b6bc7e667e0bb3b | [
"MIT"
] | null | null | null | SGD-from0.ipynb | poodarchu/gluon_step_by_step | 5c98a057f1ef0b30dfbe47fa7b6bc7e667e0bb3b | [
"MIT"
] | null | null | null | 564.13738 | 112,976 | 0.941385 | [
[
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sys\nsys.path.append('.')\nimport utils",
"_____no_output_____"
],
[
"def f(x):\n return x * np.cos(np.pi*x)",
"_____no_output_____"
],
[
"utils.set_fig_size(mpl, (4.5, 2.5))\nx = np.arange(-1.0, 2.0, 0.1)\nfig = plt.figure()\nsubplot = fig.add_subplot(111)\nsubplot.annotate('local minimum', xy=(-0.3, -0.25), xytext=(-0.77, -1.0), arrowprops=dict(facecolor='black', shrink=0.05))\nsubplot.annotate('global minimum', xy=(1.1, -0.9), xytext=(0.6, 0.8), arrowprops=dict(facecolor='black', shrink=0.05))\nplt.plot(x, f(x))\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.show()",
"_____no_output_____"
],
[
"x = np.arange(-2.0, 2.0, 0.1)\nfig = plt.figure()\nsubplt = fig.add_subplot(111)\nsubplt.annotate('saddle point', xy=(0, -0.2), xytext=(-0.52, -5.0),\n arrowprops=dict(facecolor='black', shrink=0.05))\nplt.plot(x, x**3)\nplt.xlabel('x')\nplt.ylabel('f(x)')\nplt.show()",
"_____no_output_____"
],
[
"from mpl_toolkits.mplot3d import Axes3D",
"_____no_output_____"
],
[
"fig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nx, y = np.mgrid[-1:1:31j, -1:1:31j]\nz = x**2 - y**2\nax.plot_surface(x, y, z, **{'rstride':1, 'cstride':1, 'cmap':'Greens_r'})\nax.plot([0], [0], [0], 'ro')\nax.view_init(azim=50, elev=20)\nplt.xticks([-1, -0.5, 0, 0.5, 1])\nplt.yticks([-1, -0.5, 0, 0.5, 1])\nax.set_zticks([-1, -0.5, 0, 0.5, 1])\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()",
"_____no_output_____"
],
[
"# mini-batch SGD\ndef sgd(params, lr, batch_size):\n for param in params:\n param[:] = param - lr * param.grad/batch_size",
"_____no_output_____"
],
[
"%config InlineBackend.figure_format='retina'\n%matplotlib inline\nimport mxnet as mx\nfrom mxnet import autograd\nfrom mxnet import gluon\nfrom mxnet import nd\nimport numpy as np\nimport random\nimport sys\nsys.path.append('..')\nimport utils",
"_____no_output_____"
],
[
"# 生成数据集。\nnum_inputs = 2\nnum_examples = 1000\ntrue_w = [2, -3.4]\ntrue_b = 4.2\nX = nd.random_normal(scale=1, shape=(num_examples, num_inputs))\ny = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b\ny += .01 * nd.random_normal(scale=1, shape=y.shape)",
"_____no_output_____"
],
[
"def init_params():\n w = nd.random_normal(scale=1, shape=(num_inputs, 1))\n b = nd.zeros(shape=(1,))\n params = [w, b]\n for param in params:\n param.attach_grad()\n return params\n\ndef linreg(X, w, b):\n return nd.dot(X, w) + b\n\ndef squared_loss(yhat, y):\n return (yhat - y.reshape(yhat.shape))**2/2\n\ndef data_iter(batch_size, num_examples, X, y):\n idx = list(range(num_examples))\n random.shuffle(idx)\n for i in range(0, num_examples, batch_size):\n j = nd.array(idx[i:min(i+batch_size, num_examples)])\n yield X.take(j), y.take(j)\n ",
"_____no_output_____"
],
[
"net = linreg\nsquared_loss = squared_loss\n# mini-batch sgd 时,当迭代周期大于2,lr 在每个迭代周期开始时自乘 0.1 做衰减 decay\ndef optimize(batch_size, lr, num_epochs, log_interval, decay_epoch):\n w, b = init_params()\n y_vals = [squared_loss(net(X, w, b), y).mean().asnumpy()]\n print('batch_size', batch_size)\n for epoch in range(1, num_epochs+1):\n # 学习率自我衰减\n if decay_epoch and epoch > decay_epoch:\n lr *= 0.1\n for batch_i, (features, label) in enumerate(data_iter(batch_size, num_examples, X, y)):\n with autograd.record():\n output = net(features, w, b)\n loss = squared_loss(output, label)\n loss.backward()\n sgd([w, b], lr, batch_size)\n if batch_i*batch_size % log_interval == 0:\n y_vals.append(squared_loss(net(X, w, b), y).mean().asnumpy())\n print('epoch %d, learning rage %f, loss %.4e' %(epoch, lr, y_vals[-1]))\n # 为了便于打印,改变输出形状并转化成numpy数组。\n print('w', w.reshape((1, -1)).asnumpy(), 'b', b.asscalar(), '\\n')\n x_vals = np.linspace(0, num_epochs, len(y_vals), endpoint=True)\n utils.semilogy(x_vals, y_vals, 'epoch', 'loss')",
"_____no_output_____"
],
[
"optimize(batch_size=2, lr=0.2, num_epochs=3, decay_epoch=2, log_interval=10)",
"('batch_size', 2)\nepoch 1, learning rage 0.200000, loss 5.7228e-05\nepoch 2, learning rage 0.200000, loss 5.8437e-05\nepoch 3, learning rage 0.020000, loss 4.8656e-05\n('w', array([[ 1.9996711 , -3.40064144]], dtype=float32), 'b', 4.2009869, '\\n')\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf70c19e6dc8bd6055ae192bbd177c0238339a | 58,684 | ipynb | Jupyter Notebook | Week 9/4.10. Predicting House Prices on Kaggle.ipynb | Hanif-2610/Machine-Learning-Homework-Project | 16257c4b06eeb66ef847352b20e7009ed3ffaaf8 | [
"Unlicense"
] | 1 | 2021-10-05T16:33:51.000Z | 2021-10-05T16:33:51.000Z | Week 9/4.10. Predicting House Prices on Kaggle.ipynb | Hanif-2610/Machine-Learning-Homework-Project | 16257c4b06eeb66ef847352b20e7009ed3ffaaf8 | [
"Unlicense"
] | null | null | null | Week 9/4.10. Predicting House Prices on Kaggle.ipynb | Hanif-2610/Machine-Learning-Homework-Project | 16257c4b06eeb66ef847352b20e7009ed3ffaaf8 | [
"Unlicense"
] | null | null | null | 58,684 | 58,684 | 0.672176 | [
[
[
"!pip install d2l==0.17.2",
"Requirement already satisfied: d2l==0.17.2 in /usr/local/lib/python3.7/dist-packages (0.17.2)\nRequirement already satisfied: numpy==1.18.5 in /usr/local/lib/python3.7/dist-packages (from d2l==0.17.2) (1.18.5)\nRequirement already satisfied: pandas==1.2.2 in /usr/local/lib/python3.7/dist-packages (from d2l==0.17.2) (1.2.2)\nCollecting matplotlib==3.3.3\n Using cached matplotlib-3.3.3-cp37-cp37m-manylinux1_x86_64.whl (11.6 MB)\nRequirement already satisfied: requests==2.25.1 in /usr/local/lib/python3.7/dist-packages (from d2l==0.17.2) (2.25.1)\nRequirement already satisfied: jupyter==1.0.0 in /usr/local/lib/python3.7/dist-packages (from d2l==0.17.2) (1.0.0)\nRequirement already satisfied: ipykernel in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (4.10.1)\nRequirement already satisfied: ipywidgets in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (7.6.5)\nRequirement already satisfied: qtconsole in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (5.2.2)\nRequirement already satisfied: nbconvert in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (5.6.1)\nRequirement already satisfied: notebook in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (5.3.1)\nRequirement already satisfied: jupyter-console in /usr/local/lib/python3.7/dist-packages (from jupyter==1.0.0->d2l==0.17.2) (5.2.0)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in /usr/local/lib/python3.7/dist-packages (from matplotlib==3.3.3->d2l==0.17.2) (3.0.6)\nRequirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from matplotlib==3.3.3->d2l==0.17.2) (7.1.2)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib==3.3.3->d2l==0.17.2) (0.11.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib==3.3.3->d2l==0.17.2) (2.8.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib==3.3.3->d2l==0.17.2) (1.3.2)\nRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas==1.2.2->d2l==0.17.2) (2018.9)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests==2.25.1->d2l==0.17.2) (1.24.3)\nRequirement already satisfied: chardet<5,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests==2.25.1->d2l==0.17.2) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests==2.25.1->d2l==0.17.2) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests==2.25.1->d2l==0.17.2) (2021.10.8)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.1->matplotlib==3.3.3->d2l==0.17.2) (1.15.0)\nRequirement already satisfied: jupyter-client in /usr/local/lib/python3.7/dist-packages (from ipykernel->jupyter==1.0.0->d2l==0.17.2) (5.3.5)\nRequirement already satisfied: tornado>=4.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel->jupyter==1.0.0->d2l==0.17.2) (5.1.1)\nRequirement already satisfied: traitlets>=4.1.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel->jupyter==1.0.0->d2l==0.17.2) (5.1.1)\nRequirement already satisfied: ipython>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from ipykernel->jupyter==1.0.0->d2l==0.17.2) 
(5.5.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (2.6.1)\nRequirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (4.8.0)\nRequirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (1.0.18)\nRequirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (4.4.2)\nRequirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (0.8.1)\nRequirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (57.4.0)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (0.7.5)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=4.0.0->ipykernel->jupyter==1.0.0->d2l==0.17.2) (0.2.5)\nRequirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets->jupyter==1.0.0->d2l==0.17.2) (5.1.3)\nRequirement already satisfied: ipython-genutils~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets->jupyter==1.0.0->d2l==0.17.2) (0.2.0)\nRequirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets->jupyter==1.0.0->d2l==0.17.2) (3.5.2)\nRequirement already satisfied: jupyterlab-widgets>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets->jupyter==1.0.0->d2l==0.17.2) (1.0.2)\nRequirement already satisfied: jupyter-core in /usr/local/lib/python3.7/dist-packages (from nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (4.9.1)\nRequirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.7/dist-packages (from nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (4.3.3)\nRequirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (4.10.0)\nRequirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (21.4.0)\nRequirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (5.4.0)\nRequirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (0.18.0)\nRequirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (3.10.0.2)\nRequirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from importlib-resources>=1.4.0->jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets->jupyter==1.0.0->d2l==0.17.2) (3.7.0)\nRequirement already satisfied: Send2Trash in /usr/local/lib/python3.7/dist-packages (from 
notebook->jupyter==1.0.0->d2l==0.17.2) (1.8.0)\nRequirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from notebook->jupyter==1.0.0->d2l==0.17.2) (0.12.1)\nRequirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from notebook->jupyter==1.0.0->d2l==0.17.2) (2.11.3)\nRequirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.7/dist-packages (from jupyter-client->ipykernel->jupyter==1.0.0->d2l==0.17.2) (22.3.0)\nRequirement already satisfied: ptyprocess in /usr/local/lib/python3.7/dist-packages (from terminado>=0.8.1->notebook->jupyter==1.0.0->d2l==0.17.2) (0.7.0)\nRequirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->notebook->jupyter==1.0.0->d2l==0.17.2) (2.0.1)\nRequirement already satisfied: bleach in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (4.1.0)\nRequirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (0.8.4)\nRequirement already satisfied: testpath in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (0.5.0)\nRequirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (0.3)\nRequirement already satisfied: defusedxml in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (0.7.1)\nRequirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->jupyter==1.0.0->d2l==0.17.2) (1.5.0)\nRequirement already satisfied: webencodings in /usr/local/lib/python3.7/dist-packages (from bleach->nbconvert->jupyter==1.0.0->d2l==0.17.2) (0.5.1)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from bleach->nbconvert->jupyter==1.0.0->d2l==0.17.2) (21.3)\nRequirement already satisfied: qtpy in /usr/local/lib/python3.7/dist-packages (from qtconsole->jupyter==1.0.0->d2l==0.17.2) (2.0.0)\nInstalling collected packages: matplotlib\n Attempting uninstall: matplotlib\n Found existing installation: matplotlib 3.5.1\n Uninstalling matplotlib-3.5.1:\n Successfully uninstalled matplotlib-3.5.1\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed matplotlib-3.3.3\n"
],
[
"# implement several utility functions to facilitate data downloading\nimport hashlib\nimport os\nimport tarfile\nimport zipfile\nimport requests\n\nDATA_HUB = dict()\nDATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'",
"_____no_output_____"
],
[
"# download function to download a dataset\ndef download(name, cache_dir=os.path.join('..', 'data')):\n \"\"\"Download a file inserted into DATA_HUB, return the local filename.\"\"\"\n assert name in DATA_HUB, f\"{name} does not exist in {DATA_HUB}.\"\n url, sha1_hash = DATA_HUB[name]\n os.makedirs(cache_dir, exist_ok=True)\n fname = os.path.join(cache_dir, url.split('/')[-1])\n if os.path.exists(fname):\n sha1 = hashlib.sha1()\n with open(fname, 'rb') as f:\n while True:\n data = f.read(1048576)\n if not data:\n break\n sha1.update(data)\n if sha1.hexdigest() == sha1_hash:\n return fname # Hit cache\n print(f'Downloading {fname} from {url}...')\n r = requests.get(url, stream=True, verify=True)\n with open(fname, 'wb') as f:\n f.write(r.content)\n return fname",
"_____no_output_____"
],
[
"# implement two additional utility functions: one is to download and extract a zip or tar file and the other to download all the datasets used in this book from DATA_HUB into the cache directory\ndef download_extract(name, folder=None):\n \"\"\"Download and extract a zip/tar file.\"\"\"\n fname = download(name)\n base_dir = os.path.dirname(fname)\n data_dir, ext = os.path.splitext(fname)\n if ext == '.zip':\n fp = zipfile.ZipFile(fname, 'r')\n elif ext in ('.tar', '.gz'):\n fp = tarfile.open(fname, 'r')\n else:\n assert False, 'Only zip/tar files can be extracted.'\n fp.extractall(base_dir)\n return os.path.join(base_dir, folder) if folder else data_dir\n\ndef download_all():\n \"\"\"Download all files in the DATA_HUB.\"\"\"\n for name in DATA_HUB:\n download(name)",
"_____no_output_____"
]
],
[
[
"Accessing and Reading the Dataset",
"_____no_output_____"
]
],
[
[
"# If pandas is not installed, please uncomment the following line:\n!pip install pandas\n\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom d2l import tensorflow as d2l",
"Requirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (1.2.2)\nRequirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2018.9)\nRequirement already satisfied: numpy>=1.16.5 in /usr/local/lib/python3.7/dist-packages (from pandas) (1.18.5)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2.8.2)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\n"
],
[
"# download and cache the Kaggle housing dataset\nDATA_HUB['kaggle_house_train'] = ( \n DATA_URL + 'kaggle_house_pred_train.csv',\n '585e9cc93e70b39160e7921475f9bcd7d31219ce')\n\nDATA_HUB['kaggle_house_test'] = ( \n DATA_URL + 'kaggle_house_pred_test.csv',\n 'fa19780a7b011d9b009e8bff8e99922a8ee2eb90')",
"_____no_output_____"
],
[
"# use pandas to load the two csv files containing training and test data respectively\ntrain_data = pd.read_csv(download('kaggle_house_train'))\ntest_data = pd.read_csv(download('kaggle_house_test'))",
"_____no_output_____"
],
[
"# training dataset includes 1460 examples, 80 features, and 1 label, while the test data contains 1459 examples and 80 features\nprint(train_data.shape)\nprint(test_data.shape)",
"(1460, 81)\n(1459, 80)\n"
],
[
"# take a look at the first four and last two features as well as the label (SalePrice)\nprint(train_data.iloc[0:4, [0, 1, 2, 3, -3, -2, -1]])",
" Id MSSubClass MSZoning LotFrontage SaleType SaleCondition SalePrice\n0 1 60 RL 65.0 WD Normal 208500\n1 2 20 RL 80.0 WD Normal 181500\n2 3 60 RL 68.0 WD Normal 223500\n3 4 70 RL 60.0 WD Abnorml 140000\n"
],
[
"all_features = pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))",
"_____no_output_____"
]
],
[
[
"Data Preprocessing",
"_____no_output_____"
]
],
[
[
"# If test data were inaccessible, mean and standard deviation could be\n# calculated from training data\nnumeric_features = all_features.dtypes[all_features.dtypes != 'object'].index\nall_features[numeric_features] = all_features[numeric_features].apply(\n lambda x: (x - x.mean()) / (x.std()))\n# After standardizing the data all means vanish, hence we can set missing\n# values to 0\nall_features[numeric_features] = all_features[numeric_features].fillna(0)",
"_____no_output_____"
],
[
"# `Dummy_na=True` considers \"na\" (missing value) as a valid feature value, and\n# creates an indicator feature for it\nall_features = pd.get_dummies(all_features, dummy_na=True)\nall_features.shape",
"_____no_output_____"
],
[
"# extract the NumPy format from the pandas format and convert it into the tensor\nn_train = train_data.shape[0]\ntrain_features = tf.constant(all_features[:n_train].values, dtype=tf.float32)\ntest_features = tf.constant(all_features[n_train:].values, dtype=tf.float32)\ntrain_labels = tf.constant(\n train_data.SalePrice.values.reshape(-1, 1), dtype=tf.float32)",
"_____no_output_____"
]
],
[
[
"Training",
"_____no_output_____"
]
],
[
[
"loss = tf.keras.losses.MeanSquaredError()\n\ndef get_net():\n net = tf.keras.models.Sequential()\n net.add(tf.keras.layers.Dense(\n 1, kernel_regularizer=tf.keras.regularizers.l2(weight_decay)))\n return net",
"_____no_output_____"
],
[
"def log_rmse(y_true, y_pred):\n # To further stabilize the value when the logarithm is taken, set the\n # value less than 1 as 1\n clipped_preds = tf.clip_by_value(y_pred, 1, float('inf'))\n return tf.sqrt(tf.reduce_mean(loss(\n tf.math.log(y_true), tf.math.log(clipped_preds))))",
"_____no_output_____"
],
[
"def train(net, train_features, train_labels, test_features, test_labels,\n num_epochs, learning_rate, weight_decay, batch_size):\n train_ls, test_ls = [], []\n train_iter = d2l.load_array((train_features, train_labels), batch_size)\n # The Adam optimization algorithm is used here\n optimizer = tf.keras.optimizers.Adam(learning_rate)\n net.compile(loss=loss, optimizer=optimizer)\n for epoch in range(num_epochs):\n for X, y in train_iter:\n with tf.GradientTape() as tape:\n y_hat = net(X)\n l = loss(y, y_hat)\n params = net.trainable_variables\n grads = tape.gradient(l, params)\n optimizer.apply_gradients(zip(grads, params))\n train_ls.append(log_rmse(train_labels, net(train_features)))\n if test_labels is not None:\n test_ls.append(log_rmse(test_labels, net(test_features)))\n return train_ls, test_ls",
"_____no_output_____"
]
],
[
[
"K -Fold Cross-Validation",
"_____no_output_____"
]
],
[
[
"# function that returns the ith fold of the data in a K -fold cross-validation procedure\ndef get_k_fold_data(k, i, X, y):\n assert k > 1\n fold_size = X.shape[0] // k\n X_train, y_train = None, None\n for j in range(k):\n idx = slice(j * fold_size, (j + 1) * fold_size)\n X_part, y_part = X[idx, :], y[idx]\n if j == i:\n X_valid, y_valid = X_part, y_part\n elif X_train is None:\n X_train, y_train = X_part, y_part\n else:\n X_train = tf.concat([X_train, X_part], 0)\n y_train = tf.concat([y_train, y_part], 0)\n return X_train, y_train, X_valid, y_valid",
"_____no_output_____"
],
[
"# The training and verification error averages are returned when we train K times in the K -fold cross-validation\ndef k_fold(k, X_train, y_train, num_epochs, learning_rate, weight_decay,\n batch_size):\n train_l_sum, valid_l_sum = 0, 0\n for i in range(k):\n data = get_k_fold_data(k, i, X_train, y_train)\n net = get_net()\n train_ls, valid_ls = train(net, *data, num_epochs, learning_rate,\n weight_decay, batch_size)\n train_l_sum += train_ls[-1]\n valid_l_sum += valid_ls[-1]\n if i == 0:\n d2l.plot(list(range(1, num_epochs + 1)), [train_ls, valid_ls],\n xlabel='epoch', ylabel='rmse', xlim=[1, num_epochs],\n legend=['train', 'valid'], yscale='log')\n print(f'fold {i + 1}, train log rmse {float(train_ls[-1]):f}, '\n f'valid log rmse {float(valid_ls[-1]):f}')\n return train_l_sum / k, valid_l_sum / k",
"_____no_output_____"
]
],
[
[
"Model Selection",
"_____no_output_____"
]
],
[
[
"!pip uninstall matplotlib\n!pip install --upgrade matplotlib",
"Found existing installation: matplotlib 3.3.3\nUninstalling matplotlib-3.3.3:\n Would remove:\n /usr/local/lib/python3.7/dist-packages/matplotlib-3.3.3-py3.7-nspkg.pth\n /usr/local/lib/python3.7/dist-packages/matplotlib-3.3.3.dist-info/*\n /usr/local/lib/python3.7/dist-packages/matplotlib/*\n /usr/local/lib/python3.7/dist-packages/mpl_toolkits/axes_grid/*\n /usr/local/lib/python3.7/dist-packages/mpl_toolkits/axes_grid1/*\n /usr/local/lib/python3.7/dist-packages/mpl_toolkits/axisartist/*\n /usr/local/lib/python3.7/dist-packages/mpl_toolkits/mplot3d/*\n /usr/local/lib/python3.7/dist-packages/mpl_toolkits/tests/*\n /usr/local/lib/python3.7/dist-packages/pylab.py\nProceed (y/n)? y\n Successfully uninstalled matplotlib-3.3.3\nCollecting matplotlib\n Using cached matplotlib-3.5.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl (11.2 MB)\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (1.18.5)\nRequirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (2.8.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (1.3.2)\nRequirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (7.1.2)\nRequirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (4.28.5)\nRequirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (21.3)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (0.11.0)\nRequirement already satisfied: pyparsing>=2.2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib) (3.0.6)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7->matplotlib) (1.15.0)\nInstalling collected packages: matplotlib\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\nd2l 0.17.2 requires matplotlib==3.3.3, but you have matplotlib 3.5.1 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed matplotlib-3.5.1\n"
],
[
"k, num_epochs, lr, weight_decay, batch_size = 5, 100, 5, 0, 64\ntrain_l, valid_l = k_fold(k, train_features, train_labels, num_epochs, lr,\n weight_decay, batch_size)\nprint(f'{k}-fold validation: avg train log rmse: {float(train_l):f}, '\n f'avg valid log rmse: {float(valid_l):f}')",
"fold 1, train log rmse 0.170148, valid log rmse 0.156585\nfold 2, train log rmse 0.162369, valid log rmse 0.191238\nfold 3, train log rmse 0.163789, valid log rmse 0.168228\nfold 4, train log rmse 0.167955, valid log rmse 0.154542\nfold 5, train log rmse 0.163108, valid log rmse 0.182900\n5-fold validation: avg train log rmse: 0.165474, avg valid log rmse: 0.170699\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0bf7278a614607b04a1161633ec36d99face53c | 52,830 | ipynb | Jupyter Notebook | Vacation_Search.ipynb | bbar12/World_Weather_Analysis | 3329343d1ac7401829e01cb0210463156873ed1f | [
"MIT"
] | null | null | null | Vacation_Search.ipynb | bbar12/World_Weather_Analysis | 3329343d1ac7401829e01cb0210463156873ed1f | [
"MIT"
] | null | null | null | Vacation_Search.ipynb | bbar12/World_Weather_Analysis | 3329343d1ac7401829e01cb0210463156873ed1f | [
"MIT"
] | null | null | null | 33.373342 | 112 | 0.339807 | [
[
[
"#Goal: Have customers narrow their travel searches based on temp and precipitation\nimport pandas as pd\nimport requests\nimport gmaps\n\nfrom config import g_key",
"_____no_output_____"
],
[
"weather_data_df=pd.read_csv(\"data/WeatherPy_Database.csv\")\nweather_data_df",
"_____no_output_____"
],
[
"weather_data_df.dtypes",
"_____no_output_____"
],
[
"#configure gmaps to use the appropriate key\ngmaps.configure(api_key=g_key)",
"_____no_output_____"
],
[
"#Confuguring inputs \nmin_temp=float(input(\"What is the minimum temperature you would like for your vacation?\"))\nmax_temp=float(input(\"What is the maximum temperature you would like for your vacation?\"))",
"What is the minimum temperature you would like for your vacation?50\nWhat is the maximum temperature you would like for your vacation?90\n"
],
[
"#Ask if they would like it to rain \nrain_check=input(\"Would you like it to be raining? (yes/no)\")",
"Would you like it to be raining? (yes/no)no\n"
],
[
"#Do the same for snow \nsnow_check=input(\"Would you like it to be snowing? (yes/no)\")",
"Would you like it to be snowing? (yes/no)no\n"
],
[
"#performing the conditionals for rain check and snow check with min and max temps\n#starting if loop and developing the loop with elifs for other criteria to be met\nif rain_check == \"no\" and snow_check == \"no\":\n resulting_cities=weather_data_df.loc[(weather_data_df[\"Max Temp\"] <= max_temp) &\n (weather_data_df[\"Max Temp\"] >= min_temp) &\n (weather_data_df[\"Rainfall\"] == 0) &\n (weather_data_df[\"Snowfall\"] == 0)]\nelif rain_check == \"no\" and snow_check == \"yes\":\n resulting_cities=weather_data_df.loc[(weather_data_df[\"Max Temp\"] <= max_temp) &\n (weather_data_df[\"Max Temp\"] >= min_temp) &\n (weather_data_df[\"Rainfall\"] == 0) &\n (weather_data_df[\"Snowfall\"] > 0.0)]\nelif rain_check == \"yes\" and snow_check == \"no\":\n resulting_cities=weather_data_df.loc[(weather_data_df[\"Max Temp\"] <= max_temp) &\n (weather_data_df[\"Max Temp\"] >= min_temp) &\n (weather_data_df[\"Rainfall\"] > 0.0) &\n (weather_data_df[\"Snowfall\"] == 0)]\nelse:\n resulting_cities=weather_data_df.loc[(weather_data_df[\"Max Temp\"] <= max_temp) &\n (weather_data_df[\"Max Temp\"] >= min_temp) &\n (weather_data_df[\"Rainfall\"] > 0.0) &\n (weather_data_df[\"Snowfall\"] > 0.0)]",
"_____no_output_____"
],
[
"#checking results\nresulting_cities.count()",
"_____no_output_____"
],
[
"#Display top ten cities\nresulting_cities.head(10)",
"_____no_output_____"
],
[
"#drop null values\nresulting_cities=resulting_cities.dropna()\nresulting_cities",
"_____no_output_____"
],
[
"#making new dataframe to set up for markers by copying resulting dataframe and adding hotel name column\nhotel_markers_df=resulting_cities[[\"City\", \"Country\", \"Max Temp\", \"Lat\", \"Lng\"]].copy()\nhotel_markers_df[\"Hotel Name\"]=\"\"\nhotel_markers_df.head(10)",
"_____no_output_____"
],
[
"# Set parameters to search for a hotel.\nparams = {\n \"radius\": 5000,\n \"type\": \"lodging\",\n \"key\": g_key\n}\n# Iterate through the DataFrame.\nfor index, row in hotel_markers_df.iterrows():\n # Get the latitude and longitude.\n lat = row[\"Lat\"]\n lng = row[\"Lng\"]\n \n params[\"location\"]=f\"{lat},{lng}\"\n \n base_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json\"\n hotels = requests.get(base_url, params=params).json()\n try:\n hotel_markers_df.loc[index, \"Hotel Name\"] = hotels[\"results\"][0][\"name\"]\n except (IndexError):\n print(\"Hotel not found... skipping.\")\n",
"Hotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\nHotel not found... skipping.\n"
],
[
"#checking results\nhotel_markers_df.head(10)",
"_____no_output_____"
],
[
"#creating the csv file to store in \nhotel_data_file=\"data/WeatherPy_Vacation.csv\"\nhotel_markers_df.to_csv(hotel_data_file, index_label=\"City_ID\")",
"_____no_output_____"
],
[
"vacation_hotel_df=pd.read_csv(\"data/WeatherPy_Vacation.csv\")\nvacation_hotel_df.head()",
"_____no_output_____"
],
[
"#adding hotel marks to map\ninfo_box_template = \"\"\"\n<dl>\n<dt>Hotel Name</dt><dd>{Hotel Name}</dd>\n<dt>City</dt><dd>{City}</dd>\n<dt>Country</dt><dd>{Country}</dd>\n<dt>Hotel Name</dt><dd>{Hotel Name} at {Max Temp}</dd>\n</dl>\n\"\"\"\n# Store the DataFrame Row.\nhotel_info = [info_box_template.format(**row) for index, row in vacation_hotel_df.iterrows()]\nlocations=vacation_hotel_df[[\"Lat\", \"Lng\"]]\n# Add a heatmap of temperature for the vacation spots.\nmax_temp = vacation_hotel_df[\"Max Temp\"]\nfig = gmaps.figure(center=(30.0, 31.0), zoom_level=1.5)\nmarker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)\nfig.add_layer(marker_layer)\n# Call the figure to plot the data.\nfig",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf7f9da669bdc8336a872042d6547751db7f0b | 14,069 | ipynb | Jupyter Notebook | jupyter_notebooks/Numpy_Memo0.ipynb | 0pt3ryx/0pt3ryx.github.io | 8e5584a85f888bd4a5150b3f547ff7fdbd4bb826 | [
"MIT"
] | null | null | null | jupyter_notebooks/Numpy_Memo0.ipynb | 0pt3ryx/0pt3ryx.github.io | 8e5584a85f888bd4a5150b3f547ff7fdbd4bb826 | [
"MIT"
] | null | null | null | jupyter_notebooks/Numpy_Memo0.ipynb | 0pt3ryx/0pt3ryx.github.io | 8e5584a85f888bd4a5150b3f547ff7fdbd4bb826 | [
"MIT"
] | null | null | null | 17.58625 | 92 | 0.392139 | [
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"# 1. 배열 생성",
"_____no_output_____"
]
],
[
[
"np.array([1, 2, 3, 4])",
"_____no_output_____"
],
[
"np.array([[1, 2, 3], [4, 5, 6]])",
"_____no_output_____"
]
],
[
[
"## 1.1. 자료형 지정",
"_____no_output_____"
]
],
[
[
"np.array([1.2, 2.1, -1.6, 4.45], dtype='f')",
"_____no_output_____"
]
],
[
[
"## 1.2. 영벡터 생성",
"_____no_output_____"
]
],
[
[
"np.zeros(3, dtype='i')",
"_____no_output_____"
],
[
"np.zeros((2, 3))",
"_____no_output_____"
]
],
[
[
"## 1.3. 원소가 1인 벡터 생성",
"_____no_output_____"
]
],
[
[
"np.ones((2, 6))",
"_____no_output_____"
]
],
[
[
"## 1.4. 다른 배열을 이용한 벡터 생성",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2,], [3, 4], [5, 6]])\nprint(a)\nprint(a.shape)\nb = np.zeros_like(a, dtype='f')\n# b = np.ones_like(a, dtype='f')\nprint()\nprint(b)\nprint(b.shape)",
"[[1 2]\n [3 4]\n [5 6]]\n(3, 2)\n\n[[1. 1.]\n [1. 1.]\n [1. 1.]]\n(3, 2)\n"
]
],
[
[
"## 1.5. 초기화 없이 배열 생성\n### @메모리에 있는 값을 바탕으로 배열 요소값이 지정됨",
"_____no_output_____"
]
],
[
[
"a = np.empty((10, 3))\nprint(a)",
"[[1.41158355e-311 9.58487353e-322 0.00000000e+000]\n [0.00000000e+000 6.95293141e-310 5.02034658e+175]\n [1.27010965e-075 2.57751692e-056 4.16150492e-061]\n [6.12673096e-062 1.47763641e+248 1.16096346e-028]\n [7.69165785e+218 1.35617292e+248 1.10208290e-046]\n [5.64373615e-038 1.40415688e+165 2.89818752e-057]\n [4.26050046e-096 6.32299154e+233 6.48224638e+170]\n [5.22411352e+257 5.74020278e+180 8.37174974e-144]\n [1.41529402e+161 9.16651763e-072 4.52660428e+097]\n [1.24716443e-047 3.58569463e+126 2.12201162e-314]]\n"
]
],
[
[
"## 1.6. Arange 사용",
"_____no_output_____"
]
],
[
[
"np.arange(10)",
"_____no_output_____"
],
[
"# np.arange(시작, 끝, 공차)\nnp.arange(3, 20, 4)",
"_____no_output_____"
]
],
[
[
"# 2. 차원 확인",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2, 3], [4, 5, 6]])\nprint(a.shape)\nprint(np.ndim(a))",
"(2, 3)\n2\n"
],
[
"a = np.array([[[1, 2, 3], [4, 5, 6]], [[2, 4, 6], [-1, 0, 0]]])\nprint(a)\nprint(a.shape)",
"[[[ 1 2 3]\n [ 4 5 6]]\n\n [[ 2 4 6]\n [-1 0 0]]]\n(2, 2, 3)\n"
],
[
"a = np.array([[[1, 2, 3]], [[-1, 0, 0]]])\nprint(a)\nprint(a.shape)\nprint(np.ndim(a))",
"[[[ 1 2 3]]\n\n [[-1 0 0]]]\n(2, 1, 3)\n3\n"
]
],
[
[
"# 3. 차원 변경",
"_____no_output_____"
]
],
[
[
"a = np.array([1, 2, 3, 4, 5, 6])\nprint(a)",
"[1 2 3 4 5 6]\n"
],
[
"print(a.reshape([2, 3]))",
"[[1 2 3]\n [4 5 6]]\n"
],
[
"print(a.reshape([2, -1]))",
"[[1 2 3]\n [4 5 6]]\n"
],
[
"print(a.reshape([-1, 2]))",
"[[1 2]\n [3 4]\n [5 6]]\n"
],
[
"print(a.reshape([2, 3, 1]))",
"[[[1]\n [2]\n [3]]\n\n [[4]\n [5]\n [6]]]\n"
]
],
[
[
"## 3.1. 전치",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2, 3,], [4, 5, 6]])\nprint(a)\nprint(a.shape)\nprint()\n\nb = a.T\nprint(b)\nprint(b.shape)",
"[[1 2 3]\n [4 5 6]]\n(2, 3)\n\n[[1 4]\n [2 5]\n [3 6]]\n(3, 2)\n"
]
],
[
[
"## 3.2. 차원축 추가",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2, 3], [4, 5, 6]])\nprint(a.shape)\nprint()\n\na = np.expand_dims(a, axis=0)\nprint(a.shape)",
"(2, 3)\n\n(1, 2, 3)\n"
]
],
[
[
"# 4. 자료형 변경",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2, 3], [4, 5, 6]])\nprint(a)\nprint(a.astype('float32'))",
"[[1 2 3]\n [4 5 6]]\n[[1. 2. 3.]\n [4. 5. 6.]]\n"
]
],
[
[
"# 5. 배열 계산",
"_____no_output_____"
],
[
"## 5.1. Elementwise 계산",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 2, 3], [4, 5, 6]]).astype('float32')\nprint(a)\nprint(a/2)",
"[[1. 2. 3.]\n [4. 5. 6.]]\n[[0.5 1. 1.5]\n [2. 2.5 3. ]]\n"
],
[
"a = np.array([1, 2, 3, 4, 5, 6])\nprint(a)\nb = a >= 4\nprint(b)\nprint(b.astype(np.int))",
"[1 2 3 4 5 6]\n[False False False True True True]\n[0 0 0 1 1 1]\n"
]
],
[
[
"## 5.2. 행렬 곱",
"_____no_output_____"
]
],
[
[
"a = np.array([[1, 0], [2, 2]])\nprint(a.shape)\n\nb = np.array([[2, 2], [3, 0]])\nprint(b.shape)\n\nprint(np.dot(a, b))",
"(2, 2)\n(2, 2)\n[[ 2 2]\n [10 4]]\n"
],
[
"a = np.array(\n [[1, 1, 2],\n [0, 2, 0]])\nprint(a.shape)\n\nb = np.array(\n [[1, 2],\n [0, 3],\n [1, 1]])\nprint(b.shape)\n\nprint(np.dot(a, b))",
"(2, 3)\n(3, 2)\n[[3 7]\n [0 6]]\n"
]
],
[
[
"# 6. hstack",
"_____no_output_____"
]
],
[
[
"a = np.array([1, 1, [2, 2, -1], [3, 0], 6])\nprint(a)\nb = np.hstack(a)\nprint(b)",
"[1 1 list([2, 2, -1]) list([3, 0]) 6]\n[ 1 1 2 2 -1 3 0 6]\n"
],
[
"a = np.array([[1, 1, [2, 2, -1], [3, 0], 6], [12, 12, [21, 21, -11], [30, 0], 16]])\nprint(a)\nb = np.hstack(a)\nprint(b)",
"[[1 1 list([2, 2, -1]) list([3, 0]) 6]\n [12 12 list([21, 21, -11]) list([30, 0]) 16]]\n[1 1 list([2, 2, -1]) list([3, 0]) 6 12 12 list([21, 21, -11])\n list([30, 0]) 16]\n"
],
[
"tmp = list()\nfor i in a:\n tmp.append(np.hstack(i))\n\ntmp1 = np.vstack(tmp)\nprint(tmp1)",
"[[ 1 1 2 2 -1 3 0 6]\n [ 12 12 21 21 -11 30 0 16]]\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0bf8e1e9a2e73d504f92a4a027b88cee14b7319 | 2,980 | ipynb | Jupyter Notebook | idaes/apps/uncertainty_propagation/examples/uncertainty_propagation_NRTL.ipynb | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 112 | 2019-02-11T23:16:36.000Z | 2022-03-23T20:59:57.000Z | idaes/apps/uncertainty_propagation/examples/uncertainty_propagation_NRTL.ipynb | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 621 | 2019-03-01T14:44:12.000Z | 2022-03-31T19:49:25.000Z | idaes/apps/uncertainty_propagation/examples/uncertainty_propagation_NRTL.ipynb | carldlaird/idaes-pse | cc7a32ca9fa788f483fa8ef85f3d1186ef4a596f | [
"RSA-MD"
] | 154 | 2019-02-01T23:46:33.000Z | 2022-03-23T15:07:10.000Z | 32.043011 | 114 | 0.57349 | [
[
[
"##############################################################################\n# Institute for the Design of Advanced Energy Systems Process Systems\n# Engineering Framework (IDAES PSE Framework) Copyright (c) 2018-2019, by the\n# software owners: The Regents of the University of California, through\n# Lawrence Berkeley National Laboratory, National Technology & Engineering\n# Solutions of Sandia, LLC, Carnegie Mellon University, West Virginia\n# University Research Corporation, et al. All rights reserved.\n#\n# Please see the files COPYRIGHT.txt and LICENSE.txt for full copyright and\n# license information, respectively. Both files are also available online\n# at the URL \"https://github.com/IDAES/idaes-pse\".\n##############################################################################",
"_____no_output_____"
],
[
"import sys\nimport os\nsys.path.append(os.path.abspath('..')) # current folder is ~/examples\nimport pandas as pd\nfrom idaes.apps.uncertainty_propagation.uncertainties import quantify_propagate_uncertainty\nfrom idaes.apps.uncertainty_propagation.examples.NRTL_model_scripts import NRTL_model, NRTL_model_opt",
"_____no_output_____"
],
[
"def SSE(model, data):\n expr = ((float(data[\"vap_benzene\"]) -\n model.fs.flash.vap_outlet.mole_frac_comp[0, \"benzene\"])**2 +\n (float(data[\"liq_benzene\"]) -\n model.fs.flash.liq_outlet.mole_frac_comp[0, \"benzene\"])**2)\n return expr*1E4",
"_____no_output_____"
],
[
"variable_name = [\"fs.properties.tau['benzene', 'toluene']\", \"fs.properties.tau['toluene','benzene']\"]\ncurrent_path = os.path.dirname(os.path.realpath(__file__))\ndata = pd.read_csv(os.path.join(current_path, 'BT_NRTL_dataset.csv'))",
"_____no_output_____"
],
[
"results = quantify_propagate_uncertainty(NRTL_model,NRTL_model_opt, data, variable_name, SSE)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf91a344beb88d903b9de7b67d09b161f0bac1 | 27,727 | ipynb | Jupyter Notebook | argedis.ipynb | AurelienGalicher/DStoolkit | e6fc578efc4553bf61bd4b357d0a9d64475cc512 | [
"BSD-3-Clause"
] | null | null | null | argedis.ipynb | AurelienGalicher/DStoolkit | e6fc578efc4553bf61bd4b357d0a9d64475cc512 | [
"BSD-3-Clause"
] | null | null | null | argedis.ipynb | AurelienGalicher/DStoolkit | e6fc578efc4553bf61bd4b357d0a9d64475cc512 | [
"BSD-3-Clause"
] | null | null | null | 36.482895 | 1,313 | 0.579579 | [
[
[
"!pip install lightgbm",
"Requirement already satisfied: lightgbm in /home/nbuser/anaconda3_420/lib/python3.5/site-packages\nRequirement already satisfied: scipy in /home/nbuser/anaconda3_420/lib/python3.5/site-packages (from lightgbm)\nRequirement already satisfied: numpy in /home/nbuser/anaconda3_420/lib/python3.5/site-packages (from lightgbm)\nRequirement already satisfied: scikit-learn in /home/nbuser/anaconda3_420/lib/python3.5/site-packages (from lightgbm)\n"
],
[
"!pip install xgboost",
"Collecting xgboost\n Downloading xgboost-0.7.post3.tar.gz (450kB)\n\u001b[K 100% |################################| 460kB 1.9MB/s ta 0:00:011\n\u001b[?25hRequirement already satisfied: numpy in /home/nbuser/anaconda3_420/lib/python3.5/site-packages (from xgboost)\nRequirement already satisfied: scipy in /home/nbuser/anaconda3_420/lib/python3.5/site-packages (from xgboost)\nBuilding wheels for collected packages: xgboost\n Running setup.py bdist_wheel for xgboost ... \u001b[?25ldone\n\u001b[?25h Stored in directory: /home/nbuser/.cache/pip/wheels/ca/b3/02/d44d5e12c5c1eecff4a822555bac96b182551cd5e13c4795f6\nSuccessfully built xgboost\nInstalling collected packages: xgboost\nSuccessfully installed xgboost-0.7.post3\n"
],
[
"import lightgbm as lgb\nimport pandas as pd\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.model_selection import GridSearchCV\nimport xgboost as xgb",
"_____no_output_____"
],
[
"import zipfile\narchive = zipfile.ZipFile('test.csv.zip', 'r')\ntest = pd.read_csv(archive.open('test.csv'), sep=\";\", decimal=\",\",parse_dates=True)",
"_____no_output_____"
],
[
"archive = zipfile.ZipFile('train.csv.zip', 'r')\ntrain = pd.read_csv(archive.open('train.csv'), sep=\";\", decimal=\",\",parse_dates=True)",
"_____no_output_____"
],
[
"import datetime\ntest.date = test.date.str.split('-').apply(lambda x: datetime.datetime(int(x[0]),int(x[1]),int(x[2])))\ntrain.date = train.date.str.split('-').apply(lambda x: datetime.datetime(int(x[0]),int(x[1]),int(x[2])))",
"_____no_output_____"
],
[
"train['dayofweek'] = train.date.dt.dayofweek\ntest['dayofweek'] = test.date.dt.dayofweek\ntrain['quarter'] = train.date.dt.quarter\ntest['quarter'] = test.date.dt.quarter\ntrain['week'] = train.date.dt.week\ntest['week'] = test.date.dt.week\ntrain['month'] = train.date.dt.month\ntest['month'] = test.date.dt.month",
"_____no_output_____"
],
[
"## some more feature engineering\ntrain[\"qteG\"] = train.article_nom.str.extract('(\\d+)G',expand=True).fillna(0).astype(int)\ntest[\"qteG\"] = test.article_nom.str.extract('(\\d+)G',expand=True).fillna(0).astype(int)\ntrain['qteX'] = train.article_nom.str.extract('X ?(\\d)',expand=True).fillna(0).astype(int)\ntest['qteX'] = test.article_nom.str.extract('X ?(\\d)',expand=True).fillna(0).astype(int)\ntrain['qteMl'] = train.article_nom.str.extract('(\\d+) ?Ml',expand=True).fillna(0).astype(int)\ntest['qteMl'] = test.article_nom.str.extract('(\\d+) ?Ml',expand=True).fillna(0).astype(int)",
"_____no_output_____"
],
[
"ytrain = train.set_index('id').qte_article_vendue",
"_____no_output_____"
],
[
"cat_features = ['implant', 'article_nom']",
"_____no_output_____"
],
[
"from sklearn import preprocessing\nlabel_encoders = {}\nfor cat in cat_features:\n label_encoders.update({cat:preprocessing.LabelEncoder()})",
"_____no_output_____"
],
[
"for cat, le in label_encoders.items():\n cat_str = cat+'_label'\n train[cat_str] = le.fit_transform(train[cat])\n test[cat_str] = le.transform(test[cat])",
"_____no_output_____"
],
[
"##aggregates\n#data = pd.concat([train.set_index('id'),test.set_index('id')],axis=0)\n",
"_____no_output_____"
],
[
"#data.groupby(['article_nom','date','implant']).qte_article_vendue.rolling(2).mean().reset_index()",
"_____no_output_____"
],
[
"trainingset = train.set_index('id').select_dtypes(include=['float64','int64']).drop('qte_article_vendue', axis=1)\ntestset = test.set_index('id').select_dtypes(include=['float64','int64'])",
"_____no_output_____"
],
[
"# Feature Selection\nfrom sklearn.ensemble import ExtraTreesRegressor\nfrom sklearn.feature_selection import SelectFromModel\nregressor = ExtraTreesRegressor().fit(trainingset, ytrain)\n#lsvc = LinearSVC(C=0.01, penalty=\"l1\", dual=False).fit(trainingset, ytrain)\nmodel = SelectFromModel(regressor, prefit=True)\nX = model.transform(trainingset)\nXpredict = model.transform(testset)",
"_____no_output_____"
]
],
[
[
"# Modeling",
"_____no_output_____"
]
],
[
[
"trainingset.columns[model.get_support()]",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(trainingset, ytrain, test_size=0.05, random_state=42)",
"_____no_output_____"
],
[
"print('Start training...')\n# train\ngbm = lgb.LGBMRegressor(objective='regression',\n num_leaves=60,\n learning_rate=0.1,\n n_estimators=150, random_state=42)\ngbm.fit(X_train, y_train,\n eval_set=[(X_test, y_test)],\n eval_metric='rmse',\n early_stopping_rounds=5)\n\nprint('Start predicting...')\n# predict\ny_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)\n# eval\nprint('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5)\n\n# feature importances\nprint('Feature importances:', list(gbm.feature_importances_))",
"Start training...\n[1]\tvalid_0's rmse: 0.824954\nTraining until validation scores don't improve for 5 rounds.\n[2]\tvalid_0's rmse: 0.794949\n[3]\tvalid_0's rmse: 0.769341\n[4]\tvalid_0's rmse: 0.748389\n[5]\tvalid_0's rmse: 0.729549\n[6]\tvalid_0's rmse: 0.714751\n[7]\tvalid_0's rmse: 0.701308\n[8]\tvalid_0's rmse: 0.690887\n[9]\tvalid_0's rmse: 0.681769\n[10]\tvalid_0's rmse: 0.67352\n[11]\tvalid_0's rmse: 0.666433\n[12]\tvalid_0's rmse: 0.660194\n[13]\tvalid_0's rmse: 0.654688\n[14]\tvalid_0's rmse: 0.65034\n[15]\tvalid_0's rmse: 0.646665\n[16]\tvalid_0's rmse: 0.64352\n[17]\tvalid_0's rmse: 0.640808\n[18]\tvalid_0's rmse: 0.638041\n[19]\tvalid_0's rmse: 0.635866\n[20]\tvalid_0's rmse: 0.634015\n[21]\tvalid_0's rmse: 0.632183\n[22]\tvalid_0's rmse: 0.630454\n[23]\tvalid_0's rmse: 0.629355\n[24]\tvalid_0's rmse: 0.627942\n[25]\tvalid_0's rmse: 0.626457\n[26]\tvalid_0's rmse: 0.625055\n[27]\tvalid_0's rmse: 0.624297\n[28]\tvalid_0's rmse: 0.623282\n[29]\tvalid_0's rmse: 0.622281\n[30]\tvalid_0's rmse: 0.621367\n[31]\tvalid_0's rmse: 0.620757\n[32]\tvalid_0's rmse: 0.620001\n[33]\tvalid_0's rmse: 0.619423\n[34]\tvalid_0's rmse: 0.618684\n[35]\tvalid_0's rmse: 0.617698\n[36]\tvalid_0's rmse: 0.6168\n[37]\tvalid_0's rmse: 0.616534\n[38]\tvalid_0's rmse: 0.615811\n[39]\tvalid_0's rmse: 0.615377\n[40]\tvalid_0's rmse: 0.614908\n[41]\tvalid_0's rmse: 0.614696\n[42]\tvalid_0's rmse: 0.61408\n[43]\tvalid_0's rmse: 0.613596\n[44]\tvalid_0's rmse: 0.612832\n[45]\tvalid_0's rmse: 0.612368\n[46]\tvalid_0's rmse: 0.611901\n[47]\tvalid_0's rmse: 0.611447\n[48]\tvalid_0's rmse: 0.610893\n[49]\tvalid_0's rmse: 0.610597\n[50]\tvalid_0's rmse: 0.61004\n[51]\tvalid_0's rmse: 0.609743\n[52]\tvalid_0's rmse: 0.609457\n[53]\tvalid_0's rmse: 0.609121\n[54]\tvalid_0's rmse: 0.608916\n[55]\tvalid_0's rmse: 0.608737\n[56]\tvalid_0's rmse: 0.608501\n[57]\tvalid_0's rmse: 0.608146\n[58]\tvalid_0's rmse: 0.607998\n[59]\tvalid_0's rmse: 0.607859\n[60]\tvalid_0's rmse: 0.607631\n[61]\tvalid_0's rmse: 0.607442\n[62]\tvalid_0's rmse: 0.607241\n[63]\tvalid_0's rmse: 0.606984\n[64]\tvalid_0's rmse: 0.606862\n[65]\tvalid_0's rmse: 0.606788\n[66]\tvalid_0's rmse: 0.606796\n[67]\tvalid_0's rmse: 0.606517\n[68]\tvalid_0's rmse: 0.606409\n[69]\tvalid_0's rmse: 0.606345\n[70]\tvalid_0's rmse: 0.605998\n[71]\tvalid_0's rmse: 0.605796\n[72]\tvalid_0's rmse: 0.605436\n[73]\tvalid_0's rmse: 0.605242\n[74]\tvalid_0's rmse: 0.605226\n[75]\tvalid_0's rmse: 0.605071\n[76]\tvalid_0's rmse: 0.604913\n[77]\tvalid_0's rmse: 0.604748\n[78]\tvalid_0's rmse: 0.604586\n[79]\tvalid_0's rmse: 0.60469\n[80]\tvalid_0's rmse: 0.604479\n[81]\tvalid_0's rmse: 0.60429\n[82]\tvalid_0's rmse: 0.603949\n[83]\tvalid_0's rmse: 0.603777\n[84]\tvalid_0's rmse: 0.603497\n[85]\tvalid_0's rmse: 0.603526\n[86]\tvalid_0's rmse: 0.603344\n[87]\tvalid_0's rmse: 0.60326\n[88]\tvalid_0's rmse: 0.603202\n[89]\tvalid_0's rmse: 0.603011\n[90]\tvalid_0's rmse: 0.602634\n[91]\tvalid_0's rmse: 0.602547\n[92]\tvalid_0's rmse: 0.602456\n[93]\tvalid_0's rmse: 0.60251\n[94]\tvalid_0's rmse: 0.602468\n[95]\tvalid_0's rmse: 0.602376\n[96]\tvalid_0's rmse: 0.60223\n[97]\tvalid_0's rmse: 0.602163\n[98]\tvalid_0's rmse: 0.602211\n[99]\tvalid_0's rmse: 0.602072\n[100]\tvalid_0's rmse: 0.602145\n[101]\tvalid_0's rmse: 0.602033\n[102]\tvalid_0's rmse: 0.601907\n[103]\tvalid_0's rmse: 0.601749\n[104]\tvalid_0's rmse: 0.601565\n[105]\tvalid_0's rmse: 0.601525\n[106]\tvalid_0's rmse: 0.601463\n[107]\tvalid_0's rmse: 0.601464\n[108]\tvalid_0's rmse: 0.601419\n[109]\tvalid_0's rmse: 
0.601331\n[110]\tvalid_0's rmse: 0.601322\n[111]\tvalid_0's rmse: 0.60113\n[112]\tvalid_0's rmse: 0.601027\n[113]\tvalid_0's rmse: 0.600924\n[114]\tvalid_0's rmse: 0.600925\n[115]\tvalid_0's rmse: 0.600872\n[116]\tvalid_0's rmse: 0.601014\n[117]\tvalid_0's rmse: 0.601064\n[118]\tvalid_0's rmse: 0.60098\n[119]\tvalid_0's rmse: 0.601012\n[120]\tvalid_0's rmse: 0.600805\n[121]\tvalid_0's rmse: 0.600885\n[122]\tvalid_0's rmse: 0.600686\n[123]\tvalid_0's rmse: 0.600611\n[124]\tvalid_0's rmse: 0.600603\n[125]\tvalid_0's rmse: 0.600572\n[126]\tvalid_0's rmse: 0.600507\n[127]\tvalid_0's rmse: 0.600321\n[128]\tvalid_0's rmse: 0.600163\n[129]\tvalid_0's rmse: 0.600242\n[130]\tvalid_0's rmse: 0.600336\n[131]\tvalid_0's rmse: 0.600287\n[132]\tvalid_0's rmse: 0.600224\n[133]\tvalid_0's rmse: 0.600217\nEarly stopping, best iteration is:\n[128]\tvalid_0's rmse: 0.600163\nStart predicting...\nThe rmse of prediction is: 0.600163386271\nFeature importances: [207, 186, 89, 250, 116, 67, 186, 226, 127, 34, 178, 160, 91, 60, 144, 172, 75, 48, 193, 62, 19, 2, 25, 1, 29, 102, 26, 5, 13, 13, 19, 679, 1032, 364, 264, 266, 373, 299, 13, 451, 43, 304, 14, 65, 167, 293]\n"
],
[
"import numpy as np\nfor i in np.argsort(gbm.feature_importances_)[::-1][:10]:\n print(trainingset.columns[i])",
"vente_j_8_14\nvente_j_7\nweek\nvente_cat4_j_8_14\nvente_cat5_j_7\nqteG\ndayofweek\narticle_nom_label\nvente_cat4_j_7\nvente_cat5_j_8_14\n"
],
[
"help(xgbReg.fit)",
"Help on method fit in module xgboost.sklearn:\n\nfit(X, y, sample_weight=None, eval_set=None, eval_metric=None, early_stopping_rounds=None, verbose=True, xgb_model=None) method of xgboost.sklearn.XGBRegressor instance\n Fit the gradient boosting model\n \n Parameters\n ----------\n X : array_like\n Feature matrix\n y : array_like\n Labels\n sample_weight : array_like\n instance weights\n eval_set : list, optional\n A list of (X, y) tuple pairs to use as a validation set for\n early-stopping\n eval_metric : str, callable, optional\n If a str, should be a built-in evaluation metric to use. See\n doc/parameter.md. If callable, a custom evaluation metric. The call\n signature is func(y_predicted, y_true) where y_true will be a\n DMatrix object such that you may need to call the get_label\n method. It must return a str, value pair where the str is a name\n for the evaluation and value is the value of the evaluation\n function. This objective is always minimized.\n early_stopping_rounds : int\n Activates early stopping. Validation error needs to decrease at\n least every <early_stopping_rounds> round(s) to continue training.\n Requires at least one item in evals. If there's more than one,\n will use the last. Returns the model from the last iteration\n (not the best one). If early stopping occurs, the model will\n have three additional fields: bst.best_score, bst.best_iteration\n and bst.best_ntree_limit.\n (Use bst.best_ntree_limit to get the correct value if num_parallel_tree\n and/or num_class appears in the parameters)\n verbose : bool\n If `verbose` and an evaluation set is used, writes the evaluation\n metric measured on the validation set to stderr.\n xgb_model : str\n file name of stored xgb model or 'Booster' instance Xgb model to be\n loaded before training (allows training continuation).\n\n"
],
[
"xgbReg = xgb.XGBRegressor(nthread=-1, min_child_weight=4, subsample=0.9, max_depth=5) \nxgbReg.fit(X_train, y_train,\n eval_metric='rmse',\n eval_set=[(X_test, y_test)],\n early_stopping_rounds=5)\n\n",
"[0]\tvalidation_0-rmse:0.826859\nWill train until validation_0-rmse hasn't improved in 5 rounds.\n[1]\tvalidation_0-rmse:0.798098\n[2]\tvalidation_0-rmse:0.772668\n[3]\tvalidation_0-rmse:0.751844\n[4]\tvalidation_0-rmse:0.734243\n[5]\tvalidation_0-rmse:0.719077\n[6]\tvalidation_0-rmse:0.706491\n[7]\tvalidation_0-rmse:0.696243\n[8]\tvalidation_0-rmse:0.68749\n[9]\tvalidation_0-rmse:0.679917\n[10]\tvalidation_0-rmse:0.673718\n[11]\tvalidation_0-rmse:0.66858\n[12]\tvalidation_0-rmse:0.663864\n[13]\tvalidation_0-rmse:0.660004\n[14]\tvalidation_0-rmse:0.656571\n[15]\tvalidation_0-rmse:0.653245\n[16]\tvalidation_0-rmse:0.65026\n[17]\tvalidation_0-rmse:0.647687\n[18]\tvalidation_0-rmse:0.645823\n[19]\tvalidation_0-rmse:0.644358\n[20]\tvalidation_0-rmse:0.642686\n[21]\tvalidation_0-rmse:0.641419\n[22]\tvalidation_0-rmse:0.63989\n[23]\tvalidation_0-rmse:0.638882\n[24]\tvalidation_0-rmse:0.6377\n[25]\tvalidation_0-rmse:0.6365\n[26]\tvalidation_0-rmse:0.635653\n[27]\tvalidation_0-rmse:0.634763\n[28]\tvalidation_0-rmse:0.634005\n[29]\tvalidation_0-rmse:0.632881\n[30]\tvalidation_0-rmse:0.632189\n[31]\tvalidation_0-rmse:0.631473\n[32]\tvalidation_0-rmse:0.630768\n[33]\tvalidation_0-rmse:0.630484\n[34]\tvalidation_0-rmse:0.630282\n[35]\tvalidation_0-rmse:0.629654\n[36]\tvalidation_0-rmse:0.629012\n[37]\tvalidation_0-rmse:0.628461\n[38]\tvalidation_0-rmse:0.627771\n[39]\tvalidation_0-rmse:0.627302\n[40]\tvalidation_0-rmse:0.626285\n[41]\tvalidation_0-rmse:0.626089\n[42]\tvalidation_0-rmse:0.625384\n[43]\tvalidation_0-rmse:0.624773\n[44]\tvalidation_0-rmse:0.624378\n[45]\tvalidation_0-rmse:0.624054\n[46]\tvalidation_0-rmse:0.623799\n[47]\tvalidation_0-rmse:0.623576\n[48]\tvalidation_0-rmse:0.623227\n[49]\tvalidation_0-rmse:0.62293\n[50]\tvalidation_0-rmse:0.622721\n[51]\tvalidation_0-rmse:0.62275\n[52]\tvalidation_0-rmse:0.622563\n[53]\tvalidation_0-rmse:0.62257\n[54]\tvalidation_0-rmse:0.622452\n[55]\tvalidation_0-rmse:0.622265\n[56]\tvalidation_0-rmse:0.621668\n[57]\tvalidation_0-rmse:0.621515\n[58]\tvalidation_0-rmse:0.62147\n[59]\tvalidation_0-rmse:0.621317\n[60]\tvalidation_0-rmse:0.62119\n[61]\tvalidation_0-rmse:0.620701\n[62]\tvalidation_0-rmse:0.620731\n[63]\tvalidation_0-rmse:0.620737\n[64]\tvalidation_0-rmse:0.620449\n[65]\tvalidation_0-rmse:0.620349\n[66]\tvalidation_0-rmse:0.619975\n[67]\tvalidation_0-rmse:0.619659\n[68]\tvalidation_0-rmse:0.619545\n[69]\tvalidation_0-rmse:0.619322\n[70]\tvalidation_0-rmse:0.619293\n[71]\tvalidation_0-rmse:0.618752\n[72]\tvalidation_0-rmse:0.618843\n[73]\tvalidation_0-rmse:0.618651\n[74]\tvalidation_0-rmse:0.618492\n[75]\tvalidation_0-rmse:0.618023\n[76]\tvalidation_0-rmse:0.617687\n[77]\tvalidation_0-rmse:0.617492\n[78]\tvalidation_0-rmse:0.617392\n[79]\tvalidation_0-rmse:0.617372\n[80]\tvalidation_0-rmse:0.617398\n[81]\tvalidation_0-rmse:0.617571\n[82]\tvalidation_0-rmse:0.617296\n[83]\tvalidation_0-rmse:0.616955\n[84]\tvalidation_0-rmse:0.616561\n[85]\tvalidation_0-rmse:0.616405\n[86]\tvalidation_0-rmse:0.616325\n[87]\tvalidation_0-rmse:0.616226\n[88]\tvalidation_0-rmse:0.61602\n[89]\tvalidation_0-rmse:0.615876\n[90]\tvalidation_0-rmse:0.616279\n[91]\tvalidation_0-rmse:0.616269\n[92]\tvalidation_0-rmse:0.615728\n[93]\tvalidation_0-rmse:0.615591\n[94]\tvalidation_0-rmse:0.615444\n[95]\tvalidation_0-rmse:0.615076\n[96]\tvalidation_0-rmse:0.614916\n[97]\tvalidation_0-rmse:0.614819\n[98]\tvalidation_0-rmse:0.614784\n[99]\tvalidation_0-rmse:0.614503\nStart predicting...\n"
],
[
"print('Start predicting...')\n# predict\ny_pred2 = xgbReg.predict(X_test)\n# eval\nprint('The rmse of prediction is:', mean_squared_error(y_test, y_pred2) ** 0.5)\n\n# feature importances\nprint('Feature importances:', list(xgbReg.feature_importances_))\n\nimport numpy as np\nfor i in np.argsort(xgbReg.feature_importances_)[::-1][:10]:\n print(trainingset.columns[i])",
"Start predicting...\nThe rmse of prediction is: 0.614503423079\nFeature importances: [0.016853932, 0.015449438, 0.0066713481, 0.028441012, 0.014396068, 0.0056179776, 0.018258426, 0.024227528, 0.019662922, 0.0070224721, 0.016151685, 0.015098315, 0.0056179776, 0.0049157306, 0.013693821, 0.020365169, 0.0014044944, 0.0056179776, 0.0098314611, 0.022120787, 0.001755618, 0.0, 0.0052668541, 0.0010533708, 0.0045646066, 0.025632022, 0.0056179776, 0.0, 0.0, 0.0024578653, 0.0049157306, 0.11341292, 0.18820225, 0.04985955, 0.031601124, 0.023174157, 0.051966291, 0.047752809, 0.0038623596, 0.066011235, 0.0070224721, 0.024578651, 0.0003511236, 0.0080758426, 0.024929775, 0.036516853]\nvente_j_8_14\nvente_j_7\nweek\nvente_cat4_j_8_14\nvente_cat5_j_7\ndayofweek\narticle_nom_label\nvente_cat5_j_8_14\nt_9h_rouen\nretour_zone_1\n"
],
[
"print('The rmse of prediction is:', mean_squared_error(y_test, 0.5*(y_pred+y_pred2)) ** 0.5)",
"The rmse of prediction is: 0.603037683074\n"
],
[
"y_sub = gbm.predict(testset, num_iteration=gbm.best_iteration_)",
"_____no_output_____"
],
[
"y_sub2 = xgbReg.predict(testset)",
"_____no_output_____"
],
[
"pd.DataFrame(0.5*(y_sub+y_sub2),index=testset.index,columns=['quantite_vendue']).to_csv('sub.csv',sep=';',decimal=',')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0bf99f9c4e31e128ea66bcff563bf0794b3369b | 1,015 | ipynb | Jupyter Notebook | notebooks/Common_Structure_Analysis_Tasks.ipynb | J-81/ProteinResearch_TechManuals | a627b3084db9060ce11bbfc21ca11f3021b9649f | [
"MIT"
] | null | null | null | notebooks/Common_Structure_Analysis_Tasks.ipynb | J-81/ProteinResearch_TechManuals | a627b3084db9060ce11bbfc21ca11f3021b9649f | [
"MIT"
] | null | null | null | notebooks/Common_Structure_Analysis_Tasks.ipynb | J-81/ProteinResearch_TechManuals | a627b3084db9060ce11bbfc21ca11f3021b9649f | [
"MIT"
] | null | null | null | 22.065217 | 136 | 0.566502 | [
[
[
"# Contents\n## Working with Structures\n1. TBA : [Downloading-PDB-Structures](#\"Downloading-PDB-Structures\")\n1. TBA : [Calculating-Minimum-Alpha-Carbon-Distances](#\"Calculating-RMSD\")\n\n\n### Note: Most of this is distilled from the Biopython Documentation [link](http://biopython.org/DIST/docs/tutorial/Tutorial.html)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
d0bf9f9ab9dfa4b0d49f0fcc78f49db4d2693f83 | 19,246 | ipynb | Jupyter Notebook | final.ipynb | ia2067/TRAPP | a4a7a6b04fda3c781c10d023e1fc7408ea522194 | [
"MIT"
] | null | null | null | final.ipynb | ia2067/TRAPP | a4a7a6b04fda3c781c10d023e1fc7408ea522194 | [
"MIT"
] | null | null | null | final.ipynb | ia2067/TRAPP | a4a7a6b04fda3c781c10d023e1fc7408ea522194 | [
"MIT"
] | null | null | null | 46.375904 | 1,359 | 0.520316 | [
[
[
"## Final Output",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom statistics import mean, median, variance\nplt.rcParams['figure.figsize'] = [10, 5]\nimport pprint\nimport math\nimport tabulate\n\ndef get_overheads(file_name):\n data = []\n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n trip_duration = float(line[4])\n overhead = float(line[6])\n agent = line[7]\n preference = line[8].replace('\\r', '').replace('\\n', '')\n \n data.append(overhead)\n return data\n\ndef get_utilizations(file_name):\n utilizations = []\n line_no = 0\n \n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n line[len(line)-1] = line[len(line)-1].replace('\\r', '').replace('\\n', '')\n line_no = line_no + 1\n if line_no == 1:\n edges = line\n else: \n utilizations.append([float(u) for u in line[1:]]) \n \n streets_data = {}\n for i in range(len(edges)):\n streets_data[edges[i]] = [utilization[i] for utilization in utilizations]\n\n streets_utilizations = {}\n for key, value in streets_data.iteritems():\n streets_utilizations[key] = mean(value)\n \n return streets_utilizations\n\ndef get_wait_times(file_name):\n wait_times = []\n line_no = 0\n \n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n line[len(line)-1] = line[len(line)-1].replace('\\r', '').replace('\\n', '')\n line_no = line_no + 1\n if line_no == 1:\n lanes = line\n else: \n wait_times.append([float(u) for u in line[1:]]) \n \n wait_times_data = {}\n for i in range(len(lanes)):\n wait_times_data[lanes[i]] = [wait_time[i] for wait_time in wait_times]\n\n lane_wait_times = {}\n for key, value in wait_times_data.iteritems():\n lane_wait_times[key] = mean(value)\n \n return lane_wait_times\n\nlb_b0p0_overhead_csv = \"data/lb_b0p0overheads.csv\"\nlb_b0p0_streets_csv = \"data/lb_b0p0streets.csv\"\nlb_b0p0_waits_csv = \"data/lb_b0p0waits.csv\"\nlb_b0p9_overhead_csv = \"data/lb_b0p9overheads.csv\"\nlb_b0p9_streets_csv = \"data/lb_b0p9streets.csv\"\nlb_b0p9_waits_csv = \"data/lb_b0p9waits.csv\"\nlb_b1p0_overhead_csv = \"data/lb_b1p0overheads.csv\"\nlb_b1p0_streets_csv = \"data/lb_b1p0streets.csv\"\nlb_b1p0_waits_csv = \"data/lb_b1p0waits.csv\"\n\nlb_b0p0_overhead = get_overheads(lb_b0p0_overhead_csv)\nlb_b0p0_streets = get_utilizations(lb_b0p0_streets_csv)\nlb_b0p0_waits = get_wait_times(lb_b0p0_waits_csv)\n\nlb_b0p9_overhead = get_overheads(lb_b0p9_overhead_csv)\nlb_b0p9_streets = get_utilizations(lb_b0p9_streets_csv)\nlb_b0p9_waits = get_wait_times(lb_b0p9_waits_csv)\n\nlb_b1p0_overhead = get_overheads(lb_b1p0_overhead_csv)\nlb_b1p0_streets = get_utilizations(lb_b1p0_streets_csv)\nlb_b1p0_waits = get_wait_times(lb_b1p0_waits_csv)\n\nlb_overheads = []\nlb_overheads.append(lb_b1p0_overhead)\nlb_overheads.append(lb_b0p0_overhead)\nlb_overheads.append(lb_b0p9_overhead)\n\nlb_utilizations = []\nlb_utilizations.append(lb_b1p0_streets)\nlb_utilizations.append(lb_b0p0_streets)\nlb_utilizations.append(lb_b0p9_streets)\n\nlb_waits = []\nlb_waits.append(lb_b1p0_waits)\nlb_waits.append(lb_b0p0_waits)\nlb_waits.append(lb_b0p9_waits)\n\n\nlabels = []\nlabels.append(\"Beta\")\nlabels.append(\"Median of Trip Overhead\")\nlabels.append(\"Δ\")\nlabels.append(\"Variance of Street Utilization\")\nlabels.append(\"Δ\")\nlabels.append(\"Mean Wait time of lanes\")\nlabels.append(\"Δ\")\n\nbetas = [1.0, 0.0, 0.9]\n\noutput = []\nline = 0\nbaseline_o = float()\nbaseline_u = float()\nbaseline_w = float()\n\nfor i in range(len(betas)):\n if line 
== 0:\n baseline_o = median(lb_overheads[i])\n baseline_u = variance(lb_utilizations[i].values())\n baseline_w = mean(lb_waits[i].values())\n temp = []\n temp.append(betas[i])\n \n temp.append(median(lb_overheads[i]))\n if(line != 0):\n temp.append(float(((median(lb_overheads[i]) - baseline_o) / baseline_o) * 100))\n else:\n temp.append(\"(baseline)\")\n \n temp.append(variance(lb_utilizations[i].values()))\n if(line != 0):\n temp.append(float(((variance(lb_utilizations[i].values()) - baseline_u) / baseline_u) * 100))\n else:\n temp.append(\"(baseline)\")\n \n temp.append(mean(lb_waits[i].values()))\n if(line != 0):\n temp.append(float(((mean(lb_waits[i].values()) - baseline_w) / baseline_w) * 100))\n else:\n temp.append(\"(baseline)\")\n \n output.append(temp)\n line = line + 1\n \n# output\n \ntable = tabulate.tabulate(output, labels, 'unsafehtml')\ntable",
"_____no_output_____"
],
[
"# %matplotlib inline \nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom statistics import mean, median, variance\nplt.rcParams['figure.figsize'] = [10, 5]\nimport pprint\nimport math\nimport tabulate\n\ndef get_overheads(file_name):\n data = []\n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n trip_duration = float(line[4])\n overhead = float(line[6])\n agent = line[7]\n preference = line[8].replace('\\r', '').replace('\\n', '')\n \n data.append(overhead)\n return data\n\ndef get_utilizations(file_name):\n utilizations = []\n line_no = 0\n \n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n line[len(line)-1] = line[len(line)-1].replace('\\r', '').replace('\\n', '')\n line_no = line_no + 1\n if line_no == 1:\n edges = line\n else: \n utilizations.append([float(u) for u in line[1:]]) \n \n streets_data = {}\n for i in range(len(edges)):\n streets_data[edges[i]] = [utilization[i] for utilization in utilizations]\n\n streets_utilizations = {}\n for key, value in streets_data.iteritems():\n streets_utilizations[key] = mean(value)\n \n return streets_utilizations\n\ndef get_wait_times(file_name):\n wait_times = []\n line_no = 0\n \n with open(file_name, 'r') as results:\n for line in results:\n line = line.split(\",\")\n line[len(line)-1] = line[len(line)-1].replace('\\r', '').replace('\\n', '')\n line_no = line_no + 1\n if line_no == 1:\n lanes = line\n else: \n wait_times.append([float(u) for u in line[1:]]) \n \n wait_times_data = {}\n for i in range(len(lanes)):\n wait_times_data[lanes[i]] = [wait_time[i] for wait_time in wait_times]\n\n lane_wait_times = {}\n for key, value in wait_times_data.iteritems():\n lane_wait_times[key] = mean(value)\n \n return lane_wait_times\n\ntl_b0p0_overhead_csv = \"data/tl_b0p0overheads.csv\"\ntl_b0p0_streets_csv = \"data/tl_b0p0streets.csv\"\ntl_b0p0_waits_csv = \"data/tl_b0p0waits.csv\"\ntl_b0p9_overhead_csv = \"data/tl_b0p9overheads.csv\"\ntl_b0p9_streets_csv = \"data/tl_b0p9streets.csv\"\ntl_b0p9_waits_csv = \"data/tl_b0p9waits.csv\"\ntl_b1p0_overhead_csv = \"data/tl_b1p0overheads.csv\"\ntl_b1p0_streets_csv = \"data/tl_b1p0streets.csv\"\ntl_b1p0_waits_csv = \"data/tl_b1p0waits.csv\"\n\ntl_b0p0_overhead = get_overheads(tl_b0p0_overhead_csv)\ntl_b0p0_streets = get_utilizations(tl_b0p0_streets_csv)\ntl_b0p0_waits = get_wait_times(tl_b0p0_waits_csv)\n\ntl_b0p9_overhead = get_overheads(tl_b0p9_overhead_csv)\ntl_b0p9_streets = get_utilizations(tl_b0p9_streets_csv)\ntl_b0p9_waits = get_wait_times(tl_b0p9_waits_csv)\n\ntl_b1p0_overhead = get_overheads(tl_b1p0_overhead_csv)\ntl_b1p0_streets = get_utilizations(tl_b1p0_streets_csv)\ntl_b1p0_waits = get_wait_times(tl_b1p0_waits_csv)\n\ntl_overheads = []\ntl_overheads.append(tl_b1p0_overhead)\ntl_overheads.append(tl_b0p0_overhead)\ntl_overheads.append(tl_b0p9_overhead)\n\ntl_utilizations = []\ntl_utilizations.append(tl_b1p0_streets)\ntl_utilizations.append(tl_b0p0_streets)\ntl_utilizations.append(tl_b0p9_streets)\n\ntl_waits = []\ntl_waits.append(tl_b1p0_waits)\ntl_waits.append(tl_b0p0_waits)\ntl_waits.append(tl_b0p9_waits)\n\nlabels = []\nlabels.append(\"Beta\")\nlabels.append(\"Median of Trip Overhead\")\nlabels.append(\"Δ\")\nlabels.append(\"Variance of Street Utilization\")\nlabels.append(\"Δ\")\nlabels.append(\"Mean Wait time of lanes\")\nlabels.append(\"Δ\")\n\noutput = []\nline = 0\nbaseline_o = float()\nbaseline_u = float()\nbaseline_w = float()\nfor i in range(len(betas)):\n if line == 0:\n baseline_o = 
median(tl_overheads[i])\n baseline_u = variance(tl_utilizations[i].values())\n baseline_w = mean(tl_waits[i].values())\n temp = []\n temp.append(betas[i])\n \n temp.append(median(tl_overheads[i]))\n if(line != 0):\n temp.append(float(((median(tl_overheads[i]) - baseline_o) / baseline_o) * 100))\n else:\n temp.append(\"(baseline)\")\n \n temp.append(variance(tl_utilizations[i].values()))\n if(line != 0):\n temp.append(float(((variance(tl_utilizations[i].values()) - baseline_u) / baseline_u) * 100))\n else:\n temp.append(\"(baseline)\")\n \n temp.append(mean(tl_waits[i].values()))\n if(line != 0):\n temp.append(float(((mean(tl_waits[i].values()) - baseline_w) / baseline_w) * 100))\n else:\n temp.append(\"(baseline)\")\n \n output.append(temp)\n line = line + 1\n \n# output\n \ntable = tabulate.tabulate(output, labels, 'unsafehtml')\ntable",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
]
] |
d0bfa63b29242b4e9d4f203ebfe1b8489235c7b8 | 35,950 | ipynb | Jupyter Notebook | 07c_ML_titanic.ipynb | stat4decision/python-data-lbp-fev20 | db84b0be4eb8e23bb6439564227b7c4adfb83c26 | [
"MIT"
] | null | null | null | 07c_ML_titanic.ipynb | stat4decision/python-data-lbp-fev20 | db84b0be4eb8e23bb6439564227b7c4adfb83c26 | [
"MIT"
] | null | null | null | 07c_ML_titanic.ipynb | stat4decision/python-data-lbp-fev20 | db84b0be4eb8e23bb6439564227b7c4adfb83c26 | [
"MIT"
] | null | null | null | 31.646127 | 426 | 0.399388 | [
[
[
"## Machine learning sur le titanic",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"On importe les données",
"_____no_output_____"
]
],
[
[
"titanic = pd.read_csv(\"./data/titanic_train.csv\")",
"_____no_output_____"
],
[
"titanic.head()",
"_____no_output_____"
]
],
[
[
"On sélectionne les colonnes de x",
"_____no_output_____"
]
],
[
[
"x = titanic.drop([\"PassengerId\",\"Survived\",\"Name\",\"Ticket\"],axis=1)",
"_____no_output_____"
],
[
"y = titanic[\"Survived\"]",
"_____no_output_____"
]
],
[
[
"On simplifie la colonne `Cabin`",
"_____no_output_____"
]
],
[
[
"x[\"Cabin\"]=x[\"Cabin\"].str[0].fillna(\"No\").replace({\"T\":\"No\",\"G\":\"No\"})#.replace(\"G\",\"No\")",
"_____no_output_____"
],
[
"# on transforme toutes colonnes quali en binaires\nx = pd.get_dummies(x,columns=[\"Sex\",\"Cabin\",\"Embarked\"])",
"_____no_output_____"
],
[
"def transfo(x):\n \"\"\" Cette fonction permet de transformer en binaires toutes les colonnes\n objet d'un DataFrame en utilisant get_dummies()\n \"\"\"\n list_col_quali =[]\n for col in x.columns:\n if x[col].dtype == object:\n list_col_quali.append(col)\n print(list_col_quali) \n return pd.get_dummies(x,columns=list_col_quali)",
"_____no_output_____"
],
[
"x = transfo(x)",
"[]\n"
],
[
"# on remplace par la médiane\nx[\"Age\"]=x[\"Age\"].fillna(x[\"Age\"].median())",
"_____no_output_____"
]
],
[
[
"# Séparation apprentissage / test",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split",
"_____no_output_____"
]
],
[
[
"On veut découper nos données en train / test",
"_____no_output_____"
]
],
[
[
"x_train, x_test, y_train, y_test = train_test_split(x,y,test_size = 0.3)",
"_____no_output_____"
],
[
"print(x_train.shape, x_test.shape)",
"(623, 17) (268, 17)\n"
]
],
[
[
"On va construire et estimer des modèles de ML",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.metrics import confusion_matrix, roc_auc_score, accuracy_score",
"_____no_output_____"
],
[
"dico_modeles = dict(logit=LogisticRegression(),\n rf=RandomForestClassifier(n_estimators=1000),\n gbm=GradientBoostingClassifier(),\n knn = KNeighborsClassifier(),\n rn = MLPClassifier()\n )",
"_____no_output_____"
],
[
"for modele in dico_modeles.keys():\n dico_modeles[modele].fit(x_train,y_train)\n y_predict = dico_modeles[modele].predict(x_test)\n y_predict_proba = dico_modeles[modele].predict_proba(x_test)\n print(\"Matrice de confusion pour modèle {} \".format(modele), confusion_matrix(y_test,y_predict),sep=\"\\n\")\n print(\"Auc pour modèle {} \".format(modele) ,roc_auc_score(y_test,y_predict_proba[:,1] ))\n print(\"Accuracy pour modèle {} \".format(modele), accuracy_score(y_test,y_predict))",
"C:\\Users\\s4d-asus-14\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n"
],
[
"pd.DataFrame(dico_modeles['rf'].feature_importances_,index=x.columns,\n columns=[\"importance\"]).sort_values(\"importance\",ascending = False)",
"_____no_output_____"
]
],
[
[
"On va rechercher les hyper-paramètres du modèle en utilisant une grille",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"# on construit la grille de paramètres\nparam = dict(n_estimators=[10,100,1000], max_depth=[3,5,7,9])\n\n# on crée un objet de la classe GridSearchCV\nmodele_grid= GridSearchCV(RandomForestClassifier(),param,scoring=\"roc_auc\",cv=4)",
"_____no_output_____"
],
[
"modele_grid.fit(x_train,y_train)",
"_____no_output_____"
],
[
"modele_grid.best_score_",
"_____no_output_____"
],
[
"modele_grid.best_params_",
"_____no_output_____"
],
[
"pd.DataFrame(modele_grid.cv_results_)",
"_____no_output_____"
]
],
[
[
"Si on veut exporter un modèle, on peut utiliser :",
"_____no_output_____"
]
],
[
[
"from sklearn.externals import joblib",
"C:\\Users\\s4d-asus-14\\Anaconda3\\lib\\site-packages\\sklearn\\externals\\joblib\\__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.\n warnings.warn(msg, category=FutureWarning)\n"
],
[
"joblib.dump(modele_grid,\"modele_grid.pkl\")",
"_____no_output_____"
]
],
[
[
"## Construction d'un pipeline",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import Pipeline\nfrom sklearn.svm import SVC\nfrom sklearn.decomposition import PCA",
"_____no_output_____"
],
[
"# création d'un objet de la classe Pipeline\nmon_pipe = Pipeline(steps=[(\"acp\",PCA(n_components=4)),(\"svm\",SVC())])",
"_____no_output_____"
],
[
"mon_pipe.fit(x_train,y_train)",
"_____no_output_____"
],
[
"confusion_matrix(y_test,mon_pipe.predict(x_test))",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0bfa70322adc1ea05e3cc93b236fb3862bcd60a | 638,197 | ipynb | Jupyter Notebook | Project3- Power Plant Data Analysis.ipynb | Shruti0630/Shruti_PythonProject3 | 3f9d63c5eaa902adfa19f4641541692e30709289 | [
"Unlicense"
] | null | null | null | Project3- Power Plant Data Analysis.ipynb | Shruti0630/Shruti_PythonProject3 | 3f9d63c5eaa902adfa19f4641541692e30709289 | [
"Unlicense"
] | null | null | null | Project3- Power Plant Data Analysis.ipynb | Shruti0630/Shruti_PythonProject3 | 3f9d63c5eaa902adfa19f4641541692e30709289 | [
"Unlicense"
] | null | null | null | 530.504572 | 419,516 | 0.94392 | [
[
[
"# Imporing Libraries and Dataset",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport warnings\nwarnings.simplefilter(action=\"ignore\", category=FutureWarning)",
"_____no_output_____"
],
[
"data_train= pd.read_csv(r\"C:\\Users\\shruti\\Desktop\\Decodr Session Recording\\Project\\Decodr Project\\Power Plant Data Analysis\\train.csv\", delimiter=\",\")",
"_____no_output_____"
],
[
"data_train.head()",
"_____no_output_____"
],
[
"data_train.shape",
"_____no_output_____"
],
[
"y_train= data_train[\" EP\"]",
"_____no_output_____"
],
[
"del data_train[\" EP\"]",
"_____no_output_____"
],
[
"data_train.head()",
"_____no_output_____"
],
[
"y_train.head()",
"_____no_output_____"
]
],
[
[
"# Structure of Dataset",
"_____no_output_____"
]
],
[
[
"data_train.describe()",
"_____no_output_____"
],
[
"y_train.shape",
"_____no_output_____"
]
],
[
[
"# Checking for Null values",
"_____no_output_____"
]
],
[
[
"data_train.isnull().sum()",
"_____no_output_____"
],
[
"data_train.isna().sum()",
"_____no_output_____"
],
[
"y_train.isnull().sum()",
"_____no_output_____"
],
[
"y_train.isna().sum()",
"_____no_output_____"
]
],
[
[
"# Exploratory Data Analysis",
"_____no_output_____"
]
],
[
[
"# Statistics\n\nmin_EP= y_train.min()\nmax_EP= y_train.max()\nmean_EP= y_train.mean()\nmedian_EP= y_train.median()\nstd_EP= y_train.std()",
"_____no_output_____"
],
[
"# Quartile calculator\n\nfirst_quar= np.percentile(y_train, 25)\nthird_quar= np.percentile(y_train, 75)\ninter_quar= third_quar - first_quar",
"_____no_output_____"
],
[
"# Print Statistics\n\nprint(\"Statistics for combined cycle Power Plant:\\n\")\nprint(\"Minimum EP:\", min_EP)\nprint(\"Maximum EP:\", max_EP)\nprint(\"Mean EP:\", mean_EP)\nprint(\"Median EP:\", median_EP)\nprint(\"Standard Deviation of EP:\", std_EP)\nprint(\"First Quartile of EP:\", first_quar)\nprint(\"Third Quartile of EP:\", third_quar)\nprint(\"InterQuartile of EP:\",inter_quar)",
"Statistics for combined cycle Power Plant:\n\nMinimum EP: 420.26\nMaximum EP: 495.76\nMean EP: 454.43129319955347\nMedian EP: 451.74\nStandard Deviation of EP: 17.134571175425727\nFirst Quartile of EP: 439.7375\nThird Quartile of EP: 468.6675\nInterQuartile of EP: 28.930000000000007\n"
]
],
[
[
"# Plotting",
"_____no_output_____"
]
],
[
[
"sns.set(rc={\"figure.figsize\":(5,5)})\nsns.distplot(data_train, bins=30, color= \"orange\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Correlation",
"_____no_output_____"
]
],
[
[
"corr_df=data_train.copy()\ncorr_df[\"EP\"]=y_train\ncorr_df.head()",
"_____no_output_____"
],
[
"sns.set(style=\"ticks\", color_codes=True)\nplt.figure(figsize=(12,12))\nsns.heatmap(corr_df.astype(\"float32\").corr(), linewidths=0.1, square=True, annot=True)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Features Plot",
"_____no_output_____"
]
],
[
[
"# Print all Features\n\ndata_train.columns",
"_____no_output_____"
],
[
"plt.plot(corr_df[\"# T\"], corr_df[\"EP\"], \"+\", color= \"green\")\nplt.plot(np.unique(corr_df[\"# T\"]), np.poly1d(np.polyfit(corr_df[\"# T\"], corr_df[\"EP\"], 1))\n (np.unique(corr_df[\"# T\"])), color=\"yellow\")\nplt.xlabel(\"Temperature\", fontsize=12)\nplt.ylabel(\"EP\", fontsize=12)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(corr_df[\" V\"], corr_df[\"EP\"], \"o\", color= \"pink\")\nplt.plot(np.unique(corr_df[\" V\"]), np.poly1d(np.polyfit(corr_df[\" V\"], corr_df[\"EP\"], 1))\n (np.unique(corr_df[\" V\"])), color=\"blue\")\nplt.xlabel(\"Exhaust Vaccum\", fontsize=12)\nplt.ylabel(\"EP\", fontsize=12)\n\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(corr_df[\" AP\"], corr_df[\"EP\"], \"o\", color= \"orange\")\nplt.plot(np.unique(corr_df[\" AP\"]), np.poly1d(np.polyfit(corr_df[\" AP\"], corr_df[\"EP\"], 1))\n (np.unique(corr_df[\" AP\"])), color=\"green\")\nplt.xlabel(\"Ambient Pressure\", fontsize=12)\nplt.ylabel(\"EP\", fontsize=12)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(corr_df[\" RH\"], corr_df[\"EP\"], \"o\", color= \"seagreen\")\nplt.plot(np.unique(corr_df[\" RH\"]), np.poly1d(np.polyfit(corr_df[\" RH\"], corr_df[\"EP\"], 1))\n (np.unique(corr_df[\" RH\"])), color=\"blue\")\nplt.xlabel(\"Relative Humidity\", fontsize=12)\nplt.ylabel(\"EP\", fontsize=12)\n\nplt.show()",
"_____no_output_____"
],
[
"fig, ax=plt.subplots(ncols=4, nrows=1, figsize=(20,10))\nindex=0\nax=ax.flatten()\nfor i,v in data_train.items():\n sns.boxplot(y=i, data=data_train, ax=ax[index], color= \"orangered\")\n index+=1\nplt.tight_layout(pad=0.4, w_pad=0.5, h_pad=0.5)",
"_____no_output_____"
],
[
"sns.set(style=\"whitegrid\")\nfeatures_plot=data_train.columns\n\nsns.pairplot(data_train[features_plot]);\nplt.tight_layout\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Feature Scaling",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nscaler= StandardScaler()\nscaler.fit_transform(data_train)",
"_____no_output_____"
]
],
[
[
"# Gradient Descent Model",
"_____no_output_____"
]
],
[
[
"x_train= data_train",
"_____no_output_____"
],
[
"x_train.shape, y_train.shape",
"_____no_output_____"
],
[
"from sklearn.ensemble import GradientBoostingRegressor\n\ngbr=GradientBoostingRegressor(learning_rate=1.9, n_estimators=2000)\ngbr",
"_____no_output_____"
],
[
"gbr.fit(x_train, y_train)",
"_____no_output_____"
],
[
"x_test= np.genfromtxt(r\"C:\\Users\\shruti\\Desktop\\Decodr Session Recording\\Project\\Decodr Project\\Power Plant Data Analysis\\test.csv\", delimiter=\",\")\ny_train.ravel(order=\"A\")\ny_pred=gbr.predict(x_test)",
"_____no_output_____"
],
[
"y_pred",
"_____no_output_____"
]
],
[
[
"# Model Evaluation",
"_____no_output_____"
]
],
[
[
"gbr.score(x_train, y_train)",
"_____no_output_____"
]
],
[
[
"# Saving the Prediction",
"_____no_output_____"
]
],
[
[
"np.savetxt(\"Predict_csv\", y_pred, fmt=\"%.5f\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bfc4974bd6cd0ac48aeebbf128d3c09df7a2f7 | 19,809 | ipynb | Jupyter Notebook | 99 Cache/lesson1-1 NumPy.ipynb | Leon-Hou/PythonLearningNotebook | 4eb0ab11e20ab9869489853c8ab5998c1b8b5ef3 | [
"MIT"
] | null | null | null | 99 Cache/lesson1-1 NumPy.ipynb | Leon-Hou/PythonLearningNotebook | 4eb0ab11e20ab9869489853c8ab5998c1b8b5ef3 | [
"MIT"
] | null | null | null | 99 Cache/lesson1-1 NumPy.ipynb | Leon-Hou/PythonLearningNotebook | 4eb0ab11e20ab9869489853c8ab5998c1b8b5ef3 | [
"MIT"
] | null | null | null | 17.180399 | 172 | 0.418295 | [
[
[
"# NumPy - 科学计算",
"_____no_output_____"
],
[
"## 一、简介\n\n NumPy是Python语言的一个扩充程序库。支持高级大量的维度数组与矩阵运算,此外也针对数组运算提供大量的数学函数库。Numpy内部解除了[CPython的GIL](https://www.cnblogs.com/wj-1314/p/9056555.html)(全局解释器锁),运行效率极好,是大量机器学习框架的基础库!\nNumPy的全名为Numeric Python,是一个开源的Python科学计算库,它包括:\n- 一个强大的N维数组对象ndrray;\n- 比较成熟的(广播)函数库;\n- 用于整合C/C++和Fortran代码的工具包;\n- 实用的线性代数、傅里叶变换和随机数生成函数 \n\nNumPy的优点:\n- 对于同样的数值计算任务,使用NumPy要比直接编写Python代码便捷得多;\n- NumPy中的数组的存储效率和输入输出性能均远远优于Python中等价的基本数据结构,且其能够提升的性能是与数组中的元素成比例的;\n- NumPy的大部分代码都是用C语言写的,其底层算法在设计时就有着优异的性能,这使得NumPy比纯Python代码高效得多 \n\nNumPy的缺点: \n- 由于NumPy使用内存映射文件以达到最优的数据读写性能,而内存的大小限制了其对TB级大文件的处理; \n- 此外,NumPy数组的通用性不及Python提供的list容器。 \n因此,在科学计算之外的领域,NumPy的优势也就不那么明显。",
"_____no_output_____"
],
[
"以下内容来自于:https://www.bilibili.com/video/av8727995?from=search&seid=12457925641538891537",
"_____no_output_____"
],
[
"## 二、基本使用\n### 2.1 ndarray属性",
"_____no_output_____"
]
],
[
[
"# 调用\nimport numpy as np",
"_____no_output_____"
],
[
"# 将列表转换为矩阵\na = [[1,2,3],[2,3,4]]\nprint('list:\\n',a)\narray = np.array(a)\nprint('array:\\n',array)",
"list:\n [[1, 2, 3], [2, 3, 4]]\narray:\n [[1 2 3]\n [2 3 4]]\n"
],
[
"# 获取矩阵基本属性\nprint('num of dim:',array.ndim) # 秩,矩阵维度\nprint('shape:',array.shape) # 矩阵形状\nprint('size:',array.size) # 矩阵元素数量",
"num of dim: 2\nshape: (2, 3)\nsize: 6\n"
]
],
[
[
"### 2.2 矩阵生成\n(1)指定元素格式。 \nNumPy 支持比 Python 更多种类的数值类型,为了区别于 Python 原生的数据类型,bool、int、float、complex、str 等类型名称末尾都加了_,详细的数据类型列表可在以下网址查看:https://www.cnblogs.com/gl1573/p/10549547.html。",
"_____no_output_____"
]
],
[
[
"# 将列表转化为指定格式的矩阵\na = np.array([2,3,4],dtype=np.float32)\nprint(a.dtype)",
"float32\n"
]
],
[
[
"(2)全零矩阵",
"_____no_output_____"
]
],
[
[
"zero = np.zeros((3,4))\nprint(zero)",
"[[0. 0. 0. 0.]\n [0. 0. 0. 0.]\n [0. 0. 0. 0.]]\n"
]
],
[
[
"(3) 全1矩阵",
"_____no_output_____"
]
],
[
[
"one = np.ones((3,4),dtype=np.int32)\nprint(one)",
"[[1 1 1 1]\n [1 1 1 1]\n [1 1 1 1]]\n"
]
],
[
[
"(4) 全空矩阵",
"_____no_output_____"
]
],
[
[
"empty = np.empty((3,4))\nprint(empty)",
"[[0. 0. 0. 0.]\n [0. 0. 0. 0.]\n [0. 0. 0. 0.]]\n"
]
],
[
[
"(5) 有序数列",
"_____no_output_____"
]
],
[
[
"range = np.arange(10,20,2) # [10,20)\nprint(range)",
"[10 12 14 16 18]\n"
]
],
[
[
"(6) 指定shape的矩阵",
"_____no_output_____"
]
],
[
[
"shape = np.arange(12).reshape((3,4))\nprint(shape)",
"[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n"
]
],
[
[
"(7) 线段",
"_____no_output_____"
]
],
[
[
"line = np.linspace(10,20,5)\nprint(line)",
"[10. 12.5 15. 17.5 20. ]\n"
]
],
[
[
"### 2.3 按位运算",
"_____no_output_____"
],
[
"(1) 加减乘除 \n直接加减乘除是对矩阵各元素同位运算:",
"_____no_output_____"
]
],
[
[
"a1 = np.array([5,8,11,15,20])\na2 = np.arange(1,6)\nb1 = a1 + a2\nprint(b1)\nb2 = a1 - a2\nprint(b2)\nb3 = a1 * a2\nprint(b3)\nb4 = a1 / a2\nprint(b4)",
"[ 6 10 14 19 25]\n[ 4 6 8 11 15]\n[ 5 16 33 60 100]\n[5. 4. 3.66666667 3.75 4. ]\n"
]
],
[
[
"(2) 幂次方 (双星号)",
"_____no_output_____"
]
],
[
[
"b5 = a2**3\nprint(b5)",
"[ 1 8 27 64 125]\n"
]
],
[
[
"(4) sin/cos",
"_____no_output_____"
]
],
[
[
"b6 = 10*np.sin(a2)\nprint(b6)",
"[ 8.41470985 9.09297427 1.41120008 -7.56802495 -9.58924275]\n"
]
],
[
[
"(5) 判断",
"_____no_output_____"
]
],
[
[
"print(a2<5)",
"[ True True True True False]\n"
]
],
[
[
"### 2.4 矩阵运算",
"_____no_output_____"
]
],
[
[
"c1 = np.array([[1,2],[1,2]])\nc2 = np.arange(4).reshape((2,2))\nprint(c1)\nprint(c2)",
"[[1 2]\n [1 2]]\n[[0 1]\n [2 3]]\n"
]
],
[
[
"(1) 乘",
"_____no_output_____"
]
],
[
[
"c_dot = np.dot(c1,c2) \nc_dot2 = c1.dot(c2)\nprint(c_dot)\nprint(c_dot2)",
"[[4 7]\n [4 7]]\n[[4 7]\n [4 7]]\n"
]
],
[
[
"(2) 矩阵求和/最大值/最小值/平均值",
"_____no_output_____"
]
],
[
[
"c3 = np.random.random((2,4))\nprint(c3)",
"[[0.33013443 0.35689548 0.97856181 0.40584133]\n [0.47531833 0.22821532 0.37049442 0.66761687]]\n"
],
[
"print(np.sum(c3))\nprint(np.sum(c3,axis=1)) # 行内",
"3.8130779886731725\n[2.07143305 1.74164494]\n"
],
[
"print(np.min(c3))\nprint(np.sum(c3,axis=0)) # 列内",
"0.22821531752731306\n[0.80545276 0.5851108 1.34905622 1.0734582 ]\n"
],
[
"print(np.max(c3))",
"0.9785618051493579\n"
],
[
"print(np.mean(c3))\nprint(c3.mean())",
"0.47663474858414656\n0.47663474858414656\n"
]
],
[
[
"(3) 元素索引",
"_____no_output_____"
]
],
[
[
"print(np.argmin(c3))",
"5\n"
],
[
"print(np.argmax(c3))",
"2\n"
]
],
[
[
"(4) 元素累加",
"_____no_output_____"
]
],
[
[
"print(np.cumsum(c3))",
"[0.33013443 0.68702991 1.66559172 2.07143305 2.54675138 2.7749667\n 3.14546112 3.81307799]\n"
]
],
[
[
"(5) 矩阵转置",
"_____no_output_____"
]
],
[
[
"print(c3.T)",
"[[0.33013443 0.47531833]\n [0.35689548 0.22821532]\n [0.97856181 0.37049442]\n [0.40584133 0.66761687]]\n"
]
],
[
[
"(6) 截止(限制矩阵的最大值和最小值)",
"_____no_output_____"
]
],
[
[
"print(c3.clip(0.25,0.75))",
"[[0.33013443 0.35689548 0.75 0.40584133]\n [0.47531833 0.25 0.37049442 0.66761687]]\n"
]
],
[
[
"## 三 进阶技巧\n### 3.1 索引 \n索引都是从0开始",
"_____no_output_____"
]
],
[
[
"d1 = np.arange(3,15).reshape((3,4))\nprint(d1)",
"[[ 3 4 5 6]\n [ 7 8 9 10]\n [11 12 13 14]]\n"
],
[
"print(d1[1]) # 整行索引\nprint(d1[1,:]) # 整行索引",
"[ 7 8 9 10]\n[ 7 8 9 10]\n"
],
[
"print(d1[1][3]) # \nprint(d1[1,3]) # 使用这种方法吧",
"10\n10\n"
],
[
"print(d1[:,2]) # 整列索引",
"[ 5 9 13]\n"
],
[
"print(d1[0:2,2]) # ",
"[5 9]\n"
],
[
"# 按行打印\nfor row in d1:\n print(row)",
"[3 4 5 6]\n[ 7 8 9 10]\n[11 12 13 14]\n"
],
[
"# 按列打印\nfor colum in d1.T:\n print(colum)",
"[ 3 7 11]\n[ 4 8 12]\n[ 5 9 13]\n[ 6 10 14]\n"
],
[
"# 按索引打印\nfor items in d1.flat: #flat将矩阵展开为行矩阵\n print(items)",
"3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n"
]
],
[
[
"### 3.2 合并",
"_____no_output_____"
]
],
[
[
"e1 = np.array([1,1,1])\ne2 = np.array([2,2,2])",
"_____no_output_____"
],
[
"# 上下合并\nf1 = np.vstack((e1,e2))\nprint(f1)",
"[[1 1 1]\n [2 2 2]]\n"
],
[
"# 左右合并\nf2 = np.hstack((e1,e2))\nprint(f2)",
"[1 1 1 2 2 2]\n"
],
[
"# 行向量转列向量\nprint(e1[:,np.newaxis])\nprint(e1.T) # .T适用于矩阵,不适用于向量",
"[[1]\n [1]\n [1]]\n[1 1 1]\n"
],
[
"# concatenate行列合并0=列,1=行\nf3 = np.concatenate((e1,e2),axis=0)\nprint(f3)",
"[1 1 1 2 2 2]\n"
]
],
[
[
"### 3.3 分割",
"_____no_output_____"
],
[
"(1) 等量分割",
"_____no_output_____"
]
],
[
[
"g1 = np.arange(12).reshape((3,4))\nprint(g1)\nprint(np.split(g1,2,axis=1))\nprint(np.split(g1,3,axis=0))",
"[[ 0 1 2 3]\n [ 4 5 6 7]\n [ 8 9 10 11]]\n[array([[0, 1],\n [4, 5],\n [8, 9]]), array([[ 2, 3],\n [ 6, 7],\n [10, 11]])]\n[array([[0, 1, 2, 3]]), array([[4, 5, 6, 7]]), array([[ 8, 9, 10, 11]])]\n"
],
[
"print(np.hsplit(g1,2))\nprint(np.vsplit(g1,3))",
"[array([[0, 1],\n [4, 5],\n [8, 9]]), array([[ 2, 3],\n [ 6, 7],\n [10, 11]])]\n[array([[0, 1, 2, 3]]), array([[4, 5, 6, 7]]), array([[ 8, 9, 10, 11]])]\n"
]
],
[
[
"(2) 不等量分割",
"_____no_output_____"
]
],
[
[
"print(np.array_split(g1,2,axis=0))",
"[array([[0, 1, 2, 3],\n [4, 5, 6, 7]]), array([[ 8, 9, 10, 11]])]\n"
]
],
[
[
"### 3.4 复制",
"_____no_output_____"
],
[
"(1) 浅复制",
"_____no_output_____"
]
],
[
[
"a = np.arange(4)\nb = a\nc = a\nd = b\na[0] = 15\nd[3] = 20\nprint(a)\nprint(b)\nprint(c)\nprint(d)\nprint(b is a)",
"[15 1 2 20]\n[15 1 2 20]\n[15 1 2 20]\n[15 1 2 20]\nTrue\n"
]
],
[
[
"(2) 深复制(deep copy)",
"_____no_output_____"
]
],
[
[
"a = np.arange(6)\nb = np.copy(a)\na[0]=666\nprint(a)\nprint(b)",
"[666 1 2 3 4 5]\n[0 1 2 3 4 5]\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bfcddd77d2e8b2228afc20245b919f73516766 | 12,914 | ipynb | Jupyter Notebook | sagemaker/1_fill_missing_values.ipynb | awslabs/filling-in-missing-values-in-tabular-records | b229fccd136b9e87961ffd7e94a8e2497ebc4c7b | [
"Apache-2.0"
] | 1 | 2022-01-21T15:26:40.000Z | 2022-01-21T15:26:40.000Z | sagemaker/1_fill_missing_values.ipynb | awslabs/filling-in-missing-values-in-tabular-records | b229fccd136b9e87961ffd7e94a8e2497ebc4c7b | [
"Apache-2.0"
] | null | null | null | sagemaker/1_fill_missing_values.ipynb | awslabs/filling-in-missing-values-in-tabular-records | b229fccd136b9e87961ffd7e94a8e2497ebc4c7b | [
"Apache-2.0"
] | null | null | null | 31.043269 | 304 | 0.574028 | [
[
[
"# Filling in Missing Values in Tabular Records\n\nYou can select Run->Run All Cells from the menu to run all cells in Studio (or Cell->Run All in a SageMaker Notebook Instance).",
"_____no_output_____"
],
[
"## Introduction\n\nMissing data values are common due to omissions during manual entry or optional input. Simple data imputation such as using the median/mode/average may not be satisfactory. When there are many features, we can sometimes train a model to use the existing features to predict the desired feature. \n\nThis solution provides and end-to-end example that takes a tabular data set with a target column, trains and deploys an endpoint, and calls that endpoint to make predictions.\n\n## Architecture\nAs part of the solution, the following services are used:\n\n* Amazon S3: Used to store datasets.\n* Amazon SageMaker Notebook: Used to preprocess and process the data, and to train the deep learning model.\n* Amazon SageMaker Endpoint: Used to deploy the trained model.\n\n\n\n## Data Set\nWe will use public data from the City of Cincinnati Public Services describing Fleet Inventory. We will train a model to predict missing values of a 'target' column based on the other columns.\n\nPlease see.\nhttps://www.cincinnati-oh.gov/public-services/about-public-services/fleet-services/\nhttps://data.cincinnati-oh.gov/Thriving-Neighborhoods/Fleet-Inventory/m8ba-xmjz\n\n## Acknowledgements\nAutoPilot code based on\nhttps://github.com/aws/amazon-sagemaker-examples/blob/master/autopilot/sagemaker_autopilot_direct_marketing.ipynb",
"_____no_output_____"
]
],
[
[
"# Replace these with your train/test CSV data and target columns. \n# If left empty, the sample data set will be used.\ndata_location = '' # Ex. s3://your_bucket/your_file.csv\ntarget = '' # Specify target column name\n\nif data_location == '':\n # Use sample dataset.\n dataset_file = 'data/dataset.csv'\n target = 'ASSET_TYPE'\nelse:\n # Download custom dataset.\n !aws s3 cp $data_location data/custom_dataset.csv\n print('Downloaded custom dataset')\n dataset_file = 'data/custom_dataset.csv'",
"_____no_output_____"
]
],
[
[
"## Inspect the Data",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndata = pd.read_csv(dataset_file)\ndata",
"_____no_output_____"
]
],
[
[
"## Preprocess Data\nSome of the entries in the target column are null. We will remove those entries for training/testing.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndef remove_null_rows(data, target):\n idx = data[target].notna()\n return data.loc[idx]\n\ndef split_train_test(data, p=.9):\n idx = np.random.choice([True, False], replace = True, size = len(data), p=[.8, .2])\n train_df = data.iloc[idx]\n test_df = data.iloc[[not i for i in idx]]\n return train_df, test_df",
"_____no_output_____"
],
[
"non_null_data = remove_null_rows(data, target)\ntrain, test = split_train_test(non_null_data)\n\ntrain_file = 'data/train.csv'\ntest_file = 'data/test.csv'\n\ntrain.to_csv(train_file, index=False, header=True)\ntest.to_csv(test_file, index=False, header=True)",
"_____no_output_____"
]
],
[
[
"## Store Processed Data on S3\n\nNow that we have our data in files, we store this data to S3 so we can use SageMaker AutoPilot.",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker.s3 import S3Uploader\nimport json\n\nwith open('stack_outputs.json') as f:\n sagemaker_configs = json.load(f)\n \ns3_bucket = sagemaker_configs['S3Bucket']\n\ntrain_data_s3_path = S3Uploader.upload(train_file, 's3://{}/data'.format(s3_bucket))\nprint('Train data uploaded to: ' + train_data_s3_path)\ntest_data_s3_path = S3Uploader.upload(test_file, 's3://{}/data'.format(s3_bucket))\nprint('Test data uploaded to: ' + test_data_s3_path)",
"_____no_output_____"
]
],
[
[
"### Configure AutoPilot\n\nFor the purposes of a demo, we will use only 2 candidates. Remove this parameter to run AutoPilot with its defaults (note: for this data set a full run will take ~ 4 several hours.)",
"_____no_output_____"
]
],
[
[
"input_data_config = [{\n 'DataSource': {\n 'S3DataSource': {\n 'S3DataType': 'S3Prefix',\n 'S3Uri': 's3://{}/data/train'.format(s3_bucket)\n }\n },\n 'TargetAttributeName': target\n}]\n\noutput_data_config = {\n 'S3OutputPath': 's3://{}/data/output'.format(s3_bucket)\n }\nautoml_job_config ={\n 'CompletionCriteria': {\n 'MaxCandidates': 2 # Remove this option for the default run.\n }\n}\n",
"_____no_output_____"
],
[
"import boto3 \nfrom time import gmtime, strftime, sleep\n\nrole = sagemaker_configs['SageMakerIamRole']\n\nsolution_prefix = sagemaker_configs['SolutionPrefix']\n\nauto_ml_job_name = solution_prefix + strftime('%d-%H-%M-%S', gmtime())\nprint('AutoMLJobName: ' + auto_ml_job_name)\n\nsm = boto3.Session().client(service_name='sagemaker',region_name='us-west-2')\nsm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,\n InputDataConfig=input_data_config,\n OutputDataConfig=output_data_config,\n AutoMLJobConfig=automl_job_config,\n RoleArn=role)",
"_____no_output_____"
],
[
"# This will take approximately 20 minutes to run.\nsecondary_status = ''\nwhile True:\n describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)\n job_run_status = describe_response['AutoMLJobStatus']\n \n if job_run_status in ('Failed', 'Completed', 'Stopped'):\n print('\\n{}: {}'.format(describe_response['AutoMLJobSecondaryStatus'], job_run_status))\n break\n\n if secondary_status == describe_response['AutoMLJobSecondaryStatus']:\n print('.', end='') \n else:\n secondary_status = describe_response['AutoMLJobSecondaryStatus']\n print('\\n{}: {}'.format(secondary_status, job_run_status), end='')\n \n sleep(60)",
"_____no_output_____"
],
[
"best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']\nbest_candidate_name = best_candidate['CandidateName']\nprint(best_candidate)\nprint('\\n')\nprint(\"CandidateName: \" + best_candidate_name)\nprint(\"FinalAutoMLJobObjectiveMetricName: \" + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])\nprint(\"FinalAutoMLJobObjectiveMetricValue: \" + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))",
"_____no_output_____"
],
[
"model_name = sagemaker_configs['SageMakerModelName']\n\nmodel = sm.create_model(Containers=best_candidate['InferenceContainers'],\n ModelName=model_name,\n ExecutionRoleArn=role)\n",
"_____no_output_____"
]
],
[
[
"## Deploy and Endpoint",
"_____no_output_____"
]
],
[
[
"print(\"Building endpoint with model {}\".format(model))",
"_____no_output_____"
],
[
"endpoint_config_name = sagemaker_configs['SageMakerEndpointName'] + '-config'\ncreate_endpoint_config_response = sm.create_endpoint_config(\n EndpointConfigName = endpoint_config_name,\n ProductionVariants=[{\n 'InstanceType':'ml.m5.xlarge',\n 'InitialVariantWeight':1,\n 'InitialInstanceCount':1,\n 'ModelName':model_name,\n 'VariantName':'AllTraffic'}])",
"_____no_output_____"
],
[
"endpoint_name = sagemaker_configs['SageMakerEndpointName']\ncreate_endpoint_response = sm.create_endpoint(\n EndpointName=endpoint_name,\n EndpointConfigName=endpoint_config_name,\n )\nprint(create_endpoint_response['EndpointArn'])",
"_____no_output_____"
],
[
"resp = sm.describe_endpoint(EndpointName=endpoint_name)\nstatus = resp['EndpointStatus']\nprint(\"Status: \" + status)",
"_____no_output_____"
],
[
"import time\n\nprint('Creating Endpoint... this may take several minutes')\nwhile status=='Creating':\n resp = sm.describe_endpoint(EndpointName=endpoint_name)\n status = resp['EndpointStatus']\n print('.', end='')\n time.sleep(15) \nprint(\"\\nStatus: \" + status)",
"_____no_output_____"
]
],
[
[
"## Test the Endpoint",
"_____no_output_____"
]
],
[
[
"runtime_client = boto3.client('runtime.sagemaker')\n\ntest_input = test.drop(columns=[target])[0:10]\ntest_input_csv = test_input.to_csv(index=False, header=False).split('\\n')\ntest_labels = test[target][0:10]\n\n\n\nfor i, (single_test, single_label) in enumerate(zip(test_input_csv, test_labels)):\n print('=== Test {} ===\\nInput: {}\\n'.format(i, single_test)) \n response = runtime_client.invoke_endpoint(EndpointName = endpoint_name,\n ContentType = 'text/csv',\n Body = single_test)\n result = response['Body'].read().decode('ascii')\n print('Predicted label is {}\\nCorrect label is {}\\n'.format(result.rstrip(), single_label.rstrip())) ",
"_____no_output_____"
]
],
[
[
"## Clean up",
"_____no_output_____"
],
[
"Stack deletion will clean up all created resources including S3 buckets, Endpoint configurations, Endpoints and Models.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0bfee9044ef78ec5551f7621e08a1857922690e | 2,486 | ipynb | Jupyter Notebook | playbook/tactics/privilege-escalation/T1134.005.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 8 | 2021-05-25T15:25:31.000Z | 2021-11-08T07:14:45.000Z | playbook/tactics/privilege-escalation/T1134.005.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 1 | 2021-08-23T17:38:02.000Z | 2021-10-12T06:58:19.000Z | playbook/tactics/privilege-escalation/T1134.005.ipynb | haresudhan/The-AtomicPlaybook | 447b1d6bca7c3750c5a58112634f6bac31aff436 | [
"MIT"
] | 2 | 2021-05-29T20:24:24.000Z | 2021-08-05T23:44:12.000Z | 55.244444 | 1,165 | 0.724457 | [
[
[
"# T1134.005 - SID-History Injection\nAdversaries may use SID-History Injection to escalate privileges and bypass access controls. The Windows security identifier (SID) is a unique value that identifies a user or group account. SIDs are used by Windows security in both security descriptors and access tokens. (Citation: Microsoft SID) An account can hold additional SIDs in the SID-History Active Directory attribute (Citation: Microsoft SID-History Attribute), allowing inter-operable account migration between domains (e.g., all values in SID-History are included in access tokens).\n\nWith Domain Administrator (or equivalent) rights, harvested or well-known SID values (Citation: Microsoft Well Known SIDs Jun 2017) may be inserted into SID-History to enable impersonation of arbitrary users/groups such as Enterprise Administrators. This manipulation may result in elevated access to local resources and/or access to otherwise inaccessible domains via lateral movement techniques such as [Remote Services](https://attack.mitre.org/techniques/T1021), [Windows Admin Shares](https://attack.mitre.org/techniques/T1077), or [Windows Remote Management](https://attack.mitre.org/techniques/T1028).",
"_____no_output_____"
],
[
"## Atomic Tests:\nCurrently, no tests are available for this technique.",
"_____no_output_____"
],
[
"## Detection\nExamine data in user’s SID-History attributes using the PowerShell <code>Get-ADUser</code> cmdlet (Citation: Microsoft Get-ADUser), especially users who have SID-History values from the same domain. (Citation: AdSecurity SID History Sept 2015) Also monitor account management events on Domain Controllers for successful and failed changes to SID-History. (Citation: AdSecurity SID History Sept 2015) (Citation: Microsoft DsAddSidHistory)\n\nMonitor for Windows API calls to the <code>DsAddSidHistory</code> function. (Citation: Microsoft DsAddSidHistory)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d0bff0ff1d34b0011502c0d541cc9ceef59f39d8 | 13,850 | ipynb | Jupyter Notebook | notebooks/Run Self Defined Problem.ipynb | zsweet/bert-multitask-learning | 7e6bd301904285614549871cff13a67e9c794532 | [
"MIT"
] | 2 | 2020-10-19T11:35:17.000Z | 2022-01-07T15:04:12.000Z | notebooks/Run Self Defined Problem.ipynb | zsweet/bert-multitask-learning | 7e6bd301904285614549871cff13a67e9c794532 | [
"MIT"
] | null | null | null | notebooks/Run Self Defined Problem.ipynb | zsweet/bert-multitask-learning | 7e6bd301904285614549871cff13a67e9c794532 | [
"MIT"
] | 1 | 2020-10-19T11:36:32.000Z | 2020-10-19T11:36:32.000Z | 43.28125 | 752 | 0.631119 | [
[
[
"## Define new problem type and data reading function\n\nWe'll use IMDB dataset as example",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow import keras",
"_____no_output_____"
],
[
"cd ../",
"/data3/yjp/bert-multitask-learning\n"
],
[
"from bert_multitask_learning import (get_or_make_label_encoder, FullTokenizer, \n create_single_problem_generator, train_bert_multitask, \n eval_bert_multitask, DynamicBatchSizeParams, TRAIN, EVAL, PREDICT)\nimport pickle",
"_____no_output_____"
],
[
"new_problem_type = {'imdb_cls': 'cls'}\n\ndef imdb_cls(params, mode):\n tokenizer = FullTokenizer(vocab_file=params.vocab_file)\n \n # get data\n (train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=10000)\n label_encoder = get_or_make_label_encoder(params, 'imdb_cls', mode, train_labels+test_labels)\n word_to_id = keras.datasets.imdb.get_word_index()\n index_from=3\n word_to_id = {k:(v+index_from) for k,v in word_to_id.items()}\n word_to_id[\"<PAD>\"] = 0\n word_to_id[\"<START>\"] = 1\n word_to_id[\"<UNK>\"] = 2\n id_to_word = {value:key for key,value in word_to_id.items()}\n\n train_data = [[id_to_word[i] for i in sentence] for sentence in train_data]\n test_data = [[id_to_word[i] for i in sentence] for sentence in test_data]\n \n if mode == TRAIN:\n input_list = train_data\n target_list = train_labels\n else:\n input_list = test_data\n target_list = test_labels\n \n if mode == PREDICT:\n return input_list, target_list, label_encoder\n \n return create_single_problem_generator('imdb_cls', input_list, target_list, label_encoder, params, tokenizer, mode)\n\nnew_problem_process_fn_dict = {'imdb_cls': imdb_cls}\n ",
"_____no_output_____"
]
],
[
[
"## Train Model\n\nPlease make sure you're using the correct checkpoint to initialize model.",
"_____no_output_____"
]
],
[
[
"params = DynamicBatchSizeParams()\nparams.init_checkpoint = 'models/cased_L-12_H-768_A-12'\ntf.logging.set_verbosity(tf.logging.DEBUG)\ntrain_bert_multitask(problem='imdb_cls', num_gpus=1, \n num_epochs=10, params=params, \n problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict)",
"Adding new problem imdb_cls, problem type: cls\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:1\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:2\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:3\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:1\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:2\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:3\nINFO:tensorflow:Configured nccl all-reduce.\nINFO:tensorflow:Initializing RunConfig with distribution strategies.\nINFO:tensorflow:Not using Distribute Coordinator.\nINFO:tensorflow:Using config: {'_model_dir': 'models/imdb_cls_ckpt', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\ngraph_options {\n rewrite_options {\n meta_optimizer_iterations: ONE\n }\n}\n, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7fceaec24240>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7fceaec24240>, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fcf795f3748>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}\nINFO:tensorflow:Create RestoreCheckpointHook.\nINFO:tensorflow:Skipping training since max_steps has already saved.\n"
]
],
[
[
"## Evaluate Model\n",
"_____no_output_____"
]
],
[
[
"print(eval_bert_multitask(problem='imdb_cls', num_gpus=1, \n params=params, eval_scheme='acc',\n problem_type_dict=new_problem_type, processing_fn_dict=new_problem_process_fn_dict))",
"Params problem assigned. Problem list: ['imdb_cls']\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:CPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_CPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:0\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:1\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:2\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:XLA_GPU:3\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:1\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:2\nINFO:tensorflow:Device is available but not used by distribute strategy: /device:GPU:3\nINFO:tensorflow:Configured nccl all-reduce.\nINFO:tensorflow:Initializing RunConfig with distribution strategies.\nINFO:tensorflow:Not using Distribute Coordinator.\nINFO:tensorflow:Using config: {'_model_dir': 'models/imdb_cls_ckpt', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true\ngraph_options {\n rewrite_options {\n meta_optimizer_iterations: ONE\n }\n}\n, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7fcea60f1080>, '_device_fn': None, '_protocol': None, '_eval_distribute': <tensorflow.contrib.distribute.python.mirrored_strategy.MirroredStrategy object at 0x7fcea60f1080>, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fceaec27550>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_distribute_coordinator_mode': None}\nWARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\ntf.py_func is deprecated in TF V2. Instead, use\n tf.py_function, which takes a python function which manipulates tf eager\n tensors instead of numpy arrays. 
It's easy to convert a tf eager tensor to\n an ndarray (just call tensor.numpy()) but having access to eager tensors\n means `tf.py_function`s can use accelerators such as GPUs as well as\n being differentiable using a gradient tape.\n \nINFO:tensorflow:Calling model_fn.\nWARNING:tensorflow:From /data3/yjp/bert-multitask-learning/bert_multitask_learning/bert/modeling.py:673: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.dense instead.\nDEBUG:tensorflow:Converted call: <function stop_grad at 0x7fceaef32e18>; owner: None\nDEBUG:tensorflow:Converting <function stop_grad at 0x7fceaef32e18>\nDEBUG:tensorflow:Compiled output of <function stop_grad at 0x7fceaef32e18>:\n\ndef stop_grad(global_step, tensor, freeze_step):\n try:\n with ag__.function_scope('stop_grad'):\n cond_1 = ag__.gt(freeze_step, 0)\n\n def if_true_1():\n with ag__.function_scope('if_true_1'):\n tensor_2, = tensor,\n cond = ag__.lt_e(global_step, freeze_step)\n\n def if_true():\n with ag__.function_scope('if_true'):\n tensor_1, = tensor_2,\n tensor_1 = tf.stop_gradient(tensor_1)\n return tensor_1\n\n def if_false():\n with ag__.function_scope('if_false'):\n return tensor_2\n tensor_2 = ag__.if_stmt(cond, if_true, if_false)\n return tensor_2\n\n def if_false_1():\n with ag__.function_scope('if_false_1'):\n return tensor\n tensor = ag__.if_stmt(cond_1, if_true_1, if_false_1)\n return tensor\n except:\n ag__.rewrite_graph_construction_error(ag_source_map__)\n\n\n\nstop_grad.autograph_info__ = {}\n\n\nINFO:tensorflow:Done calling model_fn.\nINFO:tensorflow:Graph was finalized.\nWARNING:tensorflow:From /data3/yjp/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse standard file APIs to check for files with this prefix.\nINFO:tensorflow:Restoring parameters from models/imdb_cls_ckpt/model.ckpt-7812\nINFO:tensorflow:Running local_init_op.\nINFO:tensorflow:Done running local_init_op.\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0bff34c187457648d0397a7c43dadbb48f08251 | 17,075 | ipynb | Jupyter Notebook | Developing_ipynb/project4.ipynb | chuchun2/ATMS-597-Project-4-Group-C | 858f7afbbcd6123f13ae21e8c41dce18badaa8ec | [
"MIT"
] | 1 | 2020-03-19T23:03:30.000Z | 2020-03-19T23:03:30.000Z | Developing_ipynb/project4.ipynb | chuchun2/ATMS-597-Project-4-Group-C | 858f7afbbcd6123f13ae21e8c41dce18badaa8ec | [
"MIT"
] | null | null | null | Developing_ipynb/project4.ipynb | chuchun2/ATMS-597-Project-4-Group-C | 858f7afbbcd6123f13ae21e8c41dce18badaa8ec | [
"MIT"
] | 1 | 2020-03-12T17:35:09.000Z | 2020-03-12T17:35:09.000Z | 33.946322 | 242 | 0.324744 | [
[
[
"<a href=\"https://colab.research.google.com/github/szymbor2/ATMS-597-Project-4-Group-C/blob/master/project4.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Imports",
"_____no_output_____"
],
[
"Import Libraries",
"_____no_output_____"
]
],
[
[
"import tarfile\nimport pandas as pd\nimport os\n\nfrom google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
]
],
[
[
"Set Your Directory",
"_____no_output_____"
]
],
[
[
"YOUR_DIRECTORY = '/content/drive/My Drive/Colab Notebooks/ATMS597/project4/'",
"_____no_output_____"
]
],
[
[
"Import GFS data and save to pd.DataFrame",
"_____no_output_____"
]
],
[
[
"daily = tarfile.open(name = YOUR_DIRECTORY + 'daily.tar.gz') # Set the archive for opening\n\n# Aggregate to PD DataFrame\ncur_file = daily.next() # Initiate while loop using the first file in the tar archive\ndaily_gfs = pd.DataFrame(columns=['TMAX', 'TMIN', 'WMAX', 'RTOT'])\ni = 0\nwhile cur_file != None:\n i += 1\n if i % 350 == 0:\n print(float(i/3500))\n working_file = YOUR_DIRECTORY + cur_file.name\n daily.extract(cur_file, path=YOUR_DIRECTORY) # Extract TarInfo Object\n convert_to_df = pd.read_csv(working_file, index_col=0) # Convert cur_file (TarInfo Object) to string, then to PD \n daily_gfs = df.append(convert_to_df) # Append PD to DF\n os.remove(working_file) # Remove file extracted in directory\n cur_file = daily.next() # Go to next file in archive\n\ndaily_gfs['TMAX'] = daily_gfs['TMAX'].apply(lambda x: (x*(9/5))).apply(lambda x: x+32) # Change TMAX to Celsius\ndaily_gfs['TMIN'] = daily_gfs['TMIN'].apply(lambda x: (x*(9/5))).apply(lambda x: x+32) # Change TMIN to Celsius\ndaily.close() # Close .tar",
"0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1.0\n"
],
[
"daily_gfs",
"_____no_output_____"
]
],
[
[
"Import obs daily data",
"_____no_output_____"
]
],
[
[
"daily_obs = pd.read_csv(YOUR_DIRECTORY + 'KCMI_daily.csv', header=4, usecols=[0,1,2,3,4])\ndaily_obs",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0bff3550f92a9630c5ca8734d7a08e62fb83278 | 914,713 | ipynb | Jupyter Notebook | sheet_06/sheet_06_machine-learning.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | sheet_06/sheet_06_machine-learning.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | sheet_06/sheet_06_machine-learning.ipynb | ArielMant0/ml2018 | 676dcf028766c369f94c164529ce16c4ef7716aa | [
"MIT"
] | null | null | null | 143.619564 | 258,640 | 0.821987 | [
[
[
"Osnabrück University - Machine Learning (Summer Term 2018) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack",
"_____no_output_____"
],
[
"# Exercise Sheet 06",
"_____no_output_____"
],
[
"## Introduction\n\nThis week's sheet should be solved and handed in before the end of **Sunday, May 20, 2018**. If you need help (and Google and other resources were not enough), feel free to contact your groups designated tutor or whomever of us you run into first. Please upload your results to your group's studip folder.",
"_____no_output_____"
],
[
"## Assignment 0: Math recap (Hyperplanes) [2 Bonus Points]\n\nThis exercise is supposed to be very easy and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem to answer these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know.",
"_____no_output_____"
],
[
"**a)** What is a *hyperplane*? What are the hyperlanes in $\\mathbb{R}^2$ and $\\mathbb{R}^3$? How are the usually described?",
"_____no_output_____"
],
[
"A hyperplane is a subspace that has one less dimensio than its ambient space. A hpyerplane in $\\mathbb{R}^2$ is $\\mathbb{R}^1$ (a line) and a hyperplane in $\\mathbb{R}^3$ is $\\mathbb{R}^2$ (a plane). \n\n**Description:** \n$$\\vec{x}\\cdot\\vec{n} = d$$\nwhere $\\vec{x}$ is a position vector of a point, $\\cdot$ is the dot product and $d$ is the distance to the origin. All points that fulfill this equation lie on(inside?) the hyperplane.",
"_____no_output_____"
],
[
"**b)** What is the Hesse normal form? What is the intuition behind? What are its advantages?",
"_____no_output_____"
],
[
"**Definition** \nThe Hesse normal form is a special type of equation which describes a line in $\\mathbb{R}^2$ or a plane in $\\mathbb{R}^3$ (or even higher-dimensional hyperplanes) through a unit normal vector and the distance to the origin. The Hesse normal form is useful when wanting to calculate the distance of a point to a plane or a line. \n\n**Intuition** \n\n**Advatages**",
"_____no_output_____"
],
[
"**c)** Can you transform the standard form of a hyperplane into the Hesse normal form and vice versa?",
"_____no_output_____"
],
[
"Yes, the standard form of a hyperplane can be transformed into the Hesse normal form and vice versa:\n\\begin{align*}\n\\vec{x}\\cdot\\vec{n} &= d \\\\\n\\Rightarrow \\sum_{i=1}^n x_in_i &= 0\n\\end{align*}",
"_____no_output_____"
],
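[
"For illustration, consider the plane $2x_1 + 2x_2 + x_3 - 9 = 0$ in $\\mathbb{R}^3$. Its normal vector is $\\vec{a} = (2, 2, 1)^T$ with $\\lVert\\vec{a}\\rVert = 3$, so dividing the equation by $3$ gives the Hesse normal form\n$$\\vec{x} \\cdot \\left(\\tfrac{2}{3}, \\tfrac{2}{3}, \\tfrac{1}{3}\\right)^T = 3,$$\nso the plane has distance $3$ from the origin. Plugging any point into the left-hand side and subtracting $3$ yields its signed distance to the plane (e.g. $-3$ for the origin).",
"_____no_output_____"
],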
[
"## Assignment 1: Local PCA (8 Points)",
"_____no_output_____"
],
[
"In the lecture we learned that regular PCA is ill suited for special cases of data. In this assignment we will take a look at local PCA which is used for clustered data (ML-06, Slide 25). This is mostly a repetition of algorithms we already used. Feel free to use the built-in functions for k-means clustering and PCA from the libraries (we already included the right imports to set you on track).",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\n\nimport numpy as np\nimport numpy.random as rnd\nimport matplotlib.colors as mplc\nimport matplotlib.pyplot as plt\n\nfrom numpy.random import multivariate_normal as multNorm\n\nfrom scipy.cluster.vq import kmeans, vq\nfrom sklearn.decomposition.pca import PCA\n\ndef pdist2(x, y):\n \"\"\"\n Pairwise distance between all points of two datasets.\n \n Args:\n x (ndarray): Containing j data points of dimension n. Shape (j, n).\n y (ndarray): Containing k data points of dimension n. Shape (k, n).\n \n Returns:\n ndarray: Pairwise distances between all data points. Shape (j, k).\n \"\"\"\n distance_mat = np.empty((x.shape[0], y.shape[0]))\n for i in range(y.shape[0]):\n distance_mat[:, i] = np.linalg.norm(x - y[i], axis=-1)\n return distance_mat\n\n# Generate clustered data - you may plot the data to take a look at it\ndata = np.vstack((multNorm([2,2],[[0.1, 0], [0, 1]],100), multNorm([-2,-4],[[1, 0], [0, 0.3]],100)))\n\n# colors = ['indianred','steelblue','yellowgreen','lightseegreen','wheat','purple','sandybrown']\ncolors = ['red','blue','green','cyan','yellow','magenta','orange']\n\n# Apply k-means to the data (for k=1,3,5)\nfor k in [3, 5, 7]:\n centroids, distortion = kmeans(data, k)\n \n # Generate distance matrix for all observations with all centroids\n distances = pdist2(data, centroids)\n # Assign data to best matching centroid\n labels = [min_c for min_c in np.argmin(distances, axis=1)]\n\n # Plot the results of k-means\n fig = plt.figure('k-means for k ={}'.format(k))\n plt.scatter(data[:,0], data[:,1], c=labels)\n plt.scatter(centroids[:,0], centroids[:,1], \n c=list(set(labels)), alpha=.1, marker='o',\n s=np.array([np.count_nonzero(labels==label) for label in set(labels)])*100)\n plt.title('k = {}'.format(k))\n \n # Plot the results of local PCA\n pca_fig = plt.figure('projected data and components for k ={}'.format(k))\n comps = np.array([[0, 0]])\n # Apply PCA for each cluster and store each two largest components.\n for i, cluster in enumerate(centroids):\n pca = PCA(n_components=2)\n cluster_data = np.array([data[idx] for idx,label in enumerate(labels) if label == i])\n pca.fit(cluster_data)\n comps = np.concatenate((comps, [pca.components_[0,:]]), axis=0)\n comps = np.concatenate((comps, [pca.components_[1,:]]), axis=0)\n # row_sums = cluster_data.sum(axis=1)\n # proj = cluster_data / row_sums[:, np.newaxis]\n proj = np.array(cluster_data @ pca.components_)\n plt.scatter(proj[:,0], proj[:,1], c=np.zeros(proj.shape[0]).fill(i))\n \n comps = np.delete(comps, 0, 0)\n filler = np.zeros(len(comps))\n plt.quiver(filler, filler, comps[:,0], comps[:,1], scale=0.2) #, color=colors, scale=1)\n ",
"_____no_output_____"
]
],
[
[
"## Assignment 2: Data Visualization and Chernoff Faces (6 Points)",
"_____no_output_____"
],
[
"The following exercise contains no programming (unless you want to go through the implementation). Answer the questions that are posted below the code segment (and run the code before - it's really worth it!). In case you are even more interested - here is a link to the [original paper](http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0738473).",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse, Arc\nfrom numpy.random import rand\nimport numpy as np\n\ndef cface(ax, x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15, x16, x17, x18):\n \"\"\"\n This implementation of chernov faces is taken from Abraham Flaxman. You can\n find the original source files here: https://gist.github.com/aflaxman/4043086\n Only minor adjustments have been made.\n\n x1 = height of upper face\n x2 = overlap of lower face\n x3 = half of vertical size of face\n x4 = width of upper face\n x5 = width of lower face\n x6 = length of nose\n x7 = vertical position of mouth\n x8 = curvature of mouth\n x9 = width of mouth\n x10 = vertical position of eyes\n x11 = separation of eyes\n x12 = slant of eyes\n x13 = eccentricity of eyes\n x14 = size of eyes\n x15 = position of pupils\n x16 = vertical position of eyebrows\n x17 = slant of eyebrows\n x18 = size of eyebrows\n \"\"\"\n\n # transform some values so that input between 0,1 yields variety of output\n x3 = 1.9 * (x3 - .5)\n x4 = (x4 + .25)\n x5 = (x5 + .2)\n x6 = .3 * (x6 + .01)\n x8 = 5 * (x8 + .001)\n x11 /= 5\n x12 = 2 * (x12 - .5)\n x13 += .05\n x14 += .1\n x15 = .5 * (x15 - .5)\n x16 = .25 * x16\n x17 = .5 * (x17 - .5)\n x18 = .5 * (x18 + .1)\n\n # top of face, in box with l=-x4, r=x4, t=x1, b=x3\n e = Ellipse((0, (x1 + x3) / 2), 2 * x4, (x1 - x3), ec='black', linewidth=2)\n ax.add_artist(e)\n\n # bottom of face, in box with l=-x5, r=x5, b=-x1, t=x2+x3\n e = Ellipse((0, (-x1 + x2 + x3) / 2), 2 * x5, (x1 + x2 + x3), fc='white', ec='black', linewidth=2)\n ax.add_artist(e)\n\n # cover overlaps\n e = Ellipse((0, (x1 + x3) / 2), 2 * x4, (x1 - x3), fc='white', ec='none')\n ax.add_artist(e)\n e = Ellipse((0, (-x1 + x2 + x3) / 2), 2 * x5, (x1 + x2 + x3), fc='white', ec='none')\n ax.add_artist(e)\n\n # draw nose\n plt.plot([0, 0], [-x6 / 2, x6 / 2], 'k')\n\n # draw mouth\n p = Arc((0, -x7 + .5 / x8), 1 / x8, 1 / x8, theta1=270 - 180 / np.pi * np.arctan(x8 * x9),\n theta2=270 + 180 / np.pi * np.arctan(x8 * x9))\n ax.add_artist(p)\n\n # draw eyes\n p = Ellipse((-x11 - x14 / 2, x10), x14, x13 * x14, angle=-180 / np.pi * x12, fc='white', ec='black')\n ax.add_artist(p)\n\n p = Ellipse((x11 + x14 / 2, x10), x14, x13 * x14, angle=180 / np.pi * x12, fc='white', ec='black')\n ax.add_artist(p)\n\n # draw pupils\n p = Ellipse((-x11 - x14 / 2 - x15 * x14 / 2, x10), .05, .05, facecolor='black')\n ax.add_artist(p)\n p = Ellipse((x11 + x14 / 2 - x15 * x14 / 2, x10), .05, .05, facecolor='black')\n ax.add_artist(p)\n\n # draw eyebrows\n plt.plot([-x11 - x14 / 2 - x14 * x18 / 2, -x11 - x14 / 2 + x14 * x18 / 2],\n [x10 + x13 * x14 * (x16 + x17), x10 + x13 * x14 * (x16 - x17)], 'k')\n plt.plot([x11 + x14 / 2 + x14 * x18 / 2, x11 + x14 / 2 - x14 * x18 / 2],\n [x10 + x13 * x14 * (x16 + x17), x10 + x13 * x14 * (x16 - x17)], 'k')\n\n\nfig = plt.figure('Chernoff Faces', figsize=(11, 11))\nfor i in range(25):\n ax = fig.add_subplot(5, 5, i + 1, aspect='equal')\n cface(ax, .9, *rand(17))\n ax.axis([-1.2, 1.2, -1.2, 1.2])\n ax.set_xticks([])\n ax.set_yticks([])\n\nfig.subplots_adjust(hspace=0, wspace=0)\nfig.canvas.draw()\n",
"_____no_output_____"
]
],
[
[
"### a) Data Visualization Techniques\n\nWhy do we need data visualization techniques and what are techniques to visualize high dimensional data?",
"_____no_output_____"
],
[
"Automated analysis of high-dimensional data is rarely possible, but humans have remarkable pattern recognition abilities, so visualizing data for humans to analyze is an important field of study. \n\n**Available Techniques:**\n\n- PCA\n - reduce dimensions and project data onto those to display\n- Scatterplot Matrix\n - project onto 2 dimensions and display all combinations as scatterplots\n- Glyphs \n - use some kind of geometry, where each dimension controls one parameter of the geometry\n - Chernovb Faces: map dimensions onto features of a face (as was done in the above plot)\n- Parallel Coordinate Plots\n - make *columns* for each dimension and connect them via lines",
"_____no_output_____"
],
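[
"A minimal scatterplot-matrix sketch, assuming a hypothetical DataFrame `df` filled with random example data — with real data you would pass your own feature columns instead:\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Hypothetical example data: 100 samples with 4 features\ndf = pd.DataFrame(np.random.randn(100, 4), columns=['f1', 'f2', 'f3', 'f4'])\n\n# One scatterplot for every pair of features, histograms on the diagonal\npd.plotting.scatter_matrix(df, diagonal='hist', figsize=(6, 6))\nplt.show()\n```",
"_____no_output_____"
],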
[
"### b) Chernoff faces\n\nWhy did Chernoff use faces for his representation? Why not something else, like dogs or houses?",
"_____no_output_____"
],
[
"Humans have a highly developed ability for (human) facial recognition, so it makes sense to show data features using human faces. Our ability to distinguish faces of other animals is rather poor. Also, for items such as houses, there may be less features availabe that can be varied in an easily recognisable way.",
"_____no_output_____"
],
[
"### c) Alternatives\n\nExplain at least one other data visualization technique from the lecture.",
"_____no_output_____"
],
[
"The **Parallel Coordinate Plot** maps the different features/dimensions of data onto columns on the x-axis and the the values for those features is mapped onto the y-axis. Then, one line per datum is drawn from the first to the last column, at the height that represent the value of the respective feature.\n\nWhat is somewhat troublesome when using these kinds of plots is that scaling makes a huge difference (maybe the ranges of values for different features vary a lot) in terms of interpretability and navigation becomes harder the more data (e.g. lines) are present.\n\n**Example Image:**\n\n",
"_____no_output_____"
],
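[
"Here is the promised minimal sketch of a parallel coordinate plot, using the pandas plotting helper; the DataFrame and column names are hypothetical example data. Features with very different value ranges should be normalised first — exactly the scaling issue mentioned above.\n\n```python\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom pandas.plotting import parallel_coordinates\n\n# Hypothetical example data: 3 features plus a class column used for colouring\ndf = pd.DataFrame(np.random.randn(50, 3), columns=['f1', 'f2', 'f3'])\ndf['label'] = np.random.choice(['A', 'B'], size=50)\n\n# One vertical axis per feature, one line per data point\nparallel_coordinates(df, class_column='label', colormap='viridis')\nplt.show()\n```",
"_____no_output_____"
],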
[
"## Assignment 3: Hebbian Learning (6 Points)",
"_____no_output_____"
],
[
"In the lecture (ML-07, Slides 10ff.) there is a simplified version of Ivan Pavlov's famous experiment on classical conditioning. In this exercise you will take a look into this simplified model and create your own conditionable dog with a simple Hebbian learning rule.",
"_____no_output_____"
],
[
"### a) Programming a Dog\nTo model the dog Saliva behavior we will need to model an unconditioned and a conditioned stimulus: food and bell. They are represented as lists: `weight_food` and `weight_bell`. Note that one could just use a single number, the lists are only here to keep track of the history for a nice output. It is possible to access the current weight by selecting the last item of each list, respectively: `weight_food[-1]`.\n\nA list of trials is already given as well as a condition database. Each entry represents an index to select from the `condition_db`. To figure out the value of the stimulus `food` in the second trial (which maps to condition `1`) one could do: `condition_db[1][\"food\"]`.\n\nYour task is to implement a `for` loop over all trials. In each iteration select the correct values for $x_1$ and $x_2$ from the condition database and retrieve the current weights $w_1$ and $w_2$. Then calculate the response of the dog with the threshold $\\theta$:\n\n$$\nr_t = \\Theta(x_{1,t-1} w_{1,t-1} + x_{2,t-1} w_{2,t-1})\\\\\n\\Theta(x)= \\begin{cases}1 \\text{ if } x >= \\theta\\\\0 \\text{ else }\\end{cases}\n$$\n\nWith this response calculate both $w_{n,t}$ according to the Hebbian rule:\n\n$$w_{n,t} = w_{n, t-1} + \\epsilon \\cdot r_t \\cdot x_{n,t}$$\n\n*Note: While you program the output might look a little messy, don't worry about it. Once you fill up all three lists properly, it will look much like on ML-07, Slide 14.*",
"_____no_output_____"
]
],
[
[
"# Initialization\ncondition_db = [{\"food\": 1, \"bell\": 0}, \n {\"food\": 0, \"bell\": 1},\n {\"food\": 1, \"bell\": 1}]\n\ntrials = [0, 1, 2, 2, 1, 2, 1]\n\nepsilon = 0.2\ntheta = 1/2\n\nresponses = []\nweight_food = [1]\nweight_bell = [0]\n\ndef calc_response(sample):\n if weight_food[-1]*sample[\"food\"] + weight_bell[-1]*sample[\"bell\"] > theta:\n return 1\n else:\n return 0\n \ndef update_weights(response, sample):\n weight_food.append(weight_food[-1] + epsilon * response * sample[\"food\"])\n weight_bell.append(weight_bell[-1] + epsilon * response * sample[\"bell\"])\n\n# For each trial, update the current weights of the US and CS and store\n# the results in the respective lists. Also store the response.\n\nfor t in trials:\n responses.append(calc_response(condition_db[t]))\n update_weights(responses[-1], condition_db[t])\n\n# Output\nprint(\"| Food | |\" + \"| |\".join([\"{:3d}\".format(condition_db[trial][\"food\"]) for trial in trials]) + \"| |\")\nprint(\"| Bell | |\" + \"| |\".join([\"{:3d}\".format(condition_db[trial][\"bell\"]) for trial in trials]) + \"| |\")\nprint(\"| Saliva | |\" + \"| |\".join([\"{:3d}\".format(response) for response in responses]) + \"| |\")\nprint(\"| w_Food |\" + \"| |\".join([\"{:3.1f}\".format(w) for w in weight_food]) + \"|\")\nprint(\"| w_Bell |\" + \"| |\".join([\"{:3.1f}\".format(w) for w in weight_bell]) + \"|\")",
"| Food | | 1| | 0| | 1| | 1| | 0| | 1| | 0| |\n| Bell | | 0| | 1| | 1| | 1| | 1| | 1| | 1| |\n| Saliva | | 1| | 0| | 1| | 1| | 0| | 1| | 1| |\n| w_Food |1.0| |1.2| |1.2| |1.4| |1.6| |1.6| |1.8| |1.8|\n| w_Bell |0.0| |0.0| |0.0| |0.2| |0.4| |0.4| |0.6| |0.8|\n"
]
],
[
[
"### b) Parameter adjustment\n\nIn the above default setting of trials (`[0, 1, 2, 2, 1, 2, 1]`, in case you changed it), how many learning steps did you need until the dog started to produce saliva on the conditioned stimulus? What happens if you change the parameters $\\epsilon$ and $\\theta$? Try smaller and bigger values for each or present different conditions to the dog.",
"_____no_output_____"
],
[
"**How many learning steps were needed with the default settings?** → 5 steps were needed\n\n**Smaller Values:** \nFor smaller values ($\\epsilon = 0.01$ and $\\theta = 0.1$), the default number of trials does not suffice for the dog to learn to react to the conditioned stimulus.\n\n**Larger Values:** \nFor larger values ($\\epsilon = 0.5$ and $\\theta = 0.9$), the dog already responds to the conditioned stimulus after only **one** trial.\n\n**Explanation:** \nSince we always increase the likelihood to resond to a stimulus $s_i$ by $\\theta$, choosing a large value for $\\theta$ results in the subject responding to the stimulus much faster, while a small value for $\\theta$ means that it takes a much longer time to reach the threshold for the subject to respond to the stimulus. \nIn contrast, when choosing a large value for $\\epsilon$, the reaction threshold, the subject only responds after a stimulus has a likelihood value that high. When choosing a small value for $\\epsilon$, the subject resonds to stimuli much faster.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c003671177be0f66aa6f70f59b41f49ed814d3 | 2,974 | ipynb | Jupyter Notebook | matplotlib/gallery_jupyter/text_labels_and_annotations/fancyarrow_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 13 | 2020-01-04T07:37:38.000Z | 2021-08-31T05:19:58.000Z | matplotlib/gallery_jupyter/text_labels_and_annotations/fancyarrow_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 3 | 2020-06-05T22:42:53.000Z | 2020-08-24T07:18:54.000Z | matplotlib/gallery_jupyter/text_labels_and_annotations/fancyarrow_demo.ipynb | kingreatwill/penter | 2d027fd2ae639ac45149659a410042fe76b9dab0 | [
"MIT"
] | 9 | 2020-10-19T04:53:06.000Z | 2021-08-31T05:20:01.000Z | 41.305556 | 1,399 | 0.488231 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Fancyarrow Demo\n\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.patches as mpatches\nimport matplotlib.pyplot as plt\n\nstyles = mpatches.ArrowStyle.get_styles()\n\nncol = 2\nnrow = (len(styles) + 1) // ncol\nfigheight = (nrow + 0.5)\nfig = plt.figure(figsize=(4 * ncol / 1.5, figheight / 1.5))\nfontsize = 0.2 * 70\n\n\nax = fig.add_axes([0, 0, 1, 1], frameon=False, aspect=1.)\n\nax.set_xlim(0, 4 * ncol)\nax.set_ylim(0, figheight)\n\n\ndef to_texstring(s):\n s = s.replace(\"<\", r\"$<$\")\n s = s.replace(\">\", r\"$>$\")\n s = s.replace(\"|\", r\"$|$\")\n return s\n\n\nfor i, (stylename, styleclass) in enumerate(sorted(styles.items())):\n x = 3.2 + (i // nrow) * 4\n y = (figheight - 0.7 - i % nrow) # /figheight\n p = mpatches.Circle((x, y), 0.2)\n ax.add_patch(p)\n\n ax.annotate(to_texstring(stylename), (x, y),\n (x - 1.2, y),\n ha=\"right\", va=\"center\",\n size=fontsize,\n arrowprops=dict(arrowstyle=stylename,\n patchB=p,\n shrinkA=5,\n shrinkB=5,\n fc=\"k\", ec=\"k\",\n connectionstyle=\"arc3,rad=-0.05\",\n ),\n bbox=dict(boxstyle=\"square\", fc=\"w\"))\n\nax.xaxis.set_visible(False)\nax.yaxis.set_visible(False)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"------------\n\nReferences\n\"\"\"\"\"\"\"\"\"\"\n\nThe use of the following functions, methods, classes and modules is shown\nin this example:\n\n",
"_____no_output_____"
]
],
[
[
"import matplotlib\nmatplotlib.patches\nmatplotlib.patches.ArrowStyle\nmatplotlib.patches.ArrowStyle.get_styles\nmatplotlib.axes.Axes.annotate",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c02d80902f7cfcc1398ac626ad6577b5b09301 | 71,466 | ipynb | Jupyter Notebook | apple_health/apple_health_extractor.ipynb | LeooDong/health_monitoring | 447bd2aa787239cf7d3ffcf7905bfbe6dd74eb8c | [
"MIT"
] | null | null | null | apple_health/apple_health_extractor.ipynb | LeooDong/health_monitoring | 447bd2aa787239cf7d3ffcf7905bfbe6dd74eb8c | [
"MIT"
] | null | null | null | apple_health/apple_health_extractor.ipynb | LeooDong/health_monitoring | 447bd2aa787239cf7d3ffcf7905bfbe6dd74eb8c | [
"MIT"
] | null | null | null | 29.617074 | 131 | 0.419276 | [
[
[
"# Apple Health Extractor",
"_____no_output_____"
],
[
"This code will parse your Apple Health export data, create multiple CSV and do some simple data checks and data analysis. \n\nEnjoy! ",
"_____no_output_____"
],
[
"--------",
"_____no_output_____"
],
[
"## Extract Data and Export to CSVs from Apple Health's Export.xml",
"_____no_output_____"
],
[
"* Command Line Tool to Process apple health's export.xml file \n* Create multiple CSV files for each data type. \n* Original Source: https://github.com/tdda/applehealthdata\n* Based on the size of your Apple Health Data, this script may take several minutes to complete.\n\n**NOTE: Currently there are a few minror errors based on additional data from Apple Health that require some updates.** ",
"_____no_output_____"
],
[
"## Setup and Usage NOTE\n\n* Export your data from Apple Health App on your phone. \n* Unzip export.zip into this directory and rename to data. \n* Inside your directory there should be a directory and file here: /data/export.xml\n* Run inside project or in the command line.",
"_____no_output_____"
]
],
[
[
"# %run -i 'apple-health-data-parser' 'export.xml' \n%run -i 'apple-health-data-parser' 'data/export.xml' ",
"Reading data from data/export.xml . . . done\nUnexpected node of type ExportDate.\n\nTags:\nActivitySummary: 145\nExportDate: 1\nMe: 1\nRecord: 291418\nWorkout: 11\n\nFields:\nHKCharacteristicTypeIdentifierBiologicalSex: 1\nHKCharacteristicTypeIdentifierBloodType: 1\nHKCharacteristicTypeIdentifierCardioFitnessMedicationsUse: 1\nHKCharacteristicTypeIdentifierDateOfBirth: 1\nHKCharacteristicTypeIdentifierFitzpatrickSkinType: 1\nactiveEnergyBurned: 145\nactiveEnergyBurnedGoal: 145\nactiveEnergyBurnedUnit: 145\nappleExerciseTime: 145\nappleExerciseTimeGoal: 145\nappleMoveTime: 145\nappleMoveTimeGoal: 145\nappleStandHours: 145\nappleStandHoursGoal: 145\ncreationDate: 291429\ndateComponents: 145\ndevice: 260666\nduration: 11\ndurationUnit: 11\nendDate: 291429\nsourceName: 291429\nsourceVersion: 290893\nstartDate: 291429\ntotalDistance: 11\ntotalDistanceUnit: 11\ntotalEnergyBurned: 11\ntotalEnergyBurnedUnit: 11\ntype: 291418\nunit: 285441\nvalue: 291369\nworkoutActivityType: 11\n\nRecord types:\nActiveEnergyBurned: 56815\nAppleExerciseTime: 4701\nAppleStandHour: 1531\nBasalEnergyBurned: 11712\nBodyFatPercentage: 113\nBodyMass: 162\nBodyMassIndex: 120\nDietaryCalcium: 11\nDietaryCarbohydrates: 11\nDietaryCholesterol: 11\nDietaryEnergyConsumed: 11\nDietaryFatMonounsaturated: 11\nDietaryFatPolyunsaturated: 11\nDietaryFatSaturated: 11\nDietaryFatTotal: 11\nDietaryFiber: 11\nDietaryIron: 11\nDietaryPotassium: 11\nDietaryProtein: 11\nDietarySodium: 11\nDietarySugar: 11\nDietaryVitaminC: 11\nDistanceWalkingRunning: 57694\nFlightsClimbed: 13155\nHKDataTypeSleepDurationGoal: 1\nHeadphoneAudioExposure: 20011\nHeartRate: 26045\nHeartRateVariabilitySDNN: 1\nHeight: 2\nLeanBodyMass: 113\nMindfulSession: 50\nRespiratoryRate: 133\nSleepAnalysis: 4396\nStepCount: 57928\nWaistCircumference: 1\nWalkingAsymmetryPercentage: 4522\nWalkingDoubleSupportPercentage: 8697\nWalkingSpeed: 11680\nWalkingStepLength: 11670\n\nOpening /Users/leo/repos/health_monitoring/apple_health/data/BodyMassIndex.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/Height.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/BodyMass.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/HeartRate.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/RespiratoryRate.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/BodyFatPercentage.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/LeanBodyMass.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/StepCount.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DistanceWalkingRunning.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/BasalEnergyBurned.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/ActiveEnergyBurned.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/FlightsClimbed.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryFatTotal.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryFatPolyunsaturated.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryFatMonounsaturated.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryFatSaturated.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryCholesterol.csv for writing\nOpening 
/Users/leo/repos/health_monitoring/apple_health/data/DietarySodium.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryCarbohydrates.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryFiber.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietarySugar.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryEnergyConsumed.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryProtein.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryVitaminC.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryCalcium.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryIron.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/DietaryPotassium.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/AppleExerciseTime.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/WaistCircumference.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/HeadphoneAudioExposure.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/WalkingDoubleSupportPercentage.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/WalkingSpeed.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/WalkingStepLength.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/WalkingAsymmetryPercentage.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/HKDataTypeSleepDurationGoal.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/SleepAnalysis.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/AppleStandHour.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/MindfulSession.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/HeartRateVariabilitySDNN.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/Workout.csv for writing\nOpening /Users/leo/repos/health_monitoring/apple_health/data/ActivitySummary.csv for writing\nWritten BodyMassIndex data.\nWritten Height data.\nWritten BodyMass data.\nWritten HeartRate data.\nWritten RespiratoryRate data.\nWritten BodyFatPercentage data.\nWritten LeanBodyMass data.\nWritten StepCount data.\nWritten DistanceWalkingRunning data.\nWritten BasalEnergyBurned data.\nWritten ActiveEnergyBurned data.\nWritten FlightsClimbed data.\nWritten DietaryFatTotal data.\nWritten DietaryFatPolyunsaturated data.\nWritten DietaryFatMonounsaturated data.\nWritten DietaryFatSaturated data.\nWritten DietaryCholesterol data.\nWritten DietarySodium data.\nWritten DietaryCarbohydrates data.\nWritten DietaryFiber data.\nWritten DietarySugar data.\nWritten DietaryEnergyConsumed data.\nWritten DietaryProtein data.\nWritten DietaryVitaminC data.\nWritten DietaryCalcium data.\nWritten DietaryIron data.\nWritten DietaryPotassium data.\nWritten AppleExerciseTime data.\nWritten WaistCircumference data.\nWritten HeadphoneAudioExposure data.\nWritten WalkingDoubleSupportPercentage data.\nWritten WalkingSpeed data.\nWritten WalkingStepLength data.\nWritten WalkingAsymmetryPercentage data.\nWritten HKDataTypeSleepDurationGoal data.\nWritten SleepAnalysis data.\nWritten AppleStandHour data.\nWritten MindfulSession data.\nWritten HeartRateVariabilitySDNN 
data.\nWritten Workout data.\nWritten ActivitySummary data.\n"
]
],
[
[
"-----",
"_____no_output_____"
],
[
"# Apple Health Data Check and Simple Data Analysis",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport glob",
"_____no_output_____"
]
],
[
[
"----",
"_____no_output_____"
],
[
"# Weight",
"_____no_output_____"
]
],
[
[
"weight = pd.read_csv(\"data/BodyMass.csv\")",
"_____no_output_____"
],
[
"weight.tail()",
"_____no_output_____"
],
[
"weight.describe()",
"_____no_output_____"
]
],
[
[
"----",
"_____no_output_____"
],
[
"## Steps",
"_____no_output_____"
]
],
[
[
"steps = pd.read_csv(\"data/StepCount.csv\")",
"_____no_output_____"
],
[
"len(steps)",
"_____no_output_____"
],
[
"steps.columns",
"_____no_output_____"
],
[
"steps.describe()",
"_____no_output_____"
],
[
"steps.tail()",
"_____no_output_____"
],
[
"# total all-time steps\nsteps.value.sum()",
"_____no_output_____"
]
],
[
[
"-------",
"_____no_output_____"
],
[
"## Stand Count",
"_____no_output_____"
]
],
[
[
"stand = pd.read_csv(\"data/AppleStandHour.csv\")",
"_____no_output_____"
],
[
"len(stand)",
"_____no_output_____"
],
[
"stand.columns",
"_____no_output_____"
],
[
"stand.describe()",
"_____no_output_____"
],
[
"stand.tail()",
"_____no_output_____"
]
],
[
[
"------",
"_____no_output_____"
],
[
"## Resting Heart Rate (HR)",
"_____no_output_____"
]
],
[
[
"restingHR = pd.read_csv(\"data/RestingHeartRate.csv\")",
"_____no_output_____"
],
[
"len(restingHR)",
"_____no_output_____"
],
[
"restingHR.describe()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Walking Heart Rate (HR) Average",
"_____no_output_____"
]
],
[
[
"walkingHR = pd.read_csv(\"data/WalkingHeartRateAverage.csv\")",
"_____no_output_____"
],
[
"len(walkingHR)",
"_____no_output_____"
],
[
"walkingHR.describe()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
],
[
"## Heart Rate Variability (HRV)",
"_____no_output_____"
]
],
[
[
"hrv = pd.read_csv(\"data/HeartRateVariabilitySDNN.csv\")",
"_____no_output_____"
],
[
"len(hrv)",
"_____no_output_____"
],
[
"hrv.columns",
"_____no_output_____"
],
[
"hrv.describe()",
"_____no_output_____"
],
[
"hrv.tail()",
"_____no_output_____"
]
],
[
[
"-------",
"_____no_output_____"
],
[
"## VO2 Max",
"_____no_output_____"
]
],
[
[
"vo2max = pd.read_csv(\"data/VO2Max.csv\")",
"_____no_output_____"
],
[
"len(vo2max)",
"_____no_output_____"
],
[
"vo2max.describe()",
"_____no_output_____"
]
],
[
[
"----",
"_____no_output_____"
],
[
"## Blood Pressure",
"_____no_output_____"
]
],
[
[
"diastolic = pd.read_csv(\"data/BloodPressureDiastolic.csv\")\nsystolic = pd.read_csv(\"data/BloodPressureSystolic.csv\")",
"_____no_output_____"
],
[
"diastolic.describe()",
"_____no_output_____"
],
[
"systolic.describe()",
"_____no_output_____"
]
],
[
[
"------",
"_____no_output_____"
],
[
"## Sleep",
"_____no_output_____"
]
],
[
[
"sleep = pd.read_csv(\"data/SleepAnalysis.csv\")",
"_____no_output_____"
],
[
"sleep.tail()",
"_____no_output_____"
],
[
"sleep.describe()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c037870bf88afd3ac711ca9b977dbf3936f2e8 | 7,602 | ipynb | Jupyter Notebook | notebooks/02_Test_run.ipynb | Julia-chan/ml_engineering_example | 29d05b7046e74b2848816e4c71de942d9497139a | [
"MIT"
] | null | null | null | notebooks/02_Test_run.ipynb | Julia-chan/ml_engineering_example | 29d05b7046e74b2848816e4c71de942d9497139a | [
"MIT"
] | null | null | null | notebooks/02_Test_run.ipynb | Julia-chan/ml_engineering_example | 29d05b7046e74b2848816e4c71de942d9497139a | [
"MIT"
] | null | null | null | 47.217391 | 65 | 0.62865 | [
[
[
"import pandas as pd\n\nfrom source.pipeline import features, train_model\n\nconfig_path='../params.yaml'\n\nfeatures.features(config_path)\ntrain_model.train(config_path)",
"Learning rate set to 0.134262\n0:\tlearn: 0.6247673\ttotal: 58.6ms\tremaining: 5.8s\n1:\tlearn: 0.5731308\ttotal: 75ms\tremaining: 3.67s\n2:\tlearn: 0.5283159\ttotal: 90.5ms\tremaining: 2.92s\n3:\tlearn: 0.4937239\ttotal: 109ms\tremaining: 2.62s\n4:\tlearn: 0.4645442\ttotal: 125ms\tremaining: 2.37s\n5:\tlearn: 0.4406272\ttotal: 142ms\tremaining: 2.22s\n6:\tlearn: 0.4170083\ttotal: 155ms\tremaining: 2.06s\n7:\tlearn: 0.3985750\ttotal: 172ms\tremaining: 1.98s\n8:\tlearn: 0.3829925\ttotal: 186ms\tremaining: 1.88s\n9:\tlearn: 0.3695462\ttotal: 202ms\tremaining: 1.82s\n10:\tlearn: 0.3589306\ttotal: 225ms\tremaining: 1.82s\n11:\tlearn: 0.3486824\ttotal: 239ms\tremaining: 1.75s\n12:\tlearn: 0.3401001\ttotal: 254ms\tremaining: 1.7s\n13:\tlearn: 0.3313851\ttotal: 269ms\tremaining: 1.65s\n14:\tlearn: 0.3244149\ttotal: 287ms\tremaining: 1.63s\n15:\tlearn: 0.3181175\ttotal: 302ms\tremaining: 1.59s\n16:\tlearn: 0.3132283\ttotal: 327ms\tremaining: 1.6s\n17:\tlearn: 0.3095714\ttotal: 349ms\tremaining: 1.59s\n18:\tlearn: 0.3049008\ttotal: 366ms\tremaining: 1.56s\n19:\tlearn: 0.3012266\ttotal: 382ms\tremaining: 1.53s\n20:\tlearn: 0.2978768\ttotal: 401ms\tremaining: 1.51s\n21:\tlearn: 0.2951095\ttotal: 420ms\tremaining: 1.49s\n22:\tlearn: 0.2927080\ttotal: 436ms\tremaining: 1.46s\n23:\tlearn: 0.2895600\ttotal: 455ms\tremaining: 1.44s\n24:\tlearn: 0.2870337\ttotal: 477ms\tremaining: 1.43s\n25:\tlearn: 0.2841093\ttotal: 496ms\tremaining: 1.41s\n26:\tlearn: 0.2816830\ttotal: 514ms\tremaining: 1.39s\n27:\tlearn: 0.2787703\ttotal: 531ms\tremaining: 1.36s\n28:\tlearn: 0.2771579\ttotal: 545ms\tremaining: 1.33s\n29:\tlearn: 0.2755011\ttotal: 560ms\tremaining: 1.31s\n30:\tlearn: 0.2736696\ttotal: 576ms\tremaining: 1.28s\n31:\tlearn: 0.2716818\ttotal: 593ms\tremaining: 1.26s\n32:\tlearn: 0.2701212\ttotal: 609ms\tremaining: 1.24s\n33:\tlearn: 0.2680908\ttotal: 625ms\tremaining: 1.21s\n34:\tlearn: 0.2667568\ttotal: 639ms\tremaining: 1.19s\n35:\tlearn: 0.2654912\ttotal: 652ms\tremaining: 1.16s\n36:\tlearn: 0.2646165\ttotal: 667ms\tremaining: 1.14s\n37:\tlearn: 0.2635769\ttotal: 684ms\tremaining: 1.12s\n38:\tlearn: 0.2625919\ttotal: 698ms\tremaining: 1.09s\n39:\tlearn: 0.2617449\ttotal: 716ms\tremaining: 1.07s\n40:\tlearn: 0.2601076\ttotal: 729ms\tremaining: 1.05s\n41:\tlearn: 0.2590812\ttotal: 742ms\tremaining: 1.02s\n42:\tlearn: 0.2588673\ttotal: 757ms\tremaining: 1s\n43:\tlearn: 0.2581082\ttotal: 771ms\tremaining: 981ms\n44:\tlearn: 0.2570862\ttotal: 785ms\tremaining: 959ms\n45:\tlearn: 0.2558423\ttotal: 802ms\tremaining: 941ms\n46:\tlearn: 0.2548635\ttotal: 818ms\tremaining: 922ms\n47:\tlearn: 0.2538514\ttotal: 833ms\tremaining: 903ms\n48:\tlearn: 0.2527767\ttotal: 849ms\tremaining: 884ms\n49:\tlearn: 0.2521057\ttotal: 865ms\tremaining: 865ms\n50:\tlearn: 0.2510523\ttotal: 880ms\tremaining: 846ms\n51:\tlearn: 0.2498215\ttotal: 895ms\tremaining: 826ms\n52:\tlearn: 0.2491395\ttotal: 914ms\tremaining: 811ms\n53:\tlearn: 0.2482872\ttotal: 928ms\tremaining: 791ms\n54:\tlearn: 0.2474993\ttotal: 944ms\tremaining: 772ms\n55:\tlearn: 0.2469564\ttotal: 960ms\tremaining: 754ms\n56:\tlearn: 0.2457428\ttotal: 975ms\tremaining: 736ms\n57:\tlearn: 0.2449345\ttotal: 993ms\tremaining: 719ms\n58:\tlearn: 0.2441182\ttotal: 1.01s\tremaining: 701ms\n59:\tlearn: 0.2437322\ttotal: 1.02s\tremaining: 683ms\n60:\tlearn: 0.2426810\ttotal: 1.04s\tremaining: 664ms\n61:\tlearn: 0.2416859\ttotal: 1.05s\tremaining: 646ms\n62:\tlearn: 0.2408635\ttotal: 1.07s\tremaining: 628ms\n63:\tlearn: 0.2399502\ttotal: 1.09s\tremaining: 611ms\n64:\tlearn: 
0.2388562\ttotal: 1.1s\tremaining: 595ms\n65:\tlearn: 0.2382197\ttotal: 1.12s\tremaining: 577ms\n66:\tlearn: 0.2367613\ttotal: 1.14s\tremaining: 560ms\n67:\tlearn: 0.2359358\ttotal: 1.16s\tremaining: 544ms\n68:\tlearn: 0.2351531\ttotal: 1.17s\tremaining: 525ms\n69:\tlearn: 0.2345109\ttotal: 1.19s\tremaining: 508ms\n70:\tlearn: 0.2340052\ttotal: 1.2s\tremaining: 489ms\n71:\tlearn: 0.2330028\ttotal: 1.21s\tremaining: 471ms\n72:\tlearn: 0.2322470\ttotal: 1.22s\tremaining: 453ms\n73:\tlearn: 0.2321160\ttotal: 1.24s\tremaining: 436ms\n74:\tlearn: 0.2314586\ttotal: 1.26s\tremaining: 420ms\n75:\tlearn: 0.2306077\ttotal: 1.28s\tremaining: 403ms\n76:\tlearn: 0.2300994\ttotal: 1.29s\tremaining: 386ms\n77:\tlearn: 0.2298328\ttotal: 1.31s\tremaining: 369ms\n78:\tlearn: 0.2296329\ttotal: 1.32s\tremaining: 352ms\n79:\tlearn: 0.2286839\ttotal: 1.34s\tremaining: 335ms\n80:\tlearn: 0.2280998\ttotal: 1.35s\tremaining: 318ms\n81:\tlearn: 0.2278809\ttotal: 1.37s\tremaining: 300ms\n82:\tlearn: 0.2270695\ttotal: 1.38s\tremaining: 283ms\n83:\tlearn: 0.2262681\ttotal: 1.39s\tremaining: 266ms\n84:\tlearn: 0.2260658\ttotal: 1.41s\tremaining: 249ms\n85:\tlearn: 0.2251992\ttotal: 1.42s\tremaining: 231ms\n86:\tlearn: 0.2245708\ttotal: 1.43s\tremaining: 214ms\n87:\tlearn: 0.2240325\ttotal: 1.45s\tremaining: 197ms\n88:\tlearn: 0.2234995\ttotal: 1.46s\tremaining: 180ms\n89:\tlearn: 0.2229582\ttotal: 1.47s\tremaining: 164ms\n90:\tlearn: 0.2223584\ttotal: 1.49s\tremaining: 147ms\n91:\tlearn: 0.2221803\ttotal: 1.5s\tremaining: 131ms\n92:\tlearn: 0.2217319\ttotal: 1.52s\tremaining: 114ms\n93:\tlearn: 0.2212486\ttotal: 1.53s\tremaining: 97.7ms\n94:\tlearn: 0.2211592\ttotal: 1.55s\tremaining: 81.4ms\n95:\tlearn: 0.2206083\ttotal: 1.56s\tremaining: 65.2ms\n96:\tlearn: 0.2204184\ttotal: 1.58s\tremaining: 48.8ms\n97:\tlearn: 0.2198743\ttotal: 1.59s\tremaining: 32.5ms\n98:\tlearn: 0.2192406\ttotal: 1.6s\tremaining: 16.2ms\n99:\tlearn: 0.2187343\ttotal: 1.61s\tremaining: 0us\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
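The record above ends with a `features`/`train_model` pipeline driven by a `params.yaml` config and a CatBoost fit whose iteration log is captured in the output. The `source.pipeline` module itself is not part of the record, so the sketch below is only an illustration of what such a config-driven training step typically looks like; the config keys (`features_path`, `target_column`, `model_path`, `iterations`, `seed`) and the column handling are assumptions, not the notebook's actual code.

```python
# Hypothetical sketch of a params.yaml-driven training step similar to the
# train_model.train(config_path) call recorded above. All key names and paths
# are assumed for illustration; the real source.pipeline module is not shown.
import yaml
import pandas as pd
from catboost import CatBoostClassifier

def train(config_path: str) -> None:
    # Load the pipeline configuration (e.g. ../params.yaml).
    with open(config_path) as f:
        cfg = yaml.safe_load(f)["train"]

    # Read the engineered feature table and split off the target column.
    df = pd.read_csv(cfg["features_path"])
    X = df.drop(columns=[cfg["target_column"]])
    y = df[cfg["target_column"]]

    # Fit CatBoost; with default verbosity this prints an iteration log
    # like the one captured in the record above.
    model = CatBoostClassifier(iterations=cfg.get("iterations", 100),
                               random_seed=cfg.get("seed", 42))
    model.fit(X, y)
    model.save_model(cfg["model_path"])
```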
d0c0423f0fee26f232ef800709b405ac1021fdc4 | 329,639 | ipynb | Jupyter Notebook | experiments/ede_exp/notebooks/train_xgbv_mem.ipynb | IeAT-ASPIDE/Event-Detection-Engine | 08f36d5fa56cae0e9ef86f61edf193aa4e780177 | [
"Apache-2.0"
] | null | null | null | experiments/ede_exp/notebooks/train_xgbv_mem.ipynb | IeAT-ASPIDE/Event-Detection-Engine | 08f36d5fa56cae0e9ef86f61edf193aa4e780177 | [
"Apache-2.0"
] | 20 | 2020-12-09T15:07:25.000Z | 2022-01-30T20:40:31.000Z | experiments/ede_exp/notebooks/train_xgbv_mem.ipynb | IeAT-ASPIDE/Event-Detection-Engine | 08f36d5fa56cae0e9ef86f61edf193aa4e780177 | [
"Apache-2.0"
] | null | null | null | 514.25741 | 47,887 | 0.933294 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split, StratifiedKFold, KFold, StratifiedShuffleSplit\nfrom sklearn.preprocessing import StandardScaler\nimport xgboost as xgb\nfrom sklearn.metrics import precision_score, recall_score, jaccard_score, roc_auc_score, accuracy_score, classification_report, balanced_accuracy_score\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.externals import joblib\nimport os\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n\npath_parent = os.path.dirname(os.getcwd())\ndata_dir = os.path.join(path_parent,'data')\nmodel_dir = os.path.join(path_parent,'models')\nprocessed_dir = os.path.join(data_dir,'processed')\n# df_anomaly = pd.read_csv(os.path.join(processed_dir,\"anomaly_anotated.csv\"))\ndf_audsome = pd.read_csv(os.path.join(processed_dir,\"anomaly_anotated_audsome.csv\"))",
"_____no_output_____"
],
[
"print(\"Dataset chosen ...\")\ndata = df_audsome\ndrop_col = ['t1','t2','t3','t4']\nprint(\"Remove unwanted columns ...\")\nprint(\"Shape before drop: {}\".format(data.shape))\ndata.drop(drop_col, axis=1, inplace=True)\nprint(\"Shape after drop: {}\".format(data.shape))",
"Dataset chosen ...\nRemove unwanted columns ...\nShape before drop: (3900, 67)\nShape after drop: (3900, 63)\n"
],
[
"# Nice print\nnice_y = data['target']\n\n# Uncomment for removing dummy\nprint(\"Removed Dummy class\")\ndata.loc[data.target == \"dummy\", 'target'] = \"0\"\n# Uncomment for removing cpu\ndata.loc[data.target == \"cpu\", 'target'] = \"0\"\n# Uncomment for removing copy\ndata.loc[data.target == \"copy\", 'target'] = \"0\"\n\n# Print new unique columns\nprint(data['target'].unique())\n\n#Creating the dependent variable class\nfactor = pd.factorize(data['target'])\ndata.target = factor[0]\ndefinitions = factor[1]\n# print(data.target.head())\n# print(definitions)",
"Removed Dummy class\n['0' 'mem']\n"
],
[
"# Plot class distribution\nprint(\"Ploting class distribution ..\")\nclass_dist = sns.countplot(nice_y)\n# plt.title('Confusion Matrix Fold {}'.format(fold), fontsize = 15) # title with fontsize 20\n# plt.xlabel('Ground Truth', fontsize = 10) # x-axis label with fontsize 15\n# plt.ylabel('Predictions', fontsize = 10) # y-axis label with fontsize 15\ndist_fig = \"Class_distribution.png\"\nclass_dist.figure.savefig(os.path.join(model_dir, dist_fig))",
"Ploting class distribution ..\n"
],
[
"print(\"Splitting dataset into training and ground truth ...\")\nX = data.drop(['target', 'time'], axis=1)\ny = data['target']",
"Splitting dataset into training and ground truth ...\n"
],
[
"scaler = StandardScaler()",
"_____no_output_____"
],
[
"# XGB best performing\n\n# paramgrid = {\"n_estimators\": 50,\n# \"max_depth\": 4,\n# \"learning_rate\": 0.1,\n# \"subsample\": 0.2,\n# \"min_child_weight\": 6,\n# \"gamma\": 1,\n# \"seed\": 42,\n# \"objective\": \"multi:softmax\"}\n\nparamgrid = {\"n_estimators\": 1000,\n \"max_depth\": 4,\n \"learning_rate\": 0.01,\n \"subsample\": 0.2,\n \"min_child_weight\": 6,\n \"gamma\": 0,\n \"seed\": 42,\n \"objective\": \"binary:logistic\",\n \"n_jobs\": -1}\n\n# paramgird = {\"n_estimators\": 50,\n# \"max_depth\": 4,\n# \"learning_rate\": 0.1,\n# \"subsample\": 0.2,\n# \"min_child_weight\": 6,\n# \"gamma\": 1,\n# \"seed\": 42,\n# \"objective\": \"multi:softmax\"}\n\nmodel = xgb.XGBClassifier(**paramgrid)\nmodel.get_params().keys()",
"_____no_output_____"
],
[
"# skFold = StratifiedKFold(n_splits=5)\nsss = StratifiedShuffleSplit(n_splits=5, test_size=0.25, random_state=21)\nml_method = 'xgb_mem'\nprint(\"=\"*100)\nclf_models = []\nreport = {\n \"Accuracy\": [],\n \"BallancedAccuracy\": [],\n \"Jaccard\": []\n}\nfold = 1\nfor train_index, test_index in sss.split(X, y):\n # print(\"Train:\", train_index, \"Test:\", test_index)\n print(\"Starting fold {}\".format(fold))\n Xtrain, Xtest = X.iloc[train_index], X.iloc[test_index]\n ytrain, ytest = y.iloc[train_index], y.iloc[test_index]\n print(\"Scaling data ....\")\n Xtrain = scaler.fit_transform(Xtrain)\n Xtest = scaler.transform(Xtest)\n print(\"Start training ....\")\n\n eval_set = [(Xtest, ytest)]\n model.fit(Xtrain, ytrain, early_stopping_rounds=10, eval_set=eval_set, verbose=0)\n # model.fit(Xtrain, ytrain, verbose=True)\n\n # sys.exit()\n # Append model\n clf_models.append(model)\n print(\"Predicting ....\")\n ypred = model.predict(Xtest, ntree_limit=model.best_ntree_limit)\n print(\"-\"*100)\n acc = accuracy_score(ytest, ypred)\n report['Accuracy'].append(acc)\n print(\"Accuracy score fold {} is: {}\".format(fold, acc))\n bacc = balanced_accuracy_score(ytest, ypred)\n report['BallancedAccuracy'].append(bacc)\n print(\"Ballanced accuracy fold {} score is: {}\".format(fold, bacc))\n jaccard = jaccard_score(ytest, ypred)\n print(\"Jaccard score fold {}: {}\".format(fold, jaccard))\n report['Jaccard'].append(jaccard)\n\n print(\"Full classification report for fold {}\".format(fold))\n print(classification_report(ytest, ypred, digits=4,target_names=definitions))\n\n cf_report = classification_report(ytest, ypred, output_dict=True, digits=4, target_names=definitions)\n df_classification_report = pd.DataFrame(cf_report).transpose()\n print(\"Saving classification report\")\n classification_rep_name = \"classification_{}_fold_{}.csv\".format(ml_method, fold)\n df_classification_report.to_csv(os.path.join(model_dir,classification_rep_name), index=False)\n\n print(\"Generating confusion matrix fold {}\".format(fold))\n cf_matrix = confusion_matrix(ytest, ypred)\n\n ht_cf=sns.heatmap(cf_matrix, annot=True, yticklabels=list(definitions), xticklabels=list(definitions))\n plt.title('Confusion Matrix Fold {}'.format(fold), fontsize = 15) # title with fontsize 20\n plt.xlabel('Ground Truth', fontsize = 10) # x-axis label with fontsize 15\n plt.ylabel('Predictions', fontsize = 10) # y-axis label with fontsize 15\n cf_fig = \"CM_{}_{}.png\".format(ml_method, fold)\n ht_cf.figure.savefig(os.path.join(model_dir, cf_fig))\n plt.show()\n\n\n print(\"Extracting Feature improtance ...\")\n\n # xgb.plot_importance(model)\n # plt.title(\"xgboost.plot_importance(model)\")\n # plt.show()\n feat_importances = pd.Series(model.feature_importances_, index=X.columns)\n sorted_feature = feat_importances.sort_values(ascending=True)\n # print(sorted_feature)\n\n\n # Number of columns\n sorted_feature = sorted_feature.tail(20)\n n_col = len(sorted_feature)\n\n # Plot the feature importances of the forest\n\n plt.figure()\n plt.title(\"Feature importances Fold {}\".format(fold), fontsize = 15)\n plt.barh(range(n_col), sorted_feature,\n color=\"r\", align=\"center\")\n\n # If you want to define your own labels,\n # change indices to a list of labels on the following line.\n plt.yticks(range(n_col), sorted_feature.index)\n plt.ylim([-1, n_col])\n fi_fig = \"FI_{}_{}.png\".format(ml_method, fold)\n plt.savefig(os.path.join(model_dir, fi_fig))\n plt.show()\n\n #increment fold count\n fold+=1\n print(\"#\"*100)",
"====================================================================================================\nStarting fold 1\nScaling data ....\nStart training ....\nPredicting ....\n----------------------------------------------------------------------------------------------------\nAccuracy score fold 1 is: 0.9958974358974358\nBallanced accuracy fold 1 score is: 0.9948879260727761\nJaccard score fold 1: 0.9958974358974358\nFull classification report for fold 1\n precision recall f1-score support\n\n 0 0.9988 0.9964 0.9976 823\n mem 0.9805 0.9934 0.9869 152\n\n accuracy 0.9959 975\n macro avg 0.9897 0.9949 0.9922 975\nweighted avg 0.9959 0.9959 0.9959 975\n\nSaving classification report\nGenerating confusion matrix fold 1\nExtracting Feature improtance ...\n####################################################################################################\nStarting fold 2\nScaling data ....\nStart training ....\nPredicting ....\n----------------------------------------------------------------------------------------------------\nAccuracy score fold 2 is: 0.9958974358974358\nBallanced accuracy fold 2 score is: 0.9975698663426489\nJaccard score fold 2: 0.9958974358974358\nFull classification report for fold 2\n precision recall f1-score support\n\n 0 1.0000 0.9951 0.9976 823\n mem 0.9744 1.0000 0.9870 152\n\n accuracy 0.9959 975\n macro avg 0.9872 0.9976 0.9923 975\nweighted avg 0.9960 0.9959 0.9959 975\n\nSaving classification report\nGenerating confusion matrix fold 2\nExtracting Feature improtance ...\n####################################################################################################\nStarting fold 3\nScaling data ....\nStart training ....\nPredicting ....\n----------------------------------------------------------------------------------------------------\nAccuracy score fold 3 is: 0.9969230769230769\nBallanced accuracy fold 3 score is: 0.9928135192172411\nJaccard score fold 3: 0.9969230769230769\nFull classification report for fold 3\n precision recall f1-score support\n\n 0 0.9976 0.9988 0.9982 823\n mem 0.9934 0.9868 0.9901 152\n\n accuracy 0.9969 975\n macro avg 0.9955 0.9928 0.9941 975\nweighted avg 0.9969 0.9969 0.9969 975\n\nSaving classification report\nGenerating confusion matrix fold 3\nExtracting Feature improtance ...\n####################################################################################################\nStarting fold 4\nScaling data ....\nStart training ....\nPredicting ....\n----------------------------------------------------------------------------------------------------\nAccuracy score fold 4 is: 0.9948717948717949\nBallanced accuracy fold 4 score is: 0.996962332928311\nJaccard score fold 4: 0.9948717948717949\nFull classification report for fold 4\n precision recall f1-score support\n\n 0 1.0000 0.9939 0.9970 823\n mem 0.9682 1.0000 0.9838 152\n\n accuracy 0.9949 975\n macro avg 0.9841 0.9970 0.9904 975\nweighted avg 0.9950 0.9949 0.9949 975\n\nSaving classification report\nGenerating confusion matrix fold 4\nExtracting Feature improtance ...\n####################################################################################################\nStarting fold 5\nScaling data ....\nStart training ....\nPredicting ....\n----------------------------------------------------------------------------------------------------\nAccuracy score fold 5 is: 0.9938461538461538\nBallanced accuracy fold 5 score is: 0.9936728592441005\nJaccard score fold 5: 0.9938461538461538\nFull classification report for fold 5\n precision recall f1-score support\n\n 0 
0.9988 0.9939 0.9963 823\n mem 0.9679 0.9934 0.9805 152\n\n accuracy 0.9938 975\n macro avg 0.9834 0.9937 0.9884 975\nweighted avg 0.9940 0.9938 0.9939 975\n\nSaving classification report\nGenerating confusion matrix fold 5\nExtracting Feature improtance ...\n####################################################################################################\n"
],
[
"print(\"Saving final report ...\")\n# Validation Report\ndf_report = pd.DataFrame(report)\nfinal_report = \"Model_{}_report.csv\".format(ml_method)\ndf_report.to_csv(os.path.join(model_dir,final_report), index=False)",
"Saving final report ...\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
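The notebook recorded above evaluates an XGBoost classifier for memory-anomaly detection with StratifiedShuffleSplit folds, per-fold standard scaling, early stopping on the held-out fold, and prediction at `best_ntree_limit`. The condensed sketch below isolates that evaluation pattern on synthetic stand-in data (the ASPIDE dataset is not reproduced here); it keeps the hyperparameters and API calls that appear in the record, including the `ntree_limit`/`best_ntree_limit` pair, which newer xgboost releases deprecate.

```python
# Condensed sketch of the cross-validation pattern used in the record above,
# run on synthetic stand-in data rather than the original anomaly dataset.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=21)
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.25, random_state=21)

for fold, (tr, te) in enumerate(sss.split(X, y), start=1):
    scaler = StandardScaler()
    Xtr, Xte = scaler.fit_transform(X[tr]), scaler.transform(X[te])
    model = xgb.XGBClassifier(n_estimators=1000, max_depth=4, learning_rate=0.01,
                              subsample=0.2, min_child_weight=6, gamma=0,
                              seed=42, objective="binary:logistic")
    # Early stopping on the held-out fold, as in the record.
    model.fit(Xtr, y[tr], early_stopping_rounds=10,
              eval_set=[(Xte, y[te])], verbose=0)
    # Predict with the best iteration count (older xgboost API, as in the record).
    pred = model.predict(Xte, ntree_limit=model.best_ntree_limit)
    print(f"fold {fold}: balanced accuracy = {balanced_accuracy_score(y[te], pred):.3f}")
```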
d0c05d9936a0d2ebd2684e4464821b86f8063af9 | 14,519 | ipynb | Jupyter Notebook | examples/learning/learn_communication_channel.ipynb | hunse/nengo | 5fcd7b18aa9496e5c47c38c6408430cd9f68a720 | [
"BSD-2-Clause"
] | null | null | null | examples/learning/learn_communication_channel.ipynb | hunse/nengo | 5fcd7b18aa9496e5c47c38c6408430cd9f68a720 | [
"BSD-2-Clause"
] | null | null | null | examples/learning/learn_communication_channel.ipynb | hunse/nengo | 5fcd7b18aa9496e5c47c38c6408430cd9f68a720 | [
"BSD-2-Clause"
] | null | null | null | 33.531178 | 155 | 0.529995 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0c0682abf5b31acfafcecc75ebf31edd19f1a92 | 73,690 | ipynb | Jupyter Notebook | examples/Causal Games.ipynb | arpan0317/pgmpy | e36176139cbe2c7706bc3e8cdf2765f8919c6eb8 | [
"MIT"
] | 2,144 | 2015-01-05T21:25:04.000Z | 2022-03-31T08:24:15.000Z | examples/Causal Games.ipynb | arpan0317/pgmpy | e36176139cbe2c7706bc3e8cdf2765f8919c6eb8 | [
"MIT"
] | 1,181 | 2015-01-04T18:19:44.000Z | 2022-03-30T17:21:19.000Z | examples/Causal Games.ipynb | arpan0317/pgmpy | e36176139cbe2c7706bc3e8cdf2765f8919c6eb8 | [
"MIT"
] | 777 | 2015-01-01T11:13:27.000Z | 2022-03-28T12:31:57.000Z | 111.820941 | 10,528 | 0.857525 | [
[
[
"<a href=\"https://colab.research.google.com/github/mrklees/pgmpy/blob/feature%2Fcausalmodel/examples/Causal_Games.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Causal Games\n\nCausal Inference is a new feature for pgmpy, so I wanted to develop a few examples which show off the features that we're developing! \n\nThis particular notebook walks through the 5 games that used as examples for building intuition about backdoor paths in *The Book of Why* by Judea Peal. I have consistently been using them to test different implementations of backdoor adjustment from different libraries and include them as unit tests in pgmpy, so I wanted to walk through them and a few other related games as a potential resource to both understand the implementation of CausalInference in pgmpy, as well as develope some useful intuitions about backdoor paths. \n\n## Objective of the Games\n\nFor each game we get a causal graph, and our goal is to identify the set of deconfounders (often denoted $Z$) which will close all backdoor paths from nodes $X$ to $Y$. For the time being, I'll assume that you're familiar with the concept of backdoor paths, though I may expand this portion to explain it. ",
"_____no_output_____"
]
],
[
[
"import sys\n\n!pip3 install -q daft\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport daft\nfrom daft import PGM\n\n# We can now import the development version of pgmpy\nfrom pgmpy.models.BayesianModel import BayesianModel\nfrom pgmpy.inference.CausalInference import CausalInference\n\ndef convert_pgm_to_pgmpy(pgm):\n \"\"\"Takes a Daft PGM object and converts it to a pgmpy BayesianModel\"\"\"\n edges = [(edge.node1.name, edge.node2.name) for edge in pgm._edges]\n model = BayesianModel(edges)\n return model",
"_____no_output_____"
],
[
"#@title # Game 1\n#@markdown While this is a \"trivial\" example, many statisticians would consider including either or both A and B in their models \"just for good measure\". Notice though how controlling for A would close off the path of causal information from X to Y, actually *impeding* your effort to measure that effect.\npgm = PGM(shape=[4, 3])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 2))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 2))\npgm.add_node(daft.Node('A', r\"A\", 2, 2))\npgm.add_node(daft.Node('B', r\"B\", 2, 1))\n\n\npgm.add_edge('X', 'A')\npgm.add_edge('A', 'Y')\npgm.add_edge('A', 'B')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"#@markdown Notice how there are no nodes with arrows pointing into X. Said another way, X has no parents. Therefore, there can't be any backdoor paths confounding X and Y. pgmpy will confirm this in the following way:\ngame1 = convert_pgm_to_pgmpy(pgm)\ninference1 = CausalInference(game1)\nprint(f\"Are there are active backdoor paths? {not inference1.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nadj_sets = inference1.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {adj_sets}\")",
"Are there are active backdoor paths? False\nIf so, what's the possible backdoor adjustment sets? frozenset()\n"
],
[
"#@title # Game 2\n#@markdown This graph looks harder, but actualy is also trivial to solve. The key is noticing the one backdoor path, which goes from X <- A -> B <- D -> E -> Y, has a collider at B (or a 'V structure'), and therefore the backdoor path is closed. \npgm = PGM(shape=[4, 4])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 1, 3))\npgm.add_node(daft.Node('B', r\"B\", 2, 3))\npgm.add_node(daft.Node('C', r\"C\", 3, 3))\npgm.add_node(daft.Node('D', r\"D\", 2, 2))\npgm.add_node(daft.Node('E', r\"E\", 2, 1))\n\n\npgm.add_edge('X', 'E')\npgm.add_edge('A', 'X')\npgm.add_edge('A', 'B')\npgm.add_edge('B', 'C')\npgm.add_edge('D', 'B')\npgm.add_edge('D', 'E')\npgm.add_edge('E', 'Y')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nadj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {adj_sets}\")",
"Are there are active backdoor paths? False\nIf so, what's the possible backdoor adjustment sets? frozenset()\n"
],
[
"#@title # Game 3\n#@markdown This game actually requires some action. Notice the backdoor path X <- B -> Y. This is a confounding pattern, is one of the clearest signs that we'll need to control for something, in this case B. \npgm = PGM(shape=[4, 4])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 2, 1.75))\npgm.add_node(daft.Node('B', r\"B\", 2, 3))\n\n\npgm.add_edge('X', 'Y')\npgm.add_edge('X', 'A')\npgm.add_edge('B', 'A')\npgm.add_edge('B', 'X')\npgm.add_edge('B', 'Y')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nadj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {adj_sets}\")",
"Are there are active backdoor paths? True\nIf so, what's the possible backdoor adjustment sets? frozenset({frozenset({'B'})})\n"
],
[
"#@title # Game 4\n#@markdown Pearl named this particular configuration \"M Bias\", not only because of it's shape, but also because of the common practice of statisticians to want to control for B in many situations. However, notice how in this configuration X and Y start out as *not confounded* and how by controlling for B we would actually introduce confounding by opening the path at the collider, B. \npgm = PGM(shape=[4, 4])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 1, 3))\npgm.add_node(daft.Node('B', r\"B\", 2, 2))\npgm.add_node(daft.Node('C', r\"C\", 3, 3))\n\n\npgm.add_edge('A', 'X')\npgm.add_edge('A', 'B')\npgm.add_edge('C', 'B')\npgm.add_edge('C', 'Y')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nadj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {adj_sets}\")",
"Are there are active backdoor paths? False\nIf so, what's the possible backdoor adjustment sets? frozenset()\n"
],
[
"#@title # Game 5\n#@markdown This is the last game in The Book of Why is the most complex. In this case we have two backdoor paths, one going through A and the other through B, and it's important to notice that if we only control for B that the path: X <- A -> B <- C -> Y (which starts out as closed because B is a collider) actually is opened. Therefore we have to either close both A and B or, as astute observers will notice, we can also just close C and completely close both backdoor paths. pgmpy will nicely confirm these results for us. \npgm = PGM(shape=[4, 4])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 1, 3))\npgm.add_node(daft.Node('B', r\"B\", 2, 2))\npgm.add_node(daft.Node('C', r\"C\", 3, 3))\n\n\npgm.add_edge('A', 'X')\npgm.add_edge('A', 'B')\npgm.add_edge('C', 'B')\npgm.add_edge('C', 'Y')\npgm.add_edge(\"X\", \"Y\")\npgm.add_edge(\"B\", \"X\")\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nadj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {adj_sets}\")",
"Are there are active backdoor paths? True\nIf so, what's the possible backdoor adjustment sets? frozenset({frozenset({'A', 'B'}), frozenset({'C'})})\n"
],
[
"#@title # Game 6\n#@markdown So these are no longer drawn from The Book of Why, but were either drawn from another source (which I will reference) or a developed to try to induce a specific bug. \n#@markdown This example is drawn from Causality by Pearl on p. 80. This example is kind of interesting because there are many possible combinations of nodes which will close the two backdoor paths which exist in this graph. In turns out that D plus any other node in {A, B, C, E} will deconfound X and Y. \n\npgm = PGM(shape=[4, 4])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 1, 3))\npgm.add_node(daft.Node('B', r\"B\", 3, 3))\npgm.add_node(daft.Node('C', r\"C\", 1, 2))\npgm.add_node(daft.Node('D', r\"D\", 2, 2))\npgm.add_node(daft.Node('E', r\"E\", 3, 2))\npgm.add_node(daft.Node('F', r\"F\", 2, 1))\n\n\npgm.add_edge('X', 'F')\npgm.add_edge('F', 'Y')\npgm.add_edge('C', 'X')\npgm.add_edge('A', 'C')\npgm.add_edge('A', 'D')\npgm.add_edge('D', 'X')\npgm.add_edge('D', 'Y')\npgm.add_edge('B', 'D')\npgm.add_edge('B', 'E')\npgm.add_edge('E', 'Y')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nbd_adj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {bd_adj_sets}\")\nfd_adj_sets = inference.get_all_frontdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"Ehat's the possible front adjustment sets? {fd_adj_sets}\")",
"Are there are active backdoor paths? True\nIf so, what's the possible backdoor adjustment sets? frozenset({frozenset({'A', 'D'}), frozenset({'E', 'D'}), frozenset({'B', 'D'}), frozenset({'D', 'C'})})\nEhat's the possible front adjustment sets? frozenset({frozenset({'F'})})\n"
],
[
"#@title # Game 7\n#@markdown This game tests the front door adjustment. B is taken to be unobserved, and therfore we cannot close the backdoor path X <- B -> Y. \npgm = PGM(shape=[4, 3])\n\npgm.add_node(daft.Node('X', r\"X\", 1, 1))\npgm.add_node(daft.Node('Y', r\"Y\", 3, 1))\npgm.add_node(daft.Node('A', r\"A\", 2, 1))\npgm.add_node(daft.Node('B', r\"B\", 2, 2))\n\n\npgm.add_edge('X', 'A')\npgm.add_edge('A', 'Y')\npgm.add_edge('B', 'X')\npgm.add_edge('B', 'Y')\n\npgm.render()\nplt.show()",
"_____no_output_____"
],
[
"graph = convert_pgm_to_pgmpy(pgm)\ninference = CausalInference(graph)\nprint(f\"Are there are active backdoor paths? {not inference.is_valid_backdoor_adjustment_set('X', 'Y')}\")\nbd_adj_sets = inference.get_all_backdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"If so, what's the possible backdoor adjustment sets? {bd_adj_sets}\")\nfd_adj_sets = inference.get_all_frontdoor_adjustment_sets(\"X\", \"Y\")\nprint(f\"Ehat's the possible front adjustment sets? {fd_adj_sets}\")",
"Are there are active backdoor paths? True\nIf so, what's the possible backdoor adjustment sets? frozenset({frozenset({'B'})})\nEhat's the possible front adjustment sets? frozenset({frozenset({'A'})})\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c07d921c66ec4e36b551ca859c944c927c1465 | 22,129 | ipynb | Jupyter Notebook | Contradictory, My Dear Watson/contradictory.ipynb | thesaravanakumar/kaggle | 2021f98c80aefe3b85f8a94347b5f55c2055aac4 | [
"MIT"
] | null | null | null | Contradictory, My Dear Watson/contradictory.ipynb | thesaravanakumar/kaggle | 2021f98c80aefe3b85f8a94347b5f55c2055aac4 | [
"MIT"
] | null | null | null | Contradictory, My Dear Watson/contradictory.ipynb | thesaravanakumar/kaggle | 2021f98c80aefe3b85f8a94347b5f55c2055aac4 | [
"MIT"
] | null | null | null | 50.522831 | 307 | 0.545212 | [
[
[
"import sys\nimport os\nimport math\nimport subprocess\nimport pandas as pd\nimport numpy as np\nfrom tqdm import tqdm\nimport random\nimport torch\nimport torch.nn as nn\n\n#Initialise the random seeds\ndef random_init(**kwargs):\n random.seed(kwargs['seed'])\n torch.manual_seed(kwargs['seed'])\n torch.cuda.manual_seed(kwargs['seed'])\n torch.backends.cudnn.deterministic = True\n\ndef normalise(text):\n chars = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')\n text = text.upper()\n words=[]\n for w in text.strip().split():\n if w.startswith('HTTP'):\n continue\n while len(w)>0 and w[0] not in chars:\n w = w[1:]\n while len(w)>0 and w[-1] not in chars:\n w = w[:-1]\n if len(w) == 0:\n continue\n words.append(w)\n text=' '.join(words)\n return text\n\ndef read_vocabulary(train_text, **kwargs):\n vocab = dict()\n counts = dict()\n num_words = 0\n for line in train_text:\n line = (list(line.strip()) if kwargs['characters'] else line.strip().split())\n for char in line:\n if char not in vocab:\n vocab[char] = num_words\n counts[char] = 0\n num_words+=1\n counts[char] += 1\n num_words = 0\n vocab2 = dict()\n if not kwargs['characters']:\n for w in vocab:\n if counts[w] >= args['min_count']:\n vocab2[w] = num_words\n num_words += 1\n vocab = vocab2\n for word in [kwargs['start_token'],kwargs['end_token'],kwargs['unk_token']]:\n if word not in vocab:\n vocab[word] = num_words\n num_words += 1\n return vocab\n\ndef load_data(premise, hypothesis, targets=None, cv=False, **kwargs):\n assert len(premise) == len(hypothesis)\n num_seq = len(premise)\n max_words = max([len(t) for t in premise+hypothesis])+2\n dataset = len(kwargs['vocab'])*torch.ones((2,max_words,num_seq),dtype=torch.long)\n labels = torch.zeros((num_seq),dtype=torch.uint8)\n idx = 0\n utoken_value = kwargs['vocab'][kwargs['unk_token']]\n for i,line in tqdm(enumerate(premise),desc='Allocating data memory',disable=(kwargs['verbose']<2)):\n words = (list(line.strip()) if kwargs['characters'] else line.strip().split())\n if len(words)==0 or words[0] != kwargs['start_token']:\n words.insert(0,kwargs['start_token'])\n if words[-1] != kwargs['end_token']:\n words.append(kwargs['end_token'])\n for jdx,word in enumerate(words):\n dataset[0,jdx,idx] = kwargs['vocab'].get(word,utoken_value)\n line=hypothesis[i]\n words = (list(line.strip()) if kwargs['characters'] else line.strip().split())\n if len(words)==0 or words[0] != kwargs['start_token']:\n words.insert(0,kwargs['start_token'])\n if words[-1] != kwargs['end_token']:\n words.append(kwargs['end_token'])\n for jdx,word in enumerate(words):\n dataset[1,jdx,idx] = kwargs['vocab'].get(word,utoken_value)\n if targets is not None:\n labels[idx] = targets[i]\n idx += 1\n\n if cv == False:\n return dataset, labels\n\n idx = [i for i in range(num_seq)]\n random.shuffle(idx)\n trainset = dataset[:,:,idx[0:int(num_seq*(1-kwargs['cv_percentage']))]]\n trainlabels = labels[idx[0:int(num_seq*(1-kwargs['cv_percentage']))]]\n validset = dataset[:,:,idx[int(num_seq*(1-kwargs['cv_percentage'])):]]\n validlabels = labels[idx[int(num_seq*(1-kwargs['cv_percentage'])):]]\n return trainset, validset, trainlabels, validlabels\n\nclass LSTMEncoder(nn.Module):\n def __init__(self, **kwargs):\n \n super(LSTMEncoder, self).__init__()\n #Base variables\n self.vocab = kwargs['vocab']\n self.in_dim = len(self.vocab)\n self.start_token = kwargs['start_token']\n self.end_token = kwargs['end_token']\n self.unk_token = kwargs['unk_token']\n self.characters = kwargs['characters']\n self.embed_dim = kwargs['embedding_size']\n 
self.hid_dim = kwargs['hidden_size']\n self.n_layers = kwargs['num_layers']\n \n #Define the embedding layer\n self.embed = nn.Embedding(self.in_dim+1,self.embed_dim,padding_idx=self.in_dim)\n #Define the lstm layer\n self.lstm = nn.LSTM(input_size=self.embed_dim,hidden_size=self.hid_dim,num_layers=self.n_layers)\n \n def forward(self, inputs, lengths):\n #Inputs are size (LxBx1)\n #Forward embedding layer\n emb = self.embed(inputs)\n #Embeddings are size (LxBxself.embed_dim)\n\n #Pack the sequences for GRU\n packed = torch.nn.utils.rnn.pack_padded_sequence(emb, lengths)\n #Forward the GRU\n packed_rec, self.hidden = self.lstm(packed,self.hidden)\n #Unpack the sequences\n rec, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_rec)\n #Hidden outputs are size (LxBxself.hidden_size)\n \n #Get last embeddings\n out = rec[lengths-1,list(range(rec.shape[1])),:]\n #Outputs are size (Bxself.hid_dim)\n \n return out\n \n def init_hidden(self, bsz):\n #Initialise the hidden state\n weight = next(self.parameters())\n self.hidden = (weight.new_zeros(self.n_layers, bsz, self.hid_dim),weight.new_zeros(self.n_layers, bsz, self.hid_dim))\n\n def detach_hidden(self):\n #Detach the hidden state\n self.hidden=(self.hidden[0].detach(),self.hidden[1].detach())\n\n def cpu_hidden(self):\n #Set the hidden state to CPU\n self.hidden=(self.hidden[0].detach().cpu(),self.hidden[1].detach().cpu())\n \nclass Predictor(nn.Module):\n def __init__(self, **kwargs):\n \n super(Predictor, self).__init__()\n self.hid_dim = kwargs['hidden_size']*2\n self.out_dim = 3\n #Define the output layer and softmax\n self.linear = nn.Linear(self.hid_dim,self.out_dim)\n self.softmax = nn.LogSoftmax(dim=1)\n \n def forward(self,input1,input2):\n #Outputs are size (Bxself.hid_dim)\n inputs = torch.cat((input1,input2),dim=1)\n out = self.softmax(self.linear(inputs))\n return out\n\ndef train_model(trainset,trainlabels,encoder,predictor,optimizer,criterion,**kwargs):\n trainlen = trainset.shape[2]\n nbatches = math.ceil(trainlen/kwargs['batch_size'])\n total_loss = 0\n total_backs = 0\n with tqdm(total=nbatches,disable=(kwargs['verbose']<2)) as pbar:\n encoder = encoder.train()\n for b in range(nbatches):\n #Data batch\n X1 = trainset[0,:,b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])\n mask1 = torch.clamp(len(kwargs['vocab'])-X1,max=1)\n seq_length1 = torch.sum(mask1,dim=0)\n ordered_seq_length1, dec_index1 = seq_length1.sort(descending=True)\n max_seq_length1 = torch.max(seq_length1)\n X1 = X1[:,dec_index1]\n X1 = X1[0:max_seq_length1]\n rev_dec_index1 = list(range(seq_length1.shape[0]))\n for i,j in enumerate(dec_index1):\n rev_dec_index1[j] = i\n X2 = trainset[1,:,b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])\n mask2 = torch.clamp(len(kwargs['vocab'])-X2,max=1)\n seq_length2 = torch.sum(mask2,dim=0)\n ordered_seq_length2, dec_index2 = seq_length2.sort(descending=True)\n max_seq_length2 = torch.max(seq_length2)\n X2 = X2[:,dec_index2]\n X2 = X2[0:max_seq_length2]\n rev_dec_index2 = list(range(seq_length2.shape[0]))\n for i,j in enumerate(dec_index2):\n rev_dec_index2[j] = i\n Y = trainlabels[b*kwargs['batch_size']:min(trainlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])\n #Forward pass\n encoder.init_hidden(X1.size(1))\n embeddings1 = encoder(X1,ordered_seq_length1)\n encoder.detach_hidden()\n encoder.init_hidden(X2.size(1))\n embeddings2 = encoder(X2,ordered_seq_length2)\n embeddings1 = 
embeddings1[rev_dec_index1]\n embeddings2 = embeddings2[rev_dec_index2]\n posteriors = predictor(embeddings1,embeddings2)\n loss = criterion(posteriors,Y)\n #Backpropagate\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n #Estimate the latest loss\n if total_backs == 100:\n total_loss = total_loss*0.99+loss.detach().cpu().numpy()\n else:\n total_loss += loss.detach().cpu().numpy()\n total_backs += 1\n encoder.detach_hidden()\n pbar.set_description(f'Training epoch. Loss {total_loss/(total_backs+1):.2f}')\n pbar.update()\n return total_loss/(total_backs+1)\n\ndef evaluate_model(testset,encoder,predictor,**kwargs):\n testlen = testset.shape[2]\n nbatches = math.ceil(testlen/kwargs['batch_size'])\n predictions = np.zeros((testlen,))\n with torch.no_grad():\n encoder = encoder.eval()\n for b in range(nbatches):\n #Data batch\n X1 = testset[0,:,b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])\n mask1 = torch.clamp(len(kwargs['vocab'])-X1,max=1)\n seq_length1 = torch.sum(mask1,dim=0)\n ordered_seq_length1, dec_index1 = seq_length1.sort(descending=True)\n max_seq_length1 = torch.max(seq_length1)\n X1 = X1[:,dec_index1]\n X1 = X1[0:max_seq_length1]\n rev_dec_index1 = list(range(seq_length1.shape[0]))\n for i,j in enumerate(dec_index1):\n rev_dec_index1[j] = i\n X2 = testset[1,:,b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])].clone().long().to(kwargs['device'])\n mask2 = torch.clamp(len(kwargs['vocab'])-X2,max=1)\n seq_length2 = torch.sum(mask2,dim=0)\n ordered_seq_length2, dec_index2 = seq_length2.sort(descending=True)\n max_seq_length2 = torch.max(seq_length2)\n X2 = X2[:,dec_index2]\n X2 = X2[0:max_seq_length2]\n rev_dec_index2 = list(range(seq_length2.shape[0]))\n for i,j in enumerate(dec_index2):\n rev_dec_index2[j] = i\n #Forward pass\n encoder.init_hidden(X1.size(1))\n embeddings1 = encoder(X1,ordered_seq_length1)\n encoder.init_hidden(X2.size(1))\n embeddings2 = encoder(X2,ordered_seq_length2)\n embeddings1 = embeddings1[rev_dec_index1]\n embeddings2 = embeddings2[rev_dec_index2]\n posteriors = predictor(embeddings1,embeddings2)\n #posteriors = model(X,ordered_seq_length)\n estimated = torch.argmax(posteriors,dim=1)\n predictions[b*kwargs['batch_size']:min(testlen,(b+1)*kwargs['batch_size'])] = estimated.detach().cpu().numpy()\n return predictions\n \n#Arguments\nargs = {\n 'cv_percentage': 0.1,\n 'epochs': 20,\n 'batch_size': 128,\n 'embedding_size': 16,\n 'hidden_size': 64,\n 'num_layers': 1,\n 'learning_rate': 0.01,\n 'seed': 0,\n 'start_token': '<s>',\n 'end_token': '<\\s>',\n 'unk_token': '<UNK>',\n 'verbose': 1,\n 'characters': False,\n 'min_count': 15,\n 'device': torch.device(('cuda:0' if torch.cuda.is_available() else 'cpu'))\n }\n\n#Read data\ntrain_data = pd.read_csv('/kaggle/input/contradictory-my-dear-watson/train.csv')\ntest_data = pd.read_csv('/kaggle/input/contradictory-my-dear-watson/test.csv')\n#Extract only English language cases\ntrain_data = train_data.loc[train_data['language']=='English']\ntest_data = test_data.loc[test_data['language']=='English']\n#Extract premises and hypothesis\ntrain_premise = [normalise(v) for v in train_data.premise.values]\ntrain_hypothesis = [normalise(v) for v in train_data.hypothesis.values]\ntest_premise = [normalise(v) for v in test_data.premise.values]\ntest_hypothesis = [normalise(v) for v in test_data.hypothesis.values]\ntrain_targets = train_data.label.values\nprint('Training: {0:d} pairs in English. 
Evaluation: {1:d} pairs in English'.format(len(train_premise),len(test_premise)))\nprint('Label distribution in training set: {0:s}'.format(str({i:'{0:.2f}%'.format(100*len(np.where(train_targets==i)[0])/len(train_targets)) for i in [0,1,2]})))\n\nbatch_sizes = [64,128,256]\nmin_counts = [5,15,25]\n\nit_idx = 0\nvalid_predictions = dict()\ntest_predictions = dict()\nvalid_accuracies = dict()\n\nfor batch_size in batch_sizes:\n for min_count in min_counts:\n args['batch_size'] = batch_size\n args['min_count'] = min_count\n \n random_init(**args)\n\n #Make vocabulary and load data\n args['vocab'] = read_vocabulary(train_premise+train_hypothesis, **args)\n #print('Vocabulary size: {0:d} tokens'.format(len(args['vocab'])))\n trainset, validset, trainlabels, validlabels = load_data(train_premise, train_hypothesis, train_targets, cv=True, **args)\n testset, _ = load_data(test_premise, test_hypothesis, None, cv=False, **args)\n\n #Create model, optimiser and criterion\n encoder = LSTMEncoder(**args).to(args['device'])\n predictor = Predictor(**args).to(args['device'])\n optimizer = torch.optim.Adam(list(encoder.parameters())+list(predictor.parameters()),lr=args['learning_rate'])\n criterion = nn.NLLLoss(reduction='mean').to(args['device'])\n\n #Train epochs\n best_acc = 0.0\n for ep in range(1,args['epochs']+1):\n loss = train_model(trainset,trainlabels,encoder,predictor,optimizer,criterion,**args)\n val_pred = evaluate_model(validset,encoder,predictor,**args)\n test_pred = evaluate_model(testset,encoder,predictor,**args)\n acc = 100*len(np.where((val_pred-validlabels.numpy())==0)[0])/validset.shape[2]\n if acc >= best_acc:\n best_acc = acc\n best_epoch = ep\n best_loss = loss\n valid_predictions[it_idx] = val_pred\n valid_accuracies[it_idx] = acc\n test_predictions[it_idx] = test_pred\n print('Run {0:d}. Best epoch: {1:d} of {2:d}. Training loss: {3:.2f}, validation accuracy: {4:.2f}%, test label distribution: {5:s}'.format(it_idx+1,best_epoch,args['epochs'],best_loss,best_acc,str({i:'{0:.2f}%'.format(100*len(np.where(test_pred==i)[0])/len(test_pred)) for i in [0,1,2]})))\n it_idx += 1\n\n#Do the score combination\nbest_epochs = np.argsort([valid_accuracies[ep] for ep in range(it_idx)])[::-1]\nval_pred = np.array([valid_predictions[ep] for ep in best_epochs[0:5]])\nval_pred = np.argmax(np.array([np.sum((val_pred==i).astype(int),axis=0) for i in [0,1,2]]),axis=0)\ntest_pred = np.array([test_predictions[ep] for ep in best_epochs[0:5]])\ntest_pred = np.argmax(np.array([np.sum((test_pred==i).astype(int),axis=0) for i in [0,1,2]]),axis=0)\nacc = 100*len(np.where((val_pred-validlabels.numpy())==0)[0])/validset.shape[2]\nprint('Ensemble. Cross-validation accuracy: {0:.2f}%, test label distribution: {1:s}'.format(acc,str({i:'{0:.2f}%'.format(100*len(np.where(test_pred==i)[0])/len(test_pred)) for i in [0,1,2]})))\n#Set all predictions to the majority category\ndf_out = pd.DataFrame({'id': pd.read_csv('/kaggle/input/contradictory-my-dear-watson/test.csv')['id'], 'prediction': np.argmax([len(np.where(train_targets==i)[0]) for i in [0,1,2]])})\n#Set only English language cases to the predicted labels\ndf_out.loc[df_out['id'].isin(test_data['id']),'prediction']=test_pred\ndf_out.to_csv('/kaggle/working/submission.csv'.format(it_idx,acc),index=False)",
"Training: 6870 pairs in English. Evaluation: 2945 pairs in English\nLabel distribution in training set: {0: '35.33%', 1: '31.53%', 2: '33.14%'}\nRun 1. Best epoch: 4 of 20. Training loss: 0.94, validation accuracy: 42.50%, test label distribution: {0: '35.93%', 1: '32.43%', 2: '31.65%'}\nRun 2. Best epoch: 6 of 20. Training loss: 0.86, validation accuracy: 43.38%, test label distribution: {0: '39.15%', 1: '26.93%', 2: '33.92%'}\nRun 3. Best epoch: 3 of 20. Training loss: 1.01, validation accuracy: 43.81%, test label distribution: {0: '30.87%', 1: '37.01%', 2: '32.12%'}\nRun 4. Best epoch: 4 of 20. Training loss: 0.91, validation accuracy: 41.78%, test label distribution: {0: '35.04%', 1: '30.46%', 2: '34.50%'}\nRun 5. Best epoch: 6 of 20. Training loss: 0.82, validation accuracy: 47.02%, test label distribution: {0: '35.79%', 1: '34.57%', 2: '29.64%'}\nRun 6. Best epoch: 5 of 20. Training loss: 0.94, validation accuracy: 45.41%, test label distribution: {0: '35.28%', 1: '30.02%', 2: '34.70%'}\nRun 7. Best epoch: 4 of 20. Training loss: 0.93, validation accuracy: 41.19%, test label distribution: {0: '30.36%', 1: '38.61%', 2: '31.04%'}\nRun 8. Best epoch: 6 of 20. Training loss: 0.84, validation accuracy: 45.41%, test label distribution: {0: '32.50%', 1: '36.77%', 2: '30.73%'}\nRun 9. Best epoch: 5 of 20. Training loss: 0.93, validation accuracy: 44.54%, test label distribution: {0: '24.14%', 1: '31.00%', 2: '44.86%'}\nEnsemble. Cross-validation accuracy: 48.03%, test label distribution: {0: '43.50%', 1: '29.30%', 2: '27.20%'}\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
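The pgmpy record above builds each game as a daft figure and then converts it to a BayesianModel before querying CausalInference. Below is a compact sketch of Game 5 assembled directly from its edge list, without the drawing step; it uses only the classes and methods that already appear in the record (`BayesianModel`, `CausalInference`, `is_valid_backdoor_adjustment_set`, `get_all_backdoor_adjustment_sets`), so nothing about the pgmpy API beyond what the record shows is assumed.

```python
# Game 5 from the record above, built directly from its edge list (no daft drawing).
from pgmpy.models.BayesianModel import BayesianModel
from pgmpy.inference.CausalInference import CausalInference

game5 = BayesianModel([
    ("A", "X"), ("A", "B"),   # X <- A -> B ... (first backdoor route)
    ("C", "B"), ("C", "Y"),   # ... B <- C -> Y
    ("B", "X"),               # second backdoor route through B
    ("X", "Y"),               # the causal edge of interest
])
inference = CausalInference(game5)

# False here means the empty adjustment set does not block every backdoor path.
print(inference.is_valid_backdoor_adjustment_set("X", "Y"))
# As in the record, the valid adjustment sets are {A, B} and {C}.
print(inference.get_all_backdoor_adjustment_sets("X", "Y"))
```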
d0c0904987fb1f99d2b8a42676627964d898f55f | 64,150 | ipynb | Jupyter Notebook | Data_Science/Matplotlib/2D/2D_Linha_Parte_3.ipynb | maledicente/cursos | 00ace48da7e48b04485e4ca97b3ca9ba5f33a283 | [
"MIT"
] | 1 | 2021-05-03T22:59:38.000Z | 2021-05-03T22:59:38.000Z | Data_Science/Matplotlib/2D/2D_Linha_Parte_3.ipynb | maledicente/cursos | 00ace48da7e48b04485e4ca97b3ca9ba5f33a283 | [
"MIT"
] | null | null | null | Data_Science/Matplotlib/2D/2D_Linha_Parte_3.ipynb | maledicente/cursos | 00ace48da7e48b04485e4ca97b3ca9ba5f33a283 | [
"MIT"
] | null | null | null | 278.913043 | 15,692 | 0.929462 | [
[
[
"# Matplotlib - Gráfico de linha - Parte 3",
"_____no_output_____"
],
[
"* Parte 3: Tipo da linha\n",
"_____no_output_____"
],
[
"* Importando bibliotecas",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"* Criando variável X",
"_____no_output_____"
]
],
[
[
"X = np.linspace(0,5,11)",
"_____no_output_____"
]
],
[
[
"* Criando funções Y(X) e Z(X)",
"_____no_output_____"
]
],
[
[
"Y = np.power(X-2,2.0)",
"_____no_output_____"
],
[
"Z = np.exp(X/2.0)",
"_____no_output_____"
]
],
[
[
"* Gráfico de Y e Z em função de X",
"_____no_output_____"
]
],
[
[
"plt.plot(X,Y,label='Y(X)')\nplt.plot(X,Z,label='Z(X)')\nplt.xlabel('Eixo X')\nplt.ylabel('Eixo Y')\nplt.show()",
"_____no_output_____"
]
],
[
[
"* Alterando linha da função Z(X)",
"_____no_output_____"
]
],
[
[
"plt.plot(X,Y,label='Y(X)')\nplt.plot(X,Z,label='Z(X)',linestyle='--')\nplt.xlabel('Eixo X')\nplt.ylabel('Eixo Y')\nplt.show()",
"_____no_output_____"
]
],
[
[
"* Usando nomes para os tipos de linhas",
"_____no_output_____"
]
],
[
[
"plt.plot(X,Y,label='Y(X)', linestyle='dashed')\nplt.plot(X,Z,label='Z(X)',linestyle='dashdot')\nplt.xlabel('Eixo X')\nplt.ylabel('Eixo Y')\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(X,Y,label='Y(X)', linestyle='solid')\nplt.plot(X,Z,label='Z(X)',linestyle='dotted')\nplt.xlabel('Eixo X')\nplt.ylabel('Eixo Y')\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
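The tutorial recorded above switches the `linestyle` of one curve at a time, first with the short code `'--'` and then with the names `'dashed'`, `'dashdot'`, `'solid'`, and `'dotted'`. The short companion sketch below plots the same parabola once per style to show that the short codes and the names are interchangeable; the vertical offsets are only there to keep the curves from overlapping.

```python
# Companion sketch: short line-style codes and their named equivalents side by side.
import numpy as np
import matplotlib.pyplot as plt

X = np.linspace(0, 5, 11)
Y = np.power(X - 2, 2.0)

styles = [("-", "solid"), ("--", "dashed"), ("-.", "dashdot"), (":", "dotted")]
for i, (short, name) in enumerate(styles):
    # Offset each curve vertically so the four styles stay visible.
    plt.plot(X, Y + 2 * i, linestyle=short, label=f"'{short}' / '{name}'")
plt.xlabel("X axis")
plt.ylabel("Y axis")
plt.legend()
plt.show()
```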
d0c0964a5c7e507fbc1bca879afadc3bd361d17d | 60,838 | ipynb | Jupyter Notebook | data_collect/eeg_data_collect_tcp_tag_openvibe.ipynb | s151202/EEG_data_streaming | 16d5056e80c1a3df3e74ad0ebb6048c466a2e7df | [
"BSD-2-Clause"
] | null | null | null | data_collect/eeg_data_collect_tcp_tag_openvibe.ipynb | s151202/EEG_data_streaming | 16d5056e80c1a3df3e74ad0ebb6048c466a2e7df | [
"BSD-2-Clause"
] | null | null | null | data_collect/eeg_data_collect_tcp_tag_openvibe.ipynb | s151202/EEG_data_streaming | 16d5056e80c1a3df3e74ad0ebb6048c466a2e7df | [
"BSD-2-Clause"
] | null | null | null | 60.716567 | 34,332 | 0.746671 | [
[
[
"import sys\nimport pickle\nfrom scipy import signal\nfrom scipy import stats\nimport numpy as np\nfrom sklearn.model_selection import ShuffleSplit\n\nimport socket\nimport time\n\nimport math\nfrom collections import OrderedDict\n\n\nimport matplotlib.pyplot as plt\n\nsys.path.append('D:\\Diamond\\code')\nfrom csp_james_2 import *\n\nsys.path.append('D:\\Diamond\\code')\nfrom thesis_funcs_19_03 import *\n\nimport csv\nimport datetime\nfrom random import randint\nimport random\nimport matplotlib.image as mpimg\n%matplotlib auto",
"Using matplotlib backend: Qt5Agg\n"
]
],
[
[
"# Define classes and how many trials per class",
"_____no_output_____"
]
],
[
[
"C_OVR = [0,1] #MI classes, [0,1,2,3] for left hand, right hand, feet, tongue\n_classes = C_OVR*10 #*trials per MI class\nrandom.shuffle(_classes) #randomize sequence of MI classes\n\n\nfileroot = 'E:\\\\Diamond\\\\own_expo\\\\'\nfilewrite = open(fileroot + 'record.txt','w')\nfilewrite.write('')\nfilewrite.close()\n\nfile_cross = open(fileroot + 'cross_sign.txt','w')\nfile_cross.write('0')\nfile_cross.close()\n\nendgame = open(fileroot + 'endgame.txt','w')\nendgame.write('0')\nendgame.close()\n\n\nfilewrite = open(fileroot + 'record.txt','a')\n",
"_____no_output_____"
],
[
"plt.ioff()\n",
"_____no_output_____"
],
[
"end = 0",
"_____no_output_____"
],
[
"%matplotlib auto\nplt.ioff()\n\nn = 0\nreso = 0.001\nreso1 = 0.2\npltpause = 0.05\n\n######################### Connect to Openvibe aquisition server (AS) ########################################################\n# host and port of tcp tagging server\nHOST = '127.0.0.1' #local machine address\nPORT = 15361 #port to connect to AS\n\n# transform a value into an array of byte values in little-endian order.\ndef to_byte(value, length):\n for x in range(length):\n yield value%256\n value//=256\n \n# connect \ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect((HOST, PORT))\n\n#padding of zeros to keep length of tag consistant\npadding=[0]*8\n\n#############################################################################################################################\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\n\nfig.canvas.draw()\nimg = mpimg.imread('E:\\\\Diamond\\\\cues\\\\black.png')\nax.imshow(img)\nfig.canvas.draw()\nplt.pause(pltpause)\n\n\nt0 = datetime.datetime.now()\nfilewrite.write('rest,' + str(t0) + '\\n')\n\ncross = t0 + datetime.timedelta(0,randint(0,5)/10 + 6)\ncue = cross + datetime.timedelta(0,1)\nrest = cue + datetime.timedelta(0,4)\n\ncross_exed = 0\ncue_exed = 0\nrest_exed = 0\n\n\nwhile n < len(_classes):\n\n d_cross = (datetime.datetime.now() - cross).total_seconds()\n if cross_exed == 0 and (abs(d_cross) <= reso or d_cross>=reso):\n #if cross_exed == 0 and np.abs(cross - datetime.datetime.now()).total_seconds() < reso:\n \n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\fixation.png')\n ax.clear()\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause)\n filewrite.write('cross,' + str(datetime.datetime.now()) + '\\n')\n \n file_cross = open(fileroot + 'cross_sign.txt','w')\n file_cross.write('1')\n file_cross.close()\n \n print ('cross', datetime.datetime.now())\n print ('_classes', _classes[n])\n \n cross_exed = 1\n \n \"\"\"\n elif cross_exed == 0 and (datetime.datetime.now()-cross).total_seconds() < 0.2 and (datetime.datetime.now()-cross).total_seconds() > reso:\n ax.clear()\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\fixation.png')\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause)\n filewrite.write('cross,' + str(datetime.datetime.now()) + '\\n')\n print ('cross1', datetime.datetime.now())\n print ('_classes', _classes[n])\n cross_exed = 1\n \n file_cross = open(fileroot + 'cross_sign.txt','w')\n file_cross.write('1')\n file_cross.close()\n \"\"\"\n d_cue = (datetime.datetime.now() - cue).total_seconds()\n #if cue_exed == 0 and np.abs(cue - datetime.datetime.now()).total_seconds() < reso:\n if cue_exed == 0 and (abs(d_cue) <= reso or d_cue>=reso):\n if _classes[n] == 0:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\left_hand.png')\n EVENT_ID = 0x441 #LEFT HAND (1089) EVENT_IDs are used to tag eeg streams in openvibe, IDs are pre-defined at http://openvibe.inria.fr/stimulation-codes/\n elif _classes[n] == 1:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\right_hand.png')\n EVENT_ID = 0x442 #RIGHT HAND (1090)\n elif _classes[n] == 2:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\n EVENT_ID = 0x303 #FOOT (FEET) (771)\n elif _classes[n] == 3:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\tongue.jpg')\n EVENT_ID = 0x304 #TONGUE (772)\n \n \n \n ax.clear()\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause)\n \n #same time tag for record.txt and openvibe eeg streams\n time_datetime = datetime.datetime.now()\n time_unix = time.time()\n \n filewrite.write('cue,' + str(_classes[n]) + ',' + str(time_datetime) + '\\n')\n \n 
#event_id to byte format\n event_id=list(to_byte(EVENT_ID, 8))\n # timestamp can be either the posix time in ms, or 0 to let the acquisition server timestamp the tag itself.\n timestamp=list(to_byte(int(time_unix*1000), 8))\n #send tag to openvibe, tag is padding + event_id + timestamp in unix time, in uint64 format\n s.sendall(bytearray(padding+event_id+timestamp))\n \n print ('cue', datetime.datetime.now())\n print ('_classes', _classes[n])\n \n \n cue_exed = 1 \n\n \"\"\"\n elif cue_exed == 0 and (datetime.datetime.now()-cue).total_seconds() < 0.2 and (datetime.datetime.now()-cue).total_seconds() > reso:\n if _classes[n] == 0:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\left_hand.png')\n elif _classes[n] == 1:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\right_hand.png')\n elif _classes[n] == 2:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\n elif _classes[n] == 3:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\tongue.jpg')\n ax.clear()\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause) \n filewrite.write('cue,' + str(_classes[n]) + ',' + str(datetime.datetime.now()) + '\\n')\n\n print ('cue1', datetime.datetime.now())\n print ('_classes', _classes[n])\n cue_exed = 1\n \"\"\"\n \n d_rest = (datetime.datetime.now() - rest).total_seconds()\n #if rest_exed == 0 and np.abs(rest - datetime.datetime.now()).total_seconds() < reso:\n if rest_exed == 0 and (abs(d_rest) <= reso or d_rest>=reso):\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\black.png')\n ax.clear()\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause)\n filewrite.write('rest,' + str(datetime.datetime.now()) + '\\n')\n \n print ('rest', datetime.datetime.now())\n print ('_classes', _classes[n])\n rest_exed = 1\n \n \"\"\"\n elif rest_exed == 0 and (datetime.datetime.now()-rest).total_seconds() < 0.2 and (datetime.datetime.now()-rest).total_seconds() > reso:\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\black.png')\n ax.clear()\n ax.imshow(img)\n fig.canvas.draw()\n plt.pause(pltpause)\n filewrite.write('rest,' + str(datetime.datetime.now()) + '\\n')\n \n print ('rest1', datetime.datetime.now())\n print ('_classes', _classes[n])\n rest_exed = 1\n \"\"\" \n if cross_exed == 1 and cue_exed==1 and rest_exed == 1:\n cross = rest + datetime.timedelta(0,randint(0,5)/10 + 6)\n cue = cross + datetime.timedelta(0,1)\n rest = cue + datetime.timedelta(0,4)\n \n cross_exed = 0\n cue_exed = 0\n rest_exed = 0\n \n \n n = n +1\n print (n)\n \n \nfilewrite.close()\ns.close()\n\nend = 1\nif end == 1:\n endgame = open(fileroot + 'endgame.txt','w')\n endgame.write('1')\n endgame.close()",
"Using matplotlib backend: Qt5Agg\ncross 2019-10-10 12:22:07.817149\n_classes 0\ncue 2019-10-10 12:22:08.571065\n_classes 0\nrest 2019-10-10 12:22:12.574391\n_classes 0\n1\ncross 2019-10-10 12:22:19.099670\n_classes 1\ncue 2019-10-10 12:22:19.975858\n_classes 1\nrest 2019-10-10 12:22:23.968391\n_classes 1\n2\ncross 2019-10-10 12:22:30.410289\n_classes 0\ncue 2019-10-10 12:22:31.365828\n_classes 0\nrest 2019-10-10 12:22:35.372207\n_classes 0\n3\ncross 2019-10-10 12:22:41.516712\n_classes 0\ncue 2019-10-10 12:22:42.475043\n_classes 0\nrest 2019-10-10 12:22:46.474528\n_classes 0\n4\ncross 2019-10-10 12:22:52.713237\n_classes 0\ncue 2019-10-10 12:22:53.665315\n_classes 0\nrest 2019-10-10 12:22:57.715331\n_classes 0\n5\ncross 2019-10-10 12:23:03.744781\n_classes 0\ncue 2019-10-10 12:23:04.668314\n_classes 0\nrest 2019-10-10 12:23:08.676413\n_classes 0\n6\ncross 2019-10-10 12:23:14.713197\n_classes 0\ncue 2019-10-10 12:23:15.675143\n_classes 0\nrest 2019-10-10 12:23:19.664159\n_classes 0\n7\ncross 2019-10-10 12:23:26.017738\n_classes 1\ncue 2019-10-10 12:23:26.967193\n_classes 1\nrest 2019-10-10 12:23:30.966025\n_classes 1\n8\ncross 2019-10-10 12:23:37.015985\n_classes 0\ncue 2019-10-10 12:23:37.968344\n_classes 0\nrest 2019-10-10 12:23:41.970635\n_classes 0\n9\ncross 2019-10-10 12:23:48.419251\n_classes 1\ncue 2019-10-10 12:23:49.368617\n_classes 1\nrest 2019-10-10 12:23:53.390565\n_classes 1\n10\ncross 2019-10-10 12:23:59.715866\n_classes 1\ncue 2019-10-10 12:24:00.676456\n_classes 1\nrest 2019-10-10 12:24:04.673798\n_classes 1\n11\ncross 2019-10-10 12:24:11.014924\n_classes 1\ncue 2019-10-10 12:24:11.977951\n_classes 1\nrest 2019-10-10 12:24:15.991445\n_classes 1\n12\ncross 2019-10-10 12:24:22.217868\n_classes 0\ncue 2019-10-10 12:24:23.165687\n_classes 0\nrest 2019-10-10 12:24:27.174492\n_classes 0\n13\ncross 2019-10-10 12:24:33.217427\n_classes 1\ncue 2019-10-10 12:24:34.168838\n_classes 1\nrest 2019-10-10 12:24:38.174174\n_classes 1\n14\ncross 2019-10-10 12:24:44.615617\n_classes 1\ncue 2019-10-10 12:24:45.575064\n_classes 1\nrest 2019-10-10 12:24:49.575388\n_classes 1\n15\ncross 2019-10-10 12:24:55.618298\n_classes 0\ncue 2019-10-10 12:24:56.570913\n_classes 0\nrest 2019-10-10 12:25:00.590378\n_classes 0\n16\ncross 2019-10-10 12:25:06.612090\n_classes 1\ncue 2019-10-10 12:25:07.574636\n_classes 1\nrest 2019-10-10 12:25:11.568984\n_classes 1\n17\ncross 2019-10-10 12:25:18.116604\n_classes 1\ncue 2019-10-10 12:25:19.072723\n_classes 1\nrest 2019-10-10 12:25:23.072197\n_classes 1\n18\ncross 2019-10-10 12:25:29.511220\n_classes 1\ncue 2019-10-10 12:25:30.468613\n_classes 1\nrest 2019-10-10 12:25:34.468916\n_classes 1\n19\ncross 2019-10-10 12:25:40.718353\n_classes 0\ncue 2019-10-10 12:25:41.671536\n_classes 0\nrest 2019-10-10 12:25:45.673350\n_classes 0\n20\n"
],
[
"plt.ion()\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nfig.canvas.draw()\nt0 = datetime.datetime.now()\nimg = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\nax.imshow(img)\nfig.show()\n\nplotted = 0\nwhile (datetime.datetime.now()-t0).total_seconds() < 5:\n if plotted == 0 and (datetime.datetime.now()-t0).total_seconds() > 2:\n ax.clear()\n img = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\n print('shit')\n ax.imshow(img)\n plotted = 1\n fig.canvas.draw()\n\n",
"shit\n"
],
[
"#plt.ion()\n\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nfig.canvas.draw()\n\n\nimg1 = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\nimg2 = mpimg.imread('E:\\\\Diamond\\\\cues\\\\black.png')\n\n\nax.imshow(img1)\nfig.canvas.draw()\n\nplt.pause(0.1)\n\nax.clear()\nax.imshow(img2)\nfig.canvas.draw()",
"_____no_output_____"
],
[
"%matplotlib auto",
"Using matplotlib backend: Qt5Agg\n"
],
[
"%matplotlib\nfig = plt.figure()\nax = fig.add_subplot(1,1,1)\nimg = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\nax.imshow(img)\nfig.show()",
"Using matplotlib backend: Qt5Agg\n"
],
[
"img",
"_____no_output_____"
],
[
"_classes",
"_____no_output_____"
],
[
"import matplotlib.image as mpimg\nimg = mpimg.imread('E:\\\\Diamond\\\\cues\\\\fixation.png')\nplt.figure(1)\n\nplt.imshow(img)\n\nplt.clf()\nimg = mpimg.imread('E:\\\\Diamond\\\\cues\\\\feet.png')\nplt.imshow(img)\n\n",
"_____no_output_____"
],
[
"rest",
"_____no_output_____"
],
[
"timer = threading.Timer(10000, print('cue'))\ntimer.start()",
"cue\n"
],
[
"cue",
"_____no_output_____"
],
[
"t0",
"_____no_output_____"
],
[
"np.abs((datetime.datetime.now() - (datetime.datetime.now()+datetime.timedelta(0,4))).total_seconds())",
"_____no_output_____"
],
[
"x=datetime.datetime.today()\ny=x.replace(day=x.day+1, hour=1, minute=0, second=0, microsecond=0)",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
],
[
"t0 + datetime.timedelta(0,60)",
"_____no_output_____"
],
[
"t0",
"_____no_output_____"
],
[
"t0 = datetime.datetime.now()\ndt = (datetime.datetime.now() - t0)\nwhile dt.total_seconds() <= 5:\n dt = (datetime.datetime.now() - t0)\nprint (dt.total_seconds(), datetime.datetime.now())",
"5.000638 2019-05-26 14:10:15.671486\n"
],
[
"t0",
"_____no_output_____"
],
[
"t0.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c0989c6ab0da27e8d349a914269659131fe467 | 172,890 | ipynb | Jupyter Notebook | .ipynb_checkpoints/RandonForest_test-checkpoint.ipynb | edsonportosilva/TCC_phase_detection_with_ML | be202b04be54129b9c919cf17f5439105d294a1f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/RandonForest_test-checkpoint.ipynb | edsonportosilva/TCC_phase_detection_with_ML | be202b04be54129b9c919cf17f5439105d294a1f | [
"MIT"
] | null | null | null | .ipynb_checkpoints/RandonForest_test-checkpoint.ipynb | edsonportosilva/TCC_phase_detection_with_ML | be202b04be54129b9c919cf17f5439105d294a1f | [
"MIT"
] | 1 | 2021-09-08T04:40:58.000Z | 2021-09-08T04:40:58.000Z | 461.04 | 109,768 | 0.946255 | [
[
[
"from qampy import signals, impairments, equalisation, phaserec, helpers\nfrom qampy.theory import ber_vs_es_over_n0_qam as ber_theory\nfrom qampy.helpers import normalise_and_center as normcenter\nfrom qampy.core.filter import rrcos_pulseshaping as lowpassFilter\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nimport pandas as pd\nfrom sklearn.metrics import mean_squared_error, r2_score\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\nfrom sklearn.ensemble import RandomForestRegressor\n\nfrom Funcoes import *",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"plt.rcParams['font.size'] = 18\nplt.rcParams['figure.figsize'] = [16, 8]\nplt.rcParams['lines.linewidth'] = 2",
"_____no_output_____"
],
[
"M = 64 # ordem da modulação\nFb = 40e9 # taxa de símbolos\nSpS = 4 # amostras por símbolo\nFs = SpS*Fb # taxa de amostragem\nSNR = 40 # relação sinal ruído (dB)\nrolloff = 0.01 # Rolloff do filtro formatador de pulso\nsfm = qam_signal_phase_min(M,Fb,SpS,SNR)\nordem = 4\ndataset , X , y = dataset_01(sfm,ordem)",
"_____no_output_____"
],
[
"X_train = X[:50000]\nX_test = X[50000:]\n\ny_train = y[:50000]\ny_test = y[50000:]",
"_____no_output_____"
],
[
"scaler = MinMaxScaler()",
"_____no_output_____"
],
[
"X_train = scaler.fit_transform(X_train)\nX_test = scaler.transform(X_test)",
"_____no_output_____"
],
[
"y_train.shape",
"_____no_output_____"
],
[
"forest = RandomForestRegressor(200)\nforest.fit(X_train, y_train)",
"_____no_output_____"
],
[
"y_preds = forest.predict(X_test)",
"_____no_output_____"
],
[
"print('rmse = ', np.sqrt(mean_squared_error(y_test, y_preds)))\nprint('r2 = ', r2_score(y_test, y_preds))",
"rmse = 0.11838233868703428\nr2 = 0.8052077644886594\n"
],
[
"plt.figure(figsize=(16, 8))\nplt.plot(y_test[:50], '-o')\nplt.plot(y_preds[:50], '-o')\nplt.xlabel('Symbol')\nplt.ylabel('phase (rad)')\nplt.legend(['True phases', 'predicted phases'])\nplt.title('True and predicted phases comparison')\nplt.grid(True)\nplt.show()",
"_____no_output_____"
],
[
"sig_abs = scaler.inverse_transform(X_test)[:].reshape((-1))\nsize = sig_abs.shape[0]",
"_____no_output_____"
],
[
"dataset['amplitudes'].shape",
"_____no_output_____"
],
[
"y_preds.shape",
"_____no_output_____"
],
[
"sinal = dataset['amplitudes'][50000:]*np.exp(1j*y_preds)",
"_____no_output_____"
],
[
"sinal.shape",
"_____no_output_____"
],
[
"plt.magnitude_spectrum(sinal, Fs=Fs, scale='dB')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c09e0756df08f6cd8251a96c6307aab3e5cb1f | 4,552 | ipynb | Jupyter Notebook | Ch09.Pos2VelKF/vel2pos_kf.ipynb | rookiecj/kalman_filter | c40c06acd9d58f14d53510245873afdc8903a780 | [
"MIT"
] | 72 | 2019-12-03T20:35:53.000Z | 2022-03-16T11:59:02.000Z | Ch09.Pos2VelKF/vel2pos_kf.ipynb | rookiecj/kalman_filter | c40c06acd9d58f14d53510245873afdc8903a780 | [
"MIT"
] | null | null | null | Ch09.Pos2VelKF/vel2pos_kf.ipynb | rookiecj/kalman_filter | c40c06acd9d58f14d53510245873afdc8903a780 | [
"MIT"
] | 49 | 2019-12-10T08:46:37.000Z | 2022-03-16T06:15:49.000Z | 26.465116 | 103 | 0.499121 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom numpy.linalg import inv\n\nnp.random.seed(0)",
"_____no_output_____"
],
[
"def get_pos_vel(itime):\n \"\"\"Return Measured Velocity and True Position.\"\"\"\n v = np.random.normal(0, np.sqrt(10)) # v: measurement noise.\n\n vel_true = 80 # nominal velocity = 80 [m/s]. no system noise here.\n pos_true = vel_true * (itime * dt) # pos_true: true position. \n z_vel_meas = vel_true + v # z_vel_meas: measured velocity (observable) \n\n return z_vel_meas, pos_true",
"_____no_output_____"
],
[
"def kalman_filter(z_meas, x_esti, P):\n \"\"\"Kalman Filter Algorithm.\"\"\"\n # (1) Prediction.\n x_pred = A @ x_esti\n P_pred = A @ P @ A.T + Q\n\n # (2) Kalman Gain.\n K = P_pred @ H.T @ inv(H @ P_pred @ H.T + R)\n\n # (3) Estimation.\n x_esti = x_pred + K @ (z_meas - H @ x_pred)\n\n # (4) Error Covariance.\n P = P_pred - K @ H @ P_pred\n\n return x_esti, P",
"_____no_output_____"
],
[
"# Input parameters.\ntime_end = 4\ndt = 0.1",
"_____no_output_____"
],
[
"# Initialization for system model.\n# Matrix: A, H, Q, R, P_0\n# Vector: x_0\nA = np.array([[1, dt],\n [0, 1]])\nH = np.array([[0, 1]])\nQ = np.array([[1, 0],\n [0, 3]])\nR = np.array([[10]])\n\n# Initialization for estimation.\nx_0 = np.array([0, 20]) # position and velocity\nP_0 = 5 * np.eye(2)",
"_____no_output_____"
],
[
"time = np.arange(0, time_end, dt)\nn_samples = len(time)\nvel_meas_save = np.zeros(n_samples)\npos_true_save = np.zeros(n_samples)\npos_esti_save = np.zeros(n_samples)\nvel_esti_save = np.zeros(n_samples)",
"_____no_output_____"
],
[
"x_esti, P = None, None\nfor i in range(n_samples):\n z_meas, pos_true = get_pos_vel(i)\n if i == 0:\n x_esti, P = x_0, P_0\n else:\n x_esti, P = kalman_filter(z_meas, x_esti, P)\n\n vel_meas_save[i] = z_meas\n pos_true_save[i] = pos_true\n pos_esti_save[i] = x_esti[0]\n vel_esti_save[i] = x_esti[1]",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10,5))\n\nplt.subplot(1, 2, 1)\nplt.plot(time, vel_esti_save, 'bo-', label='Estimation (KF)')\nplt.plot(time, vel_meas_save, 'r*--', label='Measurements', markersize=10)\nplt.legend(loc='lower right')\nplt.title('Velocity: Meas. v.s. Esti. (KF)')\nplt.xlabel('Time [sec]')\nplt.ylabel('Velocity [m/s]')\n\nplt.subplot(1, 2, 2)\nplt.plot(time, pos_esti_save, 'bo-', label='Estimation (KF)')\nplt.plot(time, pos_true_save, 'g*--', label='True', markersize=10)\nplt.legend(loc='upper left')\nplt.title('Position: True v.s. Esti. (KF)')\nplt.xlabel('Time [sec]')\nplt.ylabel('Position [m]')\nplt.savefig('png/vel2pos_kf.png')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c09e13c56e8ff574e70ca28aabd9fb1d429142 | 330,810 | ipynb | Jupyter Notebook | notebooks/tensorflow-yolo/util_visualization.ipynb | pcrete/SKIL_demo | c0524806f92cecd47bf8c0405ffaf68d4918c52e | [
"MIT"
] | 2 | 2019-03-19T15:31:27.000Z | 2021-01-04T09:50:25.000Z | notebooks/tensorflow-yolo/util_visualization.ipynb | pcrete/skil-demo | c0524806f92cecd47bf8c0405ffaf68d4918c52e | [
"MIT"
] | null | null | null | notebooks/tensorflow-yolo/util_visualization.ipynb | pcrete/skil-demo | c0524806f92cecd47bf8c0405ffaf68d4918c52e | [
"MIT"
] | 1 | 2019-04-22T03:09:31.000Z | 2019-04-22T03:09:31.000Z | 1,323.24 | 322,764 | 0.95284 | [
[
[
"# Bounding Box Visualizer",
"_____no_output_____"
]
],
[
[
"try:\n import cv2\nexcept ImportError:\n cv2 = None\n\nCOLORS = [\n \"#6793be\", \"#990000\", \"#00ff00\", \"#ffbcc9\", \"#ffb9c7\", \"#fdc6d1\",\n \"#fdc9d3\", \"#6793be\", \"#73a4d4\", \"#9abde0\", \"#9abde0\", \"#8fff8f\", \"#ffcfd8\", \"#808080\", \"#808080\",\n \"#ffba00\", \"#6699ff\", \"#009933\", \"#1c1c1c\", \"#08375f\", \"#116ebf\", \"#e61d35\", \"#106bff\", \"#8f8fff\",\n \"#8fff8f\", \"#dbdbff\", \"#dbffdb\", \"#dbffff\", \"#ffdbdb\", \"#ffc2c2\", \"#ffa8a8\", \"#ff8f8f\", \"#e85e68\",\n \"#123456\", \"#5cd38c\", \"#1d1f5f\", \"#4e4b04\", \"#495a5b\", \"#489d73\", \"#9d4872\", \"#d49ea6\", \"#ff0080\",\n \"#6793be\", \"#990000\", \"#fececf\", \"#ffbcc9\", \"#ffb9c7\", \"#fdc6d1\",\n \"#fdc9d3\", \"#6793be\", \"#73a4d4\", \"#9abde0\", \"#9abde0\", \"#8fff8f\", \"#ffcfd8\", \"#808080\", \"#808080\",\n \"#ffba00\", \"#6699ff\", \"#009933\", \"#1c1c1c\", \"#08375f\", \"#116ebf\", \"#e61d35\", \"#106bff\", \"#8f8fff\",\n \"#8fff8f\", \"#dbdbff\", \"#dbffdb\", \"#dbffff\", \"#ffdbdb\", \"#ffc2c2\", \"#ffa8a8\", \"#ff8f8f\", \"#e85e68\",\n \"#123456\", \"#5cd38c\", \"#1d1f5f\", \"#4e4b04\", \"#495a5b\", \"#489d73\", \"#9d4872\", \"#d49ea6\", \"#ff0080\" \n]\n\ndef hex_to_rgb(color_hex):\n color_hex = color_hex.lstrip('#')\n color_rgb = tuple(int(color_hex[i:i+2], 16) for i in (0, 2, 4))\n return color_rgb\n\ndef annotate_image(image, detection):\n \"\"\" Annotate images with object detection results\n # Arguments:\n image: numpy array representing the image used for detection\n detection: `DetectionResult` result from SKIL on the same image\n # Return value:\n annotated image as numpy array\n \"\"\"\n if cv2 is None:\n raise Exception(\"OpenCV is not installed.\")\n \n objects = detection.get('objects')\n if objects:\n for detect in objects:\n confs = detect.get('confidences')\n max_conf = max(confs)\n max_index = confs.index(max_conf)\n classes = detect.get('predictedClasses')\n max_class = classes[max_index]\n class_number = detect.get('predictedClassNumbers')[max_index]\n \n h = detect.get('height')\n w = detect.get('width')\n center_x = detect.get('centerX')\n center_y = detect.get('centerY') \n \n color_hex = COLORS[class_number]\n b,g,r = hex_to_rgb(color_hex)\n color_rgb = (r,g,b)\n \n # bounding box\n xmin, ymin = int(center_x - w/2), int(center_y - h/2)\n xmax, ymax = int(center_x + w/2), int(center_y + h/2)\n upper = (xmin, ymin)\n lower = (xmax, ymax)\n \n cv2.rectangle(image, lower, upper, color_rgb, thickness=3)\n \n # bounding box label: class_name: confidence \n text = max_class + \": \" + str(int(100*max(confs)))+\"%\"\n \n font = cv2.FONT_HERSHEY_SIMPLEX\n fontScale = 0.7\n \n # get text size\n size = cv2.getTextSize(text, font, fontScale+0.1, thickness=2)\n text_width = size[0][0]\n text_height = size[0][1]\n \n # text-box background\n cv2.rectangle(image, \n (xmin-2, ymin),\n (xmin+text_width, ymin-35), color_rgb, thickness=-1)\n \n cv2.putText(image, text, (xmin, ymin-10), font, fontScale, color=0, thickness=2)\n \n return image",
"_____no_output_____"
],
[
"import json\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nwith open('detections/img-5.json') as FILE:\n detections = json.load(FILE)\n\nprint(json.dumps(detections['objects'][0], indent=4))\n\n\nimage = annotate_image(cv2.imread(\"images/img-5.jpg\"), detections)\n\ncv2.imwrite('images/annotated.jpg', image)\n\nplt.figure(figsize=(8,8))\nplt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))\nplt.show()\n\nimage.shape\n\nfor k, detection in enumerate(detections['objects']):\n predicted = detection['predictedClasses'][0]\n confidence = detection['confidences'][0]\n \n print('{}: [{}, {:.5}]'.format(k+1, predicted, confidence))",
"{\n \"predictedClassNumbers\": [\n 7,\n 2,\n 5,\n 0,\n 3,\n 6,\n 62,\n 12,\n 39,\n 67\n ],\n \"predictedClasses\": [\n \"truck\",\n \"car\",\n \"bus\",\n \"person\",\n \"motorbike\",\n \"train\",\n \"tvmonitor\",\n \"parking meter\",\n \"bottle\",\n \"cell phone\"\n ],\n \"confidences\": [\n 0.6679658,\n 0.32458264,\n 0.0057245484,\n 0.00025993204,\n 0.00017069715,\n 0.00013948459,\n 0.00010358589,\n 0.0001003403,\n 5.5740995e-05,\n 5.4393808e-05\n ],\n \"height\": 216.0,\n \"centerY\": 239.0,\n \"centerX\": 41.0,\n \"width\": 90.0\n}\n"
],
[
"len(COLORS)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c0a5c7f4312ac2d62d6fb7988a537883aacffe | 19,884 | ipynb | Jupyter Notebook | 08-lists.ipynb | trailmarkerlib/dataExplore | 529edd4f9eff75753ce89006f8a5b840456b9211 | [
"BSD-3-Clause"
] | null | null | null | 08-lists.ipynb | trailmarkerlib/dataExplore | 529edd4f9eff75753ce89006f8a5b840456b9211 | [
"BSD-3-Clause"
] | null | null | null | 08-lists.ipynb | trailmarkerlib/dataExplore | 529edd4f9eff75753ce89006f8a5b840456b9211 | [
"BSD-3-Clause"
] | null | null | null | 16.808115 | 133 | 0.410984 | [
[
[
"# Chapter 8 Lists",
"_____no_output_____"
],
[
"### 8.1 A list is a sequence",
"_____no_output_____"
]
],
[
[
"# list of integers\n[10, 20, 30, 40]\n\n# list of strings\n['frog','toad','salamander','newt']\n\n# mixed list\n[10,'twenty',30.0,[40, 45]]",
"_____no_output_____"
],
[
"cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']\nnumbers = [27, 42]\nempty = []\nprint(cheeses, numbers, empty)",
"['Cheddar', 'Mozzarella', 'Gouda', 'Swiss'] [27, 42] []\n"
]
],
[
[
"### 8.2 Lists are mutable",
"_____no_output_____"
]
],
[
[
"numbers = [27, 62]\nnumbers[1] = 42\nprint(numbers)",
"[27, 42]\n"
],
[
"cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']\n'Swiss' in cheeses",
"_____no_output_____"
],
[
"'Brie' in cheeses",
"_____no_output_____"
]
],
[
[
"### 8.3 Traversing a list",
"_____no_output_____"
]
],
[
[
"cheeses = ['Cheddar','Mozzarella','Gouda','Swiss']\nfor cheese in cheeses:\n print(cheese)",
"Cheddar\nMozzarella\nGouda\nSwiss\n"
],
[
"numbers = [10, 20, 30, 40]\nfor i in range(len(numbers)):\n numbers[i] = numbers[i] + 2\nprint(numbers)",
"[12, 22, 32, 42]\n"
],
[
"empty = []\nfor x in empty:\n print('This never happens.')",
"_____no_output_____"
],
[
"len(['spam', 1, ['sun','moon','stars'], [1,2,3]])",
"_____no_output_____"
]
],
[
[
"### 8.4 List operations",
"_____no_output_____"
]
],
[
[
"a = [1,2,3]\nb = [4,5,6]\nc = a + b\nprint(c)",
"[1, 2, 3, 4, 5, 6]\n"
],
[
"[0]*4",
"_____no_output_____"
],
[
"[1,2,3]*3",
"_____no_output_____"
]
],
[
[
"### 8.5 List slices",
"_____no_output_____"
]
],
[
[
"t = ['a','b','c','d','e','f']\nt[1:3]",
"_____no_output_____"
],
[
"t[:4]",
"_____no_output_____"
],
[
"t[3:]",
"_____no_output_____"
],
[
"t[:]",
"_____no_output_____"
],
[
"t[1:3] = ['x','y']\nprint(t)",
"['a', 'x', 'y', 'd', 'e', 'f']\n"
]
],
[
[
"### 8.6 List methods",
"_____no_output_____"
]
],
[
[
"t = ['a','b','c']\nt.append('d')\nprint(t)",
"['a', 'b', 'c', 'd']\n"
],
[
"t1 = ['a','b','c']\nt2 = ['d','e']\nt1.extend(t2)\nprint(t1)",
"['a', 'b', 'c', 'd', 'e']\n"
],
[
"t = ['d','c','e','b','a']\nt.sort()\nprint(t)",
"['a', 'b', 'c', 'd', 'e']\n"
]
],
[
[
"### 8.7 Deleting elements",
"_____no_output_____"
]
],
[
[
"t = ['a','b','c']\nx = t.pop(1)\nprint(t)",
"['a', 'c']\n"
],
[
"print(x)",
"b\n"
],
[
"t = ['a','b','c']\ndel t[1]\nprint(t)",
"['a', 'c']\n"
],
[
"t = ['a','b','c']\nx = t.remove('b')\nprint(t)",
"['a', 'c']\n"
],
[
"t = ['a','b','c','b']\nx = t.remove('b') #removes first instance\nprint(t)",
"['a', 'c', 'b']\n"
],
[
"t = ['a', 'b', 'c', 'd', 'e','f']\ndel t[1:5]\nprint(t)",
"['a', 'f']\n"
]
],
[
[
"### 8.8 Lists and functions",
"_____no_output_____"
]
],
[
[
"nums = [3, 41, 12, 9, 74, 15]\nprint(len(nums))",
"6\n"
],
[
"print(max(nums))",
"74\n"
],
[
"print(min(nums))",
"3\n"
],
[
"print(sum(nums))",
"154\n"
],
[
"print(sum(nums)/len(nums))",
"25.666666666666668\n"
],
[
"numlist = list()\nwhile(True):\n inp = input('Enter a number: ')\n if inp == 'done': break\n try: \n value = float(inp)\n numlist.append(value)\n except:\n print('That was not a number. Continue...')\n continue\naverage = sum(numlist)/len(numlist)\nprint('Average:', average)",
"Enter a number: 4\nEnter a number: 565\nEnter a number: fg\n"
]
],
[
[
"### 8.9 Lists and strings",
"_____no_output_____"
]
],
[
[
"s = 'spam'\nt = list(s)\nprint(t)",
"['s', 'p', 'a', 'm']\n"
],
[
"s = 'That there’s some good in this world, Mr. Frodo… and it’s worth fighting for.'\nt = s.split()\nprint(t)",
"['That', 'there’s', 'some', 'good', 'in', 'this', 'world,', 'Mr.', 'Frodo…', 'and', 'it’s', 'worth', 'fighting', 'for.']\n"
],
[
"print(t[3])",
"good\n"
],
[
"s = 'spam-spam-spam'\ndelimiter = '-'\ns.split(delimiter)",
"_____no_output_____"
],
[
"t = ['That', 'there’s', 'some', 'good', 'in', 'this', 'world,', 'Mr.', 'Frodo…', 'and', 'it’s', 'worth', 'fighting', 'for.']\ndelimiter = ' '\ndelimiter.join(t)",
"_____no_output_____"
]
],
[
[
"### 8.10 Parsing lines",
"_____no_output_____"
]
],
[
[
"fhand = open('mbox-short.txt')\nfor line in fhand:\n line = line.rstrip()\n if not line.startswith('From '): continue\n words = line.split()\n print(words[2])",
"Sat\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nFri\nThu\nThu\nThu\nThu\nThu\nThu\n"
]
],
[
[
"### 8.11 Objects and values",
"_____no_output_____"
]
],
[
[
"# same object, two variables pointing to the same object\na = 'banana'\nb = 'banana'\na is b",
"_____no_output_____"
],
[
"# same value because a and b point to the same object\na == b",
"_____no_output_____"
],
[
"# two separate objects\na = [1,2,3]\nb = [1,2,3]\na is b",
"_____no_output_____"
],
[
"# same value\na == b",
"_____no_output_____"
]
],
[
[
"### 8.12 Aliasing",
"_____no_output_____"
]
],
[
[
"# b points to the same object as a\na = [1,2,3]\nb = a\nb is a",
"_____no_output_____"
],
[
"b[0] = 17\nprint(a)",
"[17, 2, 3]\n"
]
],
[
[
"### 8.13 List arguments",
"_____no_output_____"
]
],
[
[
"def delete_head(t):\n del t[0]",
"_____no_output_____"
],
[
"letters = ['a','b','c']\ndelete_head(letters)\nprint(letters)",
"['b', 'c']\n"
],
[
"t1 = [1,2]\nt2 = t1.append(3)\nprint(t1)",
"[1, 2, 3]\n"
],
[
"print(t2)",
"None\n"
],
[
"t1 = [1,2]\nt3 = t1 + [3]\nprint(t3)",
"[1, 2, 3]\n"
],
[
"# changes t within the scope of the function but not \n# the value of the variable passed to it\ndef bad_delete_head(t):\n t = t[1:]",
"_____no_output_____"
],
[
"def tail(t):\n return t[1:]",
"_____no_output_____"
],
[
"letters = ['a','b','c']\nrest = tail(letters)\nprint(rest)",
"['b', 'c']\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c0af7261e262d127d8e8896f37db133752c55b | 6,085 | ipynb | Jupyter Notebook | data_visualization_in_spreadsheets/2_efficient_column_charts.ipynb | vhsenna/datacamp-courses | dad9982bf7e90061efcbecc3cce97b7a5d14dd80 | [
"MIT"
] | null | null | null | data_visualization_in_spreadsheets/2_efficient_column_charts.ipynb | vhsenna/datacamp-courses | dad9982bf7e90061efcbecc3cce97b7a5d14dd80 | [
"MIT"
] | 1 | 2022-02-19T17:18:22.000Z | 2022-02-19T21:51:45.000Z | data_visualization_in_spreadsheets/2_efficient_column_charts.ipynb | vhsenna/datacamp-courses | dad9982bf7e90061efcbecc3cce97b7a5d14dd80 | [
"MIT"
] | null | null | null | 33.251366 | 224 | 0.613476 | [
[
[
"## Creating a column chart for your dashboard\n\nIn this chapter, you will start to put together your own dashboard.\n\nYour first step is to create a basic column chart showing fatalities, injured, and uninjured statistics for the states of Australia over the last 100 years.\n\nInstructions\n\n1. In `A1` of `Sheet1`, use a formula that refers to the heading in your `Shark Attacks` dataset. This can take the form of `='Sheet Name'!A1`.\n2. In your `Shark Attacks` dataset, select the `State` column and the `Fatal`, `Injured`, and `Uninjured` statistics, and then create a chart.\n3. Copy and paste this chart to `Sheet 1` and change the chart to a column chart.",
"_____no_output_____"
],
[
"## Format chart, axis titles and series\n\nYour next task is to apply some basic formatting to the same chart to jazz it up a bit and make it a bit more pleasing to the reader's eye.\n\nInstructions\n\n1. Alter the title of your chart so that it now reads \"Fatal, Injured, and Uninjured Statistics\".\n2. The color of the title text is now a light grey. Let's change the color to black and make the font **bold**.\n 1. Double-clicking the chart title will allow you to adapt the text and the formatting options in the Chart Editor to the right.\n3. While you're at it, change the series colors accordingly: `Fatal`: red, `Injured`: blue, and `Uninjured`: green.\n 1. You can double-click each series to again open up the Chart Editor, and then select a new color for the series.",
"_____no_output_____"
],
[
"## Removing a series\n\nTaking things a little further, in the next task you will manipulate the look of your chart a bit more and remove the Uninjured statistical data.\n\nInstructions\n\n1. Remove the `Uninjured` series from your chart.",
"_____no_output_____"
],
[
"## Changing the plotted range\n\nIt's just as easy to change a range as it is to delete it. For this task, have a go at changing the range of your chart so it now only showcases the top 3 states' fatal statistics.\n\nInstructions\n\n1. Remove the `Injured` series from your chart so you are **only** plotting the `Fatal` series.\n2. Change the data range so that you are only plotting the first **three states** with the highest number of fatalities.\n 1. To do this, you will need to use the chart editor to adjust the existing data ranges `'Shark Attacks'!A1:A9` and `'Shark Attacks'!C1:C9` so that the chart only displays fatalities from `NSW`, `QLD`, and `WA`.\n3. Finally, change the chart title to 'Top 3 States Number of Fatalities'.",
"_____no_output_____"
],
[
"## Using named ranges\n\nIn this task you are going find an existing range and insert a blank row within the range.\n\nInstructions\n\n1. Select Data then Named ranges and click on the `SharkStats` Named range to see the highlighted range.\n2. Insert a blank row after row `2`.",
"_____no_output_____"
],
[
"## Summing using a named range\n\nIn addition to being a handy way of keeping track of a range of cells, named ranges can also be used in formulas.\n\nFor example, using the formula `=AVERAGE(Total)` would return the average of the totals contained within the `Total` named range.\n\nIn this task you will remove a blank row and use the named range `Total` in a formula.\n\nInstructions\n\n1. Remove the blank row you inserted in the last exercise.\n2. In `B10`, use the `SUM()` function to aggregate the `Total` named range.",
"_____no_output_____"
],
[
"## Averaging using a named range\n\nIn this task you will use a named range within a formula to find an average.\n\nInstructions\n\n1. In `C11` use the `Fatalities` named range instead of cell references to average the number of fatalities.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c0be90bcfb0f4e727d88e5b71d07f7ec167f15 | 8,489 | ipynb | Jupyter Notebook | mk003-fibonacci_rabbits.ipynb | karakose77/intro-to-cs-and-programming-in-python-exercises | 59a30f86317df6f921379d0fcea779be806c257e | [
"MIT"
] | null | null | null | mk003-fibonacci_rabbits.ipynb | karakose77/intro-to-cs-and-programming-in-python-exercises | 59a30f86317df6f921379d0fcea779be806c257e | [
"MIT"
] | null | null | null | mk003-fibonacci_rabbits.ipynb | karakose77/intro-to-cs-and-programming-in-python-exercises | 59a30f86317df6f921379d0fcea779be806c257e | [
"MIT"
] | null | null | null | 33.42126 | 145 | 0.564024 | [
[
[
"## Fibonacci Rabbits\nFibonacci considers the growth of an idealized (biologically unrealistic) rabbit population, assuming that:\n\n1. A single newly born pair of rabbits (one male, one female) are put in a field;\n2. Rabbits are able to mate at the age of one month so that at the end of its second month a female can produce another pair of rabbits;\n3. Rabbits never die and a mating pair always produces one new pair (one male, one female) every month from the second month on.\n\nThe puzzle that Fibonacci posed was: how many pairs will there be in a given time period?",
"_____no_output_____"
]
],
[
[
"def populate_iterative(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):\n \"\"\"\n This function calculates the population of infant and mature rabbit population of\n Fibonacci rabbits after a given amount of time(remaining_months) by utilizing a\n while loop. The function assumes there is an infant pair at month=0.\n \"\"\" \n while remaining_months > 0: \n month_count += 1\n remaining_months -= 1\n infant_num, mature_num = mature_num, infant_num + mature_num \n print(f\"Month {month_count}: {infant_num:2} infant, {mature_num:2} mature\") \n print(f\"Total population: {(infant_num + mature_num)}\") \n return infant_num + mature_num \n\ndef populate_iterative_timeit(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):\n \"\"\"\n This function calculates the population of infant and mature rabbit population of\n Fibonacci rabbits after a given amount of time(remaining_months) by utilizing a\n while loop. The function assumes there is an infant pair at month=0.\n \"\"\" \n while remaining_months > 0: \n month_count += 1\n remaining_months -= 1\n infant_num, mature_num = mature_num, infant_num + mature_num \n return infant_num + mature_num \n \n \ndef populate_recursive(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):\n \"\"\"\n This function calculates the population of infant and mature rabbit population of\n Fibonacci rabbits after a given amount of time(remaining_months) recursively. \n The function assumes there is an infant pair at month=0.\n \"\"\" \n if remaining_months > 0: \n month_count += 1\n infant_num, mature_num = mature_num, infant_num + mature_num\n print(f\"Month {month_count}: {infant_num:2} infant, {mature_num:2} mature\")\n remaining_months -= 1\n return populate_recursive(remaining_months, infant_num, mature_num, month_count) \n else:\n print(f\"Total population: {(infant_num + mature_num)}\")\n return infant_num + mature_num\n \ndef populate_recursive_timeit(remaining_months, infant_num = 2, mature_num = 0, month_count = 0):\n \"\"\"\n This function calculates the population of infant and mature rabbit population of\n Fibonacci rabbits after a given amount of time(remaining_months) recursively. \n The function assumes there is an infant pair at month=0.\n \"\"\" \n if remaining_months > 0: \n month_count += 1\n infant_num, mature_num = mature_num, infant_num + mature_num\n remaining_months -= 1\n return populate_recursive_timeit(remaining_months, infant_num, mature_num, month_count) \n else:\n return infant_num + mature_num",
"_____no_output_____"
],
[
"populate_iterative(24)",
"Month 1: 0 infant, 2 mature\nMonth 2: 2 infant, 2 mature\nMonth 3: 2 infant, 4 mature\nMonth 4: 4 infant, 6 mature\nMonth 5: 6 infant, 10 mature\nMonth 6: 10 infant, 16 mature\nMonth 7: 16 infant, 26 mature\nMonth 8: 26 infant, 42 mature\nMonth 9: 42 infant, 68 mature\nMonth 10: 68 infant, 110 mature\nMonth 11: 110 infant, 178 mature\nMonth 12: 178 infant, 288 mature\nMonth 13: 288 infant, 466 mature\nMonth 14: 466 infant, 754 mature\nMonth 15: 754 infant, 1220 mature\nMonth 16: 1220 infant, 1974 mature\nMonth 17: 1974 infant, 3194 mature\nMonth 18: 3194 infant, 5168 mature\nMonth 19: 5168 infant, 8362 mature\nMonth 20: 8362 infant, 13530 mature\nMonth 21: 13530 infant, 21892 mature\nMonth 22: 21892 infant, 35422 mature\nMonth 23: 35422 infant, 57314 mature\nMonth 24: 57314 infant, 92736 mature\nTotal population: 150050\n"
],
[
"populate_recursive(24)",
"Month 1: 0 infant, 2 mature\nMonth 2: 2 infant, 2 mature\nMonth 3: 2 infant, 4 mature\nMonth 4: 4 infant, 6 mature\nMonth 5: 6 infant, 10 mature\nMonth 6: 10 infant, 16 mature\nMonth 7: 16 infant, 26 mature\nMonth 8: 26 infant, 42 mature\nMonth 9: 42 infant, 68 mature\nMonth 10: 68 infant, 110 mature\nMonth 11: 110 infant, 178 mature\nMonth 12: 178 infant, 288 mature\nMonth 13: 288 infant, 466 mature\nMonth 14: 466 infant, 754 mature\nMonth 15: 754 infant, 1220 mature\nMonth 16: 1220 infant, 1974 mature\nMonth 17: 1974 infant, 3194 mature\nMonth 18: 3194 infant, 5168 mature\nMonth 19: 5168 infant, 8362 mature\nMonth 20: 8362 infant, 13530 mature\nMonth 21: 13530 infant, 21892 mature\nMonth 22: 21892 infant, 35422 mature\nMonth 23: 35422 infant, 57314 mature\nMonth 24: 57314 infant, 92736 mature\nTotal population: 150050\n"
],
[
"%timeit populate_iterative_timeit(24)",
"4.32 µs ± 627 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n"
],
[
"%timeit populate_recursive_timeit(24)",
"7.86 µs ± 520 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)\n"
]
],
[
[
"Iteration algorithm is faster than recursive one.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c0cba7cb2d168f3601830b67314dce357728bc | 17,986 | ipynb | Jupyter Notebook | 05_explore/spark/98_Analyze_Data_Quality_ProcessingJob_Spark_BYOC.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 2,327 | 2020-03-01T09:47:34.000Z | 2021-11-25T12:38:42.000Z | 05_explore/spark/98_Analyze_Data_Quality_ProcessingJob_Spark_BYOC.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 209 | 2020-03-01T17:14:12.000Z | 2021-11-08T20:35:42.000Z | 05_explore/spark/98_Analyze_Data_Quality_ProcessingJob_Spark_BYOC.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 686 | 2020-03-03T17:24:51.000Z | 2021-11-25T23:39:12.000Z | 28.962963 | 289 | 0.574002 | [
[
[
"\n# Analyze Data Quality with SageMaker Processing Jobs and Spark\n\nTypically a machine learning (ML) process consists of few steps. First, gathering data with various ETL jobs, then pre-processing the data, featurizing the dataset by incorporating standard techniques or prior knowledge, and finally training an ML model using an algorithm.\n\nOften, distributed data processing frameworks such as Spark are used to process and analyze data sets in order to detect data quality issues and prepare them for model training. \n\nIn this notebook we'll use Amazon SageMaker Processing with a library called [**Deequ**](https://github.com/awslabs/deequ), and leverage the power of Spark with a managed SageMaker Processing Job to run our data processing workloads.\n\nHere are some great resources on Deequ: \n* Blog Post: https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/\n* Research Paper: https://assets.amazon.science/4a/75/57047bd343fabc46ec14b34cdb3b/towards-automated-data-quality-management-for-machine-learning.pdf\n\n",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# Amazon Customer Reviews Dataset\n\nhttps://s3.amazonaws.com/amazon-reviews-pds/readme.html\n\n### Dataset Columns:\n\n- `marketplace`: 2-letter country code (in this case all \"US\").\n- `customer_id`: Random identifier that can be used to aggregate reviews written by a single author.\n- `review_id`: A unique ID for the review.\n- `product_id`: The Amazon Standard Identification Number (ASIN). `http://www.amazon.com/dp/<ASIN>` links to the product's detail page.\n- `product_parent`: The parent of that ASIN. Multiple ASINs (color or format variations of the same product) can roll up into a single parent.\n- `product_title`: Title description of the product.\n- `product_category`: Broad product category that can be used to group reviews (in this case digital videos).\n- `star_rating`: The review's rating (1 to 5 stars).\n- `helpful_votes`: Number of helpful votes for the review.\n- `total_votes`: Number of total votes the review received.\n- `vine`: Was the review written as part of the [Vine](https://www.amazon.com/gp/vine/help) program?\n- `verified_purchase`: Was the review from a verified purchase?\n- `review_headline`: The title of the review itself.\n- `review_body`: The text of the review.\n- `review_date`: The date the review was written.",
"_____no_output_____"
]
],
[
[
"ingest_create_athena_table_tsv = False",
"_____no_output_____"
],
[
"%store -r ingest_create_athena_table_tsv",
"_____no_output_____"
],
[
"if not ingest_create_athena_table_tsv:\n print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')\n print('[ERROR] YOU HAVE TO RUN THE NOTEBOOKS IN THE INGEST FOLDER FIRST.')\n print('+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++')\nelse:\n print('[OK]')",
"_____no_output_____"
],
[
"import sagemaker\n\nsagemaker_session = sagemaker.Session()\nrole = sagemaker.get_execution_role()\nbucket = sagemaker_session.default_bucket()",
"_____no_output_____"
]
],
[
[
"# Pull the Spark-Deequ Docker Image",
"_____no_output_____"
]
],
[
[
"public_image_uri='docker.io/datascienceonaws/spark-deequ:1.0.0'",
"_____no_output_____"
],
[
"!docker pull $public_image_uri",
"_____no_output_____"
]
],
[
[
"# Push the Image to a Private Docker Repo",
"_____no_output_____"
]
],
[
[
"private_docker_repo = 'spark-deequ'\nprivate_docker_tag = '1.0.0'",
"_____no_output_____"
],
[
"import boto3\naccount_id = boto3.client('sts').get_caller_identity().get('Account')\nregion = boto3.session.Session().region_name\n\nprivate_image_uri = '{}.dkr.ecr.{}.amazonaws.com/{}:{}'.format(account_id, region, private_docker_repo, private_docker_tag)\nprint(private_image_uri)",
"_____no_output_____"
],
[
"!docker tag $public_image_uri $private_image_uri",
"_____no_output_____"
],
[
"!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)",
"_____no_output_____"
]
],
[
[
"# Ignore `spark-deequ does not exist` error below",
"_____no_output_____"
]
],
[
[
"!aws ecr describe-repositories --repository-names $private_docker_repo || aws ecr create-repository --repository-name $private_docker_repo",
"_____no_output_____"
]
],
[
[
"# Ignore ^^ `spark-deequ does not exist` ^^ error above",
"_____no_output_____"
]
],
[
[
"!docker push $private_image_uri",
"_____no_output_____"
]
],
[
[
"# Run the Analysis Job using a SageMaker Processing Job\n\nNext, use the Amazon SageMaker Python SDK to submit a processing job. Use the Spark container that was just built with our Spark script.",
"_____no_output_____"
],
[
"# Review the Spark preprocessing script.",
"_____no_output_____"
]
],
[
[
"!pygmentize preprocess-deequ.py",
"_____no_output_____"
],
[
"!pygmentize preprocess-deequ.scala",
"_____no_output_____"
],
[
"from sagemaker.processing import ScriptProcessor\n\nprocessor = ScriptProcessor(base_job_name='spark-amazon-reviews-analyzer',\n image_uri=private_image_uri,\n command=['/opt/program/submit'],\n role=role,\n instance_count=2, # instance_count needs to be > 1 or you will see the following error: \"INFO yarn.Client: Application report for application_ (state: ACCEPTED)\"\n instance_type='ml.r5.2xlarge',\n env={\n 'mode': 'jar',\n 'main_class': 'Main'\n })",
"_____no_output_____"
],
[
"s3_input_data = 's3://{}/amazon-reviews-pds/tsv/'.format(bucket)\nprint(s3_input_data)",
"_____no_output_____"
],
[
"!aws s3 ls $s3_input_data",
"_____no_output_____"
]
],
[
[
"## Setup Output Data",
"_____no_output_____"
]
],
[
[
"from time import gmtime, strftime\ntimestamp_prefix = strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\noutput_prefix = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix)\nprocessing_job_name = 'amazon-reviews-spark-analyzer-{}'.format(timestamp_prefix)\n\nprint('Processing job name: {}'.format(processing_job_name))",
"_____no_output_____"
],
[
"s3_output_analyze_data = 's3://{}/{}/output'.format(bucket, output_prefix)\n\nprint(s3_output_analyze_data)",
"_____no_output_____"
]
],
[
[
"## Start the Spark Processing Job\n\n_Notes on Invoking from Lambda:_\n* However, if we use the boto3 SDK (ie. with a Lambda), we need to copy the `preprocess.py` file to S3 and specify the everything include --py-files, etc.\n* We would need to do the following before invoking the Lambda:\n !aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/code/preprocess.py\n !aws s3 cp preprocess.py s3://<location>/sagemaker/spark-preprocess-reviews-demo/py_files/preprocess.py\n* Then reference the s3://<location> above in the --py-files, etc.\n* See Lambda example code in this same project for more details.\n\n_Notes on not using ProcessingInput and Output:_\n* Since Spark natively reads/writes from/to S3 using s3a://, we can avoid the copy required by ProcessingInput and ProcessingOutput (FullyReplicated or ShardedByS3Key) and just specify the S3 input and output buckets/prefixes._\"\n* See https://github.com/awslabs/amazon-sagemaker-examples/issues/994 for issues related to using /opt/ml/processing/input/ and output/\n* If we use ProcessingInput, the data will be copied to each node (which we don't want in this case since Spark already handles this)",
"_____no_output_____"
]
],
[
[
"from sagemaker.processing import ProcessingOutput\n\nprocessor.run(code='preprocess-deequ.py',\n arguments=['s3_input_data', s3_input_data,\n 's3_output_analyze_data', s3_output_analyze_data,\n ],\n # See https://github.com/aws/sagemaker-python-sdk/issues/1341 \n # for why we need to specify a null-output\n outputs=[\n ProcessingOutput(s3_upload_mode='EndOfJob',\n output_name='null-output',\n source='/opt/ml/processing/output')\n ],\n logs=True,\n wait=False\n)",
"_____no_output_____"
],
[
"from IPython.core.display import display, HTML\n\nprocessing_job_name = processor.jobs[-1].describe()['ProcessingJobName']\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}\">Processing Job</a></b>'.format(region, processing_job_name)))\n",
"_____no_output_____"
],
[
"from IPython.core.display import display, HTML\n\nprocessing_job_name = processor.jobs[-1].describe()['ProcessingJobName']\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix\">CloudWatch Logs</a> After a Few Minutes</b>'.format(region, processing_job_name)))\n",
"_____no_output_____"
],
[
"from IPython.core.display import display, HTML\n\ns3_job_output_prefix = output_prefix\n\ndisplay(HTML('<b>Review <a target=\"blank\" href=\"https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview\">S3 Output Data</a> After The Spark Job Has Completed</b>'.format(bucket, s3_job_output_prefix, region)))\n",
"_____no_output_____"
]
],
[
[
"# Monitor the Processing Job",
"_____no_output_____"
]
],
[
[
"running_processor = sagemaker.processing.ProcessingJob.from_processing_name(processing_job_name=processing_job_name,\n sagemaker_session=sagemaker_session)\n\nprocessing_job_description = running_processor.describe()\n\nprint(processing_job_description)",
"_____no_output_____"
],
[
"running_processor.wait()",
"_____no_output_____"
]
],
[
[
"# _Please Wait Until the ^^ Processing Job ^^ Completes Above._",
"_____no_output_____"
],
[
"# Inspect the Processed Output \n\n## These are the quality checks on our dataset.\n\n## _The next cells will not work properly until the job completes above._",
"_____no_output_____"
]
],
[
[
"!aws s3 ls --recursive $s3_output_analyze_data/",
"_____no_output_____"
]
],
[
[
"## Copy the Output from S3 to Local\n* dataset-metrics/\n* constraint-checks/\n* success-metrics/\n* constraint-suggestions/\n",
"_____no_output_____"
]
],
[
[
"!aws s3 cp --recursive $s3_output_analyze_data ./amazon-reviews-spark-analyzer/ --exclude=\"*\" --include=\"*.csv\"",
"_____no_output_____"
]
],
[
[
"## Analyze Constraint Checks",
"_____no_output_____"
]
],
[
[
"import glob\nimport pandas as pd\nimport os\n\ndef load_dataset(path, sep, header):\n data = pd.concat([pd.read_csv(f, sep=sep, header=header) for f in glob.glob('{}/*.csv'.format(path))], ignore_index = True)\n\n return data",
"_____no_output_____"
],
[
"df_constraint_checks = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-checks/', sep='\\t', header=0)\ndf_constraint_checks[['check', 'constraint', 'constraint_status', 'constraint_message']]",
"_____no_output_____"
]
],
[
[
"## Analyze Dataset Metrics",
"_____no_output_____"
]
],
[
[
"df_dataset_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/dataset-metrics/', sep='\\t', header=0)\ndf_dataset_metrics",
"_____no_output_____"
]
],
[
[
"## Analyze Success Metrics",
"_____no_output_____"
]
],
[
[
"df_success_metrics = load_dataset(path='./amazon-reviews-spark-analyzer/success-metrics/', sep='\\t', header=0)\ndf_success_metrics",
"_____no_output_____"
]
],
[
[
"## Analyze Constraint Suggestions",
"_____no_output_____"
]
],
[
[
"df_constraint_suggestions = load_dataset(path='./amazon-reviews-spark-analyzer/constraint-suggestions/', sep='\\t', header=0)\ndf_constraint_suggestions.columns=['column_name', 'description', 'code']\ndf_constraint_suggestions",
"_____no_output_____"
]
],
[
[
"# Save for the Next Notebook(s)",
"_____no_output_____"
]
],
[
[
"%store df_dataset_metrics",
"_____no_output_____"
],
[
"%%javascript\nJupyter.notebook.save_checkpoint();\nJupyter.notebook.session.delete();",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c0cec46223aa16d4df26bcd82246489a16a6d9 | 1,898 | ipynb | Jupyter Notebook | parsers/.ipynb_checkpoints/unigo-checkpoint.ipynb | b09dan/universities_sentiment | 45df675187757579ef97a17564f9fbc2abd1eace | [
"MIT"
] | 2 | 2017-09-30T21:30:08.000Z | 2017-09-30T21:30:10.000Z | parsers/.ipynb_checkpoints/unigo-checkpoint.ipynb | b09dan/universities_sentiment | 45df675187757579ef97a17564f9fbc2abd1eace | [
"MIT"
] | null | null | null | parsers/.ipynb_checkpoints/unigo-checkpoint.ipynb | b09dan/universities_sentiment | 45df675187757579ef97a17564f9fbc2abd1eace | [
"MIT"
] | null | null | null | 24.025316 | 94 | 0.581665 | [
[
[
"from urllib.request import urlopen # Library for urlopen\nfrom bs4 import BeautifulSoup # Library for html parser (scraper), lxml is also nice\nimport pandas as pd\nimport re\nimport sys\nsys.path.append('..') \nfrom uni_cache.cache_function import cache_function\nimport pymysql\nimport collections\nimport mysql_credits",
"_____no_output_____"
],
[
"# This folder should be edited according to this project path on yours computer\nproject_folder = '/home/bogdan/PycharmProjects/universities_sentiment/'\ncache_folder = project_folder + 'cache/'\nsite = 'https://www.whatuni.com/university-course-reviews/university-of-oxford/3757/'\n\n\nconnection = pymysql.connect(\n host=mysql_credits.db_host,\n user=mysql_credits.db_user,\n password=mysql_credits.db_password,\n db=mysql_credits.db,\n charset='utf8mb4',\n cursorclass=pymysql.cursors.DictCursor\n)\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0c0d413c8206fe7e2a2d3aa8ed4a49025b70d8a | 705,834 | ipynb | Jupyter Notebook | _solved/case2_observations_analysis.ipynb | jonasvdd/DS-python-data-analysis | 835226f562ee0b0631d70e48a17c4526ff58a538 | [
"BSD-3-Clause"
] | 4 | 2021-05-28T13:40:42.000Z | 2022-03-29T17:36:48.000Z | _solved/case2_observations_analysis.ipynb | jonasvdd/DS-python-data-analysis | 835226f562ee0b0631d70e48a17c4526ff58a538 | [
"BSD-3-Clause"
] | 20 | 2021-06-21T10:11:22.000Z | 2022-03-24T18:46:44.000Z | notebooks/case2_observations_analysis.ipynb | jorisvandenbossche/course-python-data | 918fd22204e458a7a2358a3bd398611cde06c94a | [
"BSD-3-Clause"
] | null | null | null | 176.414396 | 105,696 | 0.861963 | [
[
[
"<p><font size=\"6\"><b> CASE - Observation data - analysis</b></font></p>\n\n> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:[email protected]>, <mailto:[email protected]>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*\n\n---",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nplt.style.use('seaborn-whitegrid')",
"_____no_output_____"
]
],
[
[
"## 1. Reading in the enriched observations data",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Read in the `survey_data_completed.csv` file and save the resulting `DataFrame` as variable `survey_data_processed` (if you did not complete the previous notebook, a version of the csv file is available in the `data` folder).\n- Interpret the 'eventDate' column directly as python `datetime` objects and make sure the 'occurrenceID' column is used as the index of the resulting DataFrame (both can be done at once when reading the csv file using parameters of the `read_csv` function)\n- Inspect the first five rows of the DataFrame and the data types of each of the data columns. Verify that the 'eventDate' indeed has a datetime data type.\n\n<details><summary>Hints</summary>\n\n- All read functions in Pandas start with `pd.read_...`.\n- To check the documentation of a function, use the keystroke combination of SHIFT + TAB when the cursor is on the function.\n- Remember `.head()` and `.info()`?\n\n</details>\n\n</div>",
"_____no_output_____"
]
],
[
[
"survey_data_processed = pd.read_csv(\"data/survey_data_completed.csv\",\n parse_dates=['eventDate'], index_col=\"occurrenceID\")",
"_____no_output_____"
],
[
"survey_data_processed.head()",
"_____no_output_____"
],
[
"survey_data_processed.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 35550 entries, 1 to 35550\nData columns (total 19 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 verbatimLocality 35550 non-null int64 \n 1 verbatimSex 33042 non-null object \n 2 wgt 32283 non-null float64 \n 3 datasetName 35550 non-null object \n 4 sex 33041 non-null object \n 5 eventDate 35550 non-null datetime64[ns]\n 6 decimalLongitude 35550 non-null float64 \n 7 decimalLatitude 35550 non-null float64 \n 8 genus 33535 non-null object \n 9 species 33535 non-null object \n 10 taxa 33535 non-null object \n 11 name 33535 non-null object \n 12 class 33448 non-null object \n 13 kingdom 33448 non-null object \n 14 order 33448 non-null object \n 15 phylum 33448 non-null object \n 16 scientificName 33448 non-null object \n 17 status 33448 non-null object \n 18 usageKey 33448 non-null float64 \ndtypes: datetime64[ns](1), float64(4), int64(1), object(13)\nmemory usage: 5.4+ MB\n"
]
],
[
[
"## 2. Tackle missing values (NaN) and duplicate values",
"_____no_output_____"
],
[
"See [pandas_08_missing_values.ipynb](pandas_08_missing_values.ipynb) for an overview of functionality to work with missing values.",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nHow many records in the data set have no information about the `species`? Use the `isna()` method to find out.\n\n<details><summary>Hints</summary>\n\n- Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN\n- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.\n\n</details>",
"_____no_output_____"
]
],
[
[
"survey_data_processed['species'].isna().sum()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nHow many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate.\n\n<details><summary>Hints</summary>\n\n- The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum.\n\n</details>",
"_____no_output_____"
]
],
[
[
"survey_data_processed.duplicated().sum()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument define which duplicates (if any) to mark.\n- Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records.\n\n<details><summary>Hints</summary>\n\n- Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data.\n- `sort_values()` can work with a single columns name as well as a list of names.\n\n</details>",
"_____no_output_____"
]
],
[
[
"duplicate_observations = survey_data_processed[survey_data_processed.duplicated(keep=False)]\nduplicate_observations.sort_values([\"eventDate\", \"verbatimLocality\"]).head(9)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `survey_data_unique`. Use the `drop duplicates()` method from Pandas.\n- How many observations are still left in the data set?\n\n<details><summary>Hints</summary>\n\n- `keep=First` is the default option for `drop_duplicates`\n- The number of rows in a DataFrame is equal to the `len`gth\n\n</details>",
"_____no_output_____"
]
],
[
[
"survey_data_unique = survey_data_processed.drop_duplicates()",
"_____no_output_____"
],
[
"len(survey_data_unique)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nUse the `dropna()` method to find out:\n\n- For how many observations (rows) we have all the information available (i.e. no NaN values in any of the columns)?\n- For how many observations (rows) we do have the `species_ID` data available ?\n\n<details><summary>Hints</summary>\n\n- `dropna` by default removes by default all rows for which _any_ of the columns contains a `NaN` value.\n- To specify which specific columns to check, use the `subset` argument\n\n</details>",
"_____no_output_____"
]
],
[
[
"len(survey_data_unique.dropna()), len(survey_data_unique.dropna(subset=['species']))",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nFilter the `survey_data_unique` data and select only those records that do not have a `species` while having information on the `sex`. Store the result as variable `not_identified`.\n\n<details><summary>Hints</summary>\n\n- To combine logical operators element-wise in Pandas, use the `&` operator.\n- Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values.\n\n</details>",
"_____no_output_____"
]
],
[
[
"mask = survey_data_unique['species'].isna() & survey_data_unique['sex'].notna()\nnot_identified = survey_data_unique[mask]",
"_____no_output_____"
],
[
"not_identified.head()",
"_____no_output_____"
]
],
[
[
"__NOTE!__\n\nThe `DataFrame` we will use in the further analyses contains species information:",
"_____no_output_____"
]
],
[
[
"survey_data = survey_data_unique.dropna(subset=['species']).copy()\nsurvey_data['name'] = survey_data['genus'] + ' ' + survey_data['species']",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-info\">\n\n**INFO**\n\nFor biodiversity studies, absence values (knowing that something is not present) are useful as well to normalize the observations, but this is out of scope for these exercises.\n</div>",
"_____no_output_____"
],
[
"## 3. Select subsets of the data",
"_____no_output_____"
]
],
[
[
"survey_data['taxa'].value_counts()\n#survey_data.groupby('taxa').size()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection.\n\n<details><summary>Hints</summary>\n\n- You do not have to combine three different conditions, but use the `isin` operator with a list of names.\n\n</details>",
"_____no_output_____"
]
],
[
[
"non_rodent_species = survey_data[survey_data['taxa'].isin(['Rabbit', 'Bird', 'Reptile'])]\nnon_rodent_species.head()",
"_____no_output_____"
],
[
"len(non_rodent_species)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nSelect the observations for which the `name` starts with the characters 'r' (make sure it does not matter if a capital character is used in the 'taxa' name). Call the resulting variable `r_species`.\n\n<details><summary>Hints</summary>\n\n- Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other.\n- If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`)\n\n</details>",
"_____no_output_____"
]
],
[
[
"r_species = survey_data[survey_data['name'].str.lower().str.startswith('r')]\nr_species.head()",
"_____no_output_____"
],
[
"len(r_species)",
"_____no_output_____"
],
[
"r_species[\"name\"].value_counts()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nSelect the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>.\n\n<details><summary>Hints</summary>\n\n- Logical operators like `==`, `!=`, `>`,... can still be used.\n\n</details>",
"_____no_output_____"
]
],
[
[
"non_bird_species = survey_data[survey_data['taxa'] != 'Bird']\nnon_bird_species.head()",
"_____no_output_____"
],
[
"len(non_bird_species)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nSelect the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 using the `eventDate` column. Call the resulting variable `birds_85_89`.\n\n<details><summary>Hints</summary>\n\n- No hints, you can do this! (with the help of some `<=` and `&`, and don't forget the put brackets around each comparison that you combine)\n\n\n</details>",
"_____no_output_____"
]
],
[
[
"birds_85_89 = survey_data[(survey_data[\"eventDate\"] >= \"1985-01-01\")\n & (survey_data[\"eventDate\"] <= \"1989-12-31 23:59\")\n & (survey_data['taxa'] == 'Bird')]\nbirds_85_89.head()",
"_____no_output_____"
]
],
[
[
"Alternative solution:",
"_____no_output_____"
]
],
[
[
"# alternative solution\nbirds_85_89 = survey_data[(survey_data[\"eventDate\"].dt.year >= 1985)\n & (survey_data[\"eventDate\"].dt.year <= 1989)\n & (survey_data['taxa'] == 'Bird')]\nbirds_85_89.head()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Drop the observations for which no 'weight' (`wgt` column) information is available.\n- On the filtered data, compare the median weight for each of the species (use the `name` column)\n- Sort the output from high to low median weight (i.e. descending)\n\n__Note__ You can do this all in a single line statement, but don't have to do it as such!\n\n<details><summary>Hints</summary>\n\n- You will need `dropna`, `groupby`, `median` and `sort_values`.\n\n</details>",
"_____no_output_____"
]
],
[
[
"# Multiple lines\nobs_with_weight = survey_data.dropna(subset=[\"wgt\"])\nmedian_weight = obs_with_weight.groupby(['name'])[\"wgt\"].median()\nmedian_weight.sort_values(ascending=False)",
"_____no_output_____"
],
[
"# Single line statement\n(survey_data\n .dropna(subset=[\"wgt\"])\n .groupby(['name'])[\"wgt\"]\n .median()\n .sort_values(ascending=False)\n)",
"_____no_output_____"
]
],
[
[
"## 4. Species abundance",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nWhich 8 species (use the `name` column to identify the different species) have been observed most over the entire data set?\n\n<details><summary>Hints</summary>\n\n- Pandas provide a function to combine sorting and showing the first n records, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html)...\n\n</details>",
"_____no_output_____"
]
],
[
[
"survey_data.groupby(\"name\").size().nlargest(8)",
"_____no_output_____"
],
[
"survey_data['name'].value_counts()[:8]",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- What is the number of different species in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`.\n- Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make an horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'.\n\n<details><summary>Hints</summary>\n\n- _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups.\n- `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes.\n\n</details>",
"_____no_output_____"
]
],
[
[
"n_species_per_plot = survey_data.groupby([\"verbatimLocality\"])[\"name\"].nunique()\n\nfig, ax = plt.subplots(figsize=(6, 6))\nn_species_per_plot.plot(kind=\"barh\", ax=ax, color=\"lightblue\")\nax.set_ylabel(\"plot number\")\n\n# Alternative option:\n# inspired on the pivot table we already had:\n# species_per_plot = survey_data.reset_index().pivot_table(\n# index=\"name\", columns=\"verbatimLocality\", values=\"occurrenceID\", aggfunc='count')\n# n_species_per_plot = species_per_plot.count()",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- What is the number of plots (`verbatimLocality`) each of the species have been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high.\n- Make an horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found (using the `n_plots_per_species` variable).\n\n<details><summary>Hints</summary>\n\n- Use the previous exercise to solve this one.\n\n</details>",
"_____no_output_____"
]
],
[
[
"n_plots_per_species = survey_data.groupby([\"name\"])[\"verbatimLocality\"].nunique().sort_values()\n\nfig, ax = plt.subplots(figsize=(8, 8))\nn_plots_per_species.plot(kind=\"barh\", ax=ax, color='0.4')\nax.set_xlabel(\"Number of plots\");\nax.set_ylabel(\"\");",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named \"count\".\n- Use `pivot` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`.\n\n<details><summary>Hints</summary>\n\n- _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time.\n- If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method.\n- `reset_index()` is useful function to convert multiple indices into columns again.\n\n</details>",
"_____no_output_____"
]
],
[
[
"n_plot_sex = survey_data.groupby([\"sex\", \"verbatimLocality\"]).size().rename(\"count\").reset_index()\nn_plot_sex.head()",
"_____no_output_____"
],
[
"pivoted = n_plot_sex.pivot(columns=\"sex\", index=\"verbatimLocality\", values=\"count\")",
"_____no_output_____"
],
[
"pivoted.head()",
"_____no_output_____"
]
],
[
[
"To check, we can use the variable `pivoted` to plot the result:",
"_____no_output_____"
]
],
[
[
"pivoted.plot(kind='bar', figsize=(12, 6), rot=0)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nRecreate the previous plot with the `catplot` function from the Seaborn library starting from `n_plot_sex`.\n\n<details><summary>Hints</summary>\n\n- Check the `kind` argument of the `catplot` function to figure out to specify you want a barplot with given x and y values.\n- To link a column to different colors, use the `hue` argument\n\n\n</details>",
"_____no_output_____"
]
],
[
[
"sns.catplot(data=n_plot_sex, x=\"verbatimLocality\", y=\"count\",\n hue=\"sex\", kind=\"bar\", height=3, aspect=3)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nRecreate the previous plot with the `catplot` function from the Seaborn library directly starting from `survey_data`.\n\n<details><summary>Hints</summary>\n\n- Check the `kind`argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value.\n- To link a column to different colors, use the `hue` argument\n\n\n</details>",
"_____no_output_____"
]
],
[
[
"sns.catplot(data=survey_data, x=\"verbatimLocality\",\n hue=\"sex\", kind=\"count\", height=3, aspect=3)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Make a summary table with the number of records of each of the species in each of the plots (also called `verbatimLocality`). Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name.\n- Using the Seaborn <a href=\"http://seaborn.pydata.org/generated/seaborn.heatmap.html\">documentation</a> to make a heatmap.\n\n<details><summary>Hints</summary>\n\n- Make sure to pass the correct columns to respectively the `index`, `columns`, `values` and `aggfunc` parameters of the `pivot_table` function. You can use the `datasetName` to count the number of observations for each name/locality combination (when counting rows, the exact column doesn't matter).\n\n</details>",
"_____no_output_____"
]
],
[
[
"species_per_plot = survey_data.pivot_table(index=\"name\",\n columns=\"verbatimLocality\",\n values=\"datasetName\",\n aggfunc='count')\n\n# alternative ways to calculate this\n#species_per_plot = survey_data.groupby(['name', 'verbatimLocality']).size().unstack(level=-1)\n#pecies_per_plot = pd.crosstab(survey_data['name'], survey_data['verbatimLocality'])",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(8,8))\nsns.heatmap(species_per_plot, ax=ax, cmap='Greens')",
"_____no_output_____"
]
],
[
[
"## 5. Observations over time",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nMake a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method.\n\n<details><summary>Hints</summary>\n\n- You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use.\n- `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year.\n\n</details>",
"_____no_output_____"
]
],
[
[
"survey_data.resample('A', on='eventDate').size().plot()",
"_____no_output_____"
]
],
[
[
"To evaluate the intensity or number of occurrences during different time spans, a heatmap is an interesting representation.",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\n- Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations.\n- Using the seaborn <a href=\"http://seaborn.pydata.org/generated/seaborn.heatmap.html\">documentation</a>, make a heatmap starting from the `heatmap_prep` variable.\n\n<details><summary>Hints</summary>\n\n- The `.dt` accessor can be used to get the `year`, `month`,... from a `datetime` column\n- Use `pivot_table` and provide the years to `index` and the months to `columns`. Do not forget to `count` the number for each combination (`aggfunc`).\n- Seaborn has an `heatmap` function which requires a short-form DataFrame, comparable to giving each element in a table a color value.\n\n</details>",
"_____no_output_____"
]
],
[
[
"heatmap_prep = survey_data.pivot_table(index=survey_data['eventDate'].dt.year,\n columns=survey_data['eventDate'].dt.month,\n values='species', aggfunc='count')\nfig, ax = plt.subplots(figsize=(10, 8))\nax = sns.heatmap(heatmap_prep, cmap='Reds')",
"_____no_output_____"
]
],
[
[
"Remark that we started from a `tidy` data format (also called *long* format) and converted to *short* format with in the row index the years, in the column the months and the counts for each of these year/month combinations as values.",
"_____no_output_____"
],
[
"## (OPTIONAL SECTION) 6. Evolution of species during monitoring period",
"_____no_output_____"
],
[
"*In this section, all plots can be made with the embedded Pandas plot function, unless specificly asked*",
"_____no_output_____"
],
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nPlot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years.\n\n<details><summary>Hints</summary>\n\n- _...for each month of..._ requires `groupby`.\n- `resample` is not useful here, as we do not want to change the time-interval, but look at month of the year (over all years)\n\n</details>",
"_____no_output_____"
]
],
[
[
"merriami = survey_data[survey_data[\"name\"] == \"Dipodomys merriami\"]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots()\nmerriami.groupby(merriami['eventDate'].dt.month).size().plot(kind=\"barh\", ax=ax)\nax.set_xlabel(\"number of occurrences\");\nax.set_ylabel(\"Month of the year\");",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nPlot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time during the monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale\n\n<details><summary>Hints</summary>\n\n- `isin` is useful to select from within a list of elements.\n- `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters!\n- `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html) as it might be helpful for this exercise.\n\n</details>",
"_____no_output_____"
]
],
[
[
"subsetspecies = survey_data[survey_data[\"name\"].isin(['Dipodomys merriami', 'Dipodomys ordii',\n 'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]",
"_____no_output_____"
],
[
"month_evolution = subsetspecies.groupby(\"name\").resample('M', on='eventDate').size()",
"_____no_output_____"
],
[
"species_evolution = month_evolution.unstack(level=0)\naxs = species_evolution.plot(subplots=True, figsize=(14, 8), sharey=True)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nRecreate the same plot as in the previous exercise using Seaborn `relplot` functon with the `month_evolution` variable.\n \n<details><summary>Hints</summary>\n\n- We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively.\n- To create subplots in Seaborn, the usage of _facetting_ (splitting data sets to multiple facets) is used by linking a column name to the `row`/`col` parameter. \n- Using `height` and `widht`, the figure size can be optimized.\n \n</details>",
"_____no_output_____"
],
[
"Uncomment the next cell (calculates `month_evolution`, the intermediate result of the previous excercise):",
"_____no_output_____"
]
],
[
[
"# Given as solution..\nsubsetspecies = survey_data[survey_data[\"name\"].isin(['Dipodomys merriami', 'Dipodomys ordii',\n 'Reithrodontomys megalotis', 'Chaetodipus baileyi'])]\nmonth_evolution = subsetspecies.groupby(\"name\").resample('M', on='eventDate').size().rename(\"counts\")\nmonth_evolution = month_evolution.reset_index()\nmonth_evolution.head()",
"_____no_output_____"
]
],
[
[
"Plotting with seaborn:",
"_____no_output_____"
]
],
[
[
"sns.relplot(data=month_evolution, x='eventDate', y=\"counts\",\n row=\"name\", kind=\"line\", hue=\"name\", height=2, aspect=5)",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nPlot the annual amount of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets.\n\n<details><summary>Hints</summary>\n\n- Combine `resample` and `groupby`!\n- Check out the previous exercise for the plot function.\n- Pass the `sharey=False` to the `facet_kws` argument as a dictionary.\n\n</details>",
"_____no_output_____"
]
],
[
[
"year_evolution = survey_data.groupby(\"taxa\").resample('A', on='eventDate').size()\nyear_evolution.name = \"counts\"\nyear_evolution = year_evolution.reset_index()",
"_____no_output_____"
],
[
"sns.relplot(data=year_evolution, x='eventDate', y=\"counts\",\n col=\"taxa\", col_wrap=2, kind=\"line\", height=2, aspect=5,\n facet_kws={\"sharey\": False})",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-success\">\n\n**EXERCISE**\n\nThe observations where taken by volunteers. You wonder on which day of the week the most observations where done. Calculate for each day of the week (`dayofweek`) the number of observations and make a bar plot.\n\n<details><summary>Hints</summary>\n\n- Did you know the Python standard Library has a module `calendar` which contains names of week days, month names,...?\n\n</details>",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nsurvey_data.groupby(survey_data[\"eventDate\"].dt.dayofweek).size().plot(kind='barh', color='#66b266', ax=ax)\nimport calendar\nxticks = ax.set_yticklabels(calendar.day_name)",
"_____no_output_____"
]
],
[
[
"Nice work!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
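The survey-data record above repeatedly combines `groupby`, `resample` and `pivot_table`/`crosstab` to turn raw occurrence rows into species-level summaries. As a quick reference for readers scanning this dump, here is a minimal, self-contained sketch of those two core patterns; the toy DataFrame and its values are invented for illustration only and are not part of the dataset in that record.

```python
# Minimal sketch of the groupby/resample and cross-tabulation patterns used in
# the survey exercises above. The toy data below is invented for illustration.
import pandas as pd

toy = pd.DataFrame({
    "name": ["Dipodomys merriami", "Dipodomys merriami", "Chaetodipus baileyi",
             "Dipodomys merriami", "Chaetodipus baileyi"],
    "verbatimLocality": [1, 2, 1, 2, 3],
    "eventDate": pd.to_datetime(["1985-01-12", "1985-02-03", "1985-02-20",
                                 "1986-07-01", "1986-07-15"]),
})

# Monthly number of records per species: group by species first, then
# resample the datetime column within each group and count the rows.
monthly_counts = (toy.groupby("name")
                     .resample("M", on="eventDate")
                     .size()
                     .rename("counts")
                     .reset_index())
print(monthly_counts)

# Species x plot summary table (the input used for the heatmap exercise).
species_per_plot = pd.crosstab(toy["name"], toy["verbatimLocality"])
print(species_per_plot)
```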
d0c0daea32bc417c1899de6736fa4e24ef2271dc | 16,067 | ipynb | Jupyter Notebook | notebooks/Classify Image Example-ResNet50.ipynb | AFathi/TuriCreate-Notebook | 418d9692144415e520241f5022d6a485dbd278e7 | [
"MIT"
] | 16 | 2017-12-22T22:43:54.000Z | 2021-04-21T16:39:22.000Z | notebooks/Classify Image Example-ResNet50.ipynb | AFathi/TuriCreate-Notebook | 418d9692144415e520241f5022d6a485dbd278e7 | [
"MIT"
] | 2 | 2019-12-16T06:27:33.000Z | 2020-05-06T19:10:05.000Z | notebooks/Classify Image Example-ResNet50.ipynb | AFathi/TuriCreate-Notebook | 418d9692144415e520241f5022d6a485dbd278e7 | [
"MIT"
] | 4 | 2018-01-15T19:53:16.000Z | 2021-04-21T16:39:24.000Z | 27.894097 | 260 | 0.493496 | [
[
[
"# Classify Images using Residual Network with 50 layers (ResNet-50)",
"_____no_output_____"
],
[
"## Import Turi Create\nPlease follow the repository README instructions to install the Turi Create package.\n\n**Note**: Turi Create is currently only compatible with Python 2.7",
"_____no_output_____"
]
],
[
[
"import turicreate as turi",
"_____no_output_____"
]
],
[
[
"## Reference the dataset path",
"_____no_output_____"
]
],
[
[
"url = \"data/food_images\"",
"_____no_output_____"
]
],
[
[
"## Label the dataset\nIn the following block of code we will labels the image in the dataset of **Egg** and **Soup** images. Then we will export it as an `SFrame` data object to use it for training the image classification model.\n\n1. The first line of code loads the folder images content using the `image_analysis` property. \n\n2. The second line creates a _foodType_ key for each image in the dataset to specify whether it's an **Egg** or **Soup** based on which folder it's located in.\n\n3. The third line exports the analyzed data as an `SFrame` object in order to use it while creating our image classifier.\n\n4. The fourth line simply visualises the new labeled image into a large list.\n\n**Note**:- You do not have to run the following block of code everytime you create a classifer, unless you changed/edited the dataset.",
"_____no_output_____"
]
],
[
[
"data = turi.image_analysis.load_images(url)\ndata[\"foodType\"] = data[\"path\"].apply(lambda path: \"Eggs\" if \"eggs\" in path else \"Soup\")\ndata.save(\"egg_or_soup.sframe\")\ndata.explore()",
"_____no_output_____"
]
],
[
[
"## Load the labeled SFrame\nIn the following line of code we are loading the `SFrame` object that contains the images in our dataset with their labels.",
"_____no_output_____"
]
],
[
[
"dataBuffer = turi.SFrame(\"egg_or_soup.sframe\")",
"_____no_output_____"
]
],
[
[
"## Create training and test data using our existing dataset\nHere, we're randomly splitting the data.\n- 90% of the data in the `SFrame` object will be used for training the image classifier.\n- 10% of the data in the `SFrame` object will be used for testing the image classifier.",
"_____no_output_____"
]
],
[
[
"trainingBuffers, testingBuffers = dataBuffer.random_split(0.9)",
"_____no_output_____"
]
],
[
[
"## Train the image classifier\nIn the following line of code, we will create an image classifier and we'll feed it with the training data we have. \n\nIn this example, the image classifer's architecture will be a state-of-the-art Residual Network with 50 layers, also known as **ResNet-50**.\n\nCheck out the official paper here: https://arxiv.org/abs/1512.03385.",
"_____no_output_____"
]
],
[
[
"model = turi.image_classifier.create(trainingBuffers, target=\"foodType\", model=\"resnet-50\")",
"Resizing images...\nPerforming feature extraction on resized images...\nCompleted 270/270\nPROGRESS: Creating a validation set from 5 percent of training data. This may take a while.\n You can set ``validation_set=None`` to disable validation tracking.\n\n"
]
],
[
[
"## Evaluate the test data to determine the model accuracy",
"_____no_output_____"
]
],
[
[
"evaluations = model.evaluate(testingBuffers)\nprint evaluations[\"accuracy\"]",
"0.933333333333\n"
]
],
[
[
"## Save the Turi Create model to retrieve it later",
"_____no_output_____"
]
],
[
[
"model.save(\"egg_or_soup.model\")",
"_____no_output_____"
]
],
[
[
"## Export the image classification model for Core ML",
"_____no_output_____"
]
],
[
[
"model.export_coreml(\"EggSoupClassifier.mlmodel\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
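The Turi Create record above spreads its workflow across many small cells. Condensed into one sketch for reference: the API calls, dataset path, labelling rule and file names are exactly the ones shown in that notebook, so substitute your own image folder and label logic as needed.

```python
# Condensed version of the ResNet-50 notebook above (Turi Create, Python 2.7 era).
import turicreate as turi

# Load and label the images; the path and the "eggs"-in-path rule come from the notebook.
data = turi.image_analysis.load_images("data/food_images")
data["foodType"] = data["path"].apply(lambda p: "Eggs" if "eggs" in p else "Soup")

# Random 90% / 10% train-test split.
train, test = data.random_split(0.9)

# Train a ResNet-50-based image classifier and check hold-out accuracy.
model = turi.image_classifier.create(train, target="foodType", model="resnet-50")
print(model.evaluate(test)["accuracy"])

# Persist the Turi Create model and export a Core ML model for iOS/macOS apps.
model.save("egg_or_soup.model")
model.export_coreml("EggSoupClassifier.mlmodel")
```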