Dataset columns: markdown (string, 0 to 37k characters), code (string, 1 to 33.3k characters), path (string, 8 to 215 characters), repo_name (string, 6 to 77 characters), license (string, 15 classes).
We will use the same simulation parameters as before: $1000$ simulations for each total, and totals ranging from $50$ to $2000$ in steps of $50$.
nb = 1000
totalMax = 2000
totaux = list(range(50, totalMax + 50, 50))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
The curve for unCoup establishes the behavior of the naive strategy; we can then compare the other strategies against it.
plotResultatsDesPartiesSeul(nb, unCoup, totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Interesting: for unCoup, the curve is linear in the total. That is quite logical, given the strategy used! We score on every turn, so the average number of turns is simply the total divided by the average score. Recall that the average score of a single draw is about $96$ points (counting straights), and indeed $2000 / 96 \simeq 21$, which is what we read on the curve.
scoreMoyen = 96
total = 2000
total / scoreMoyen
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
For jusquauBout:
plotResultatsDesPartiesSeul(nb, jusquauBout, totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
We see that the jusquauBout strategy wins much faster than the unCoup strategy! For auMoins200, for example:
plotResultatsDesPartiesSeul(nb, auMoins200, totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
For bernoulli(0.5), for example:
plotResultatsDesPartiesSeul(nb, bernoulli(0.5), totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
For bernoulli(0.2), for example:
plotResultatsDesPartiesSeul(nb, bernoulli(0.2), totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
For bernoulli(0.8), for example:
plotResultatsDesPartiesSeul(nb, bernoulli(0.8), totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
These comparisons of different Bernoulli strategies let us conclude, as suggested earlier, that the best strategy (among the few tested) is the jusquauBout strategy! All the curves above show an (almost) linear behavior of the average number of turns needed to win as a function of the total. Thus, to compare different strategies, we can simply compare their average number of turns for a given total, say $T = 2000$.
def comparerStrategies(joueurs, nb=1000, total=2000):
    resultats = []
    for joueur in joueurs:
        historique = desPartiesSeul(nb, joueur, total)
        nbCoupMoyen = np.mean([len(h) - 1 for h in historique])
        resultats.append((nbCoupMoyen, joueur.__name__))
    # Sorting the results lets us see the best strategies first!
    return sorted(resultats)

joueurs = [unCoup, jusquauBout]
comparerStrategies(joueurs, nb=nb, total=totalMax)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's compare all the strategies defined above:
joueurs = [unCoup, jusquauBout]
joueurs += [auMoins50, auMoins100, auMoins150, auMoins200, auMoins250,
            auMoins300, auMoins350, auMoins400, auMoins450, auMoins500,
            auMoins550, auMoins600, auMoins650, auMoins700, auMoins800,
            auMoins850, auMoins900, auMoins950, auMoins1000]
for p in range(0, 20 + 1):
    joueurs.append(bernoulli(p/20.))
# print([j.__name__ for j in joueurs])

nb = 1000
totalMax = 2000
resultats = comparerStrategies(joueurs, nb=nb, total=totalMax)
print("For a total of {} and {} simulations ...".format(totalMax, nb))
for (i, (n, j)) in enumerate(resultats):
    print("- The strategy ranked #{:2} / {} is {:<14}, with an average number of turns = {:.3g} ...".format(i, len(joueurs), j, n))

nb = 2000
totalMax = 3000
resultats = comparerStrategies(joueurs, nb=nb, total=totalMax)
print("For a total of {} and {} simulations ...".format(totalMax, nb))
for (i, (n, j)) in enumerate(resultats):
    print("- The strategy ranked #{:2} / {} is {:<14}, with an average number of turns = {:.3g} ...".format(i, len(joueurs), j, n))

nb = 1000
totalMax = 5000
resultats = comparerStrategies(joueurs, nb=nb, total=totalMax)
print("For a total of {} and {} simulations ...".format(totalMax, nb))
for (i, (n, j)) in enumerate(resultats):
    print("- The strategy ranked #{:2} / {} is {:<14}, with an average number of turns = {:.3g} ...".format(i, len(joueurs), j, n))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's review the basics
Functions without arguments
def my_super_function():
    pass

def even_better():
    print('This is executed within a function')

even_better()
type(even_better)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Positional arguments aka mandatory parameters
import numpy as np

def uv2wdir(u, v):
    """Calculate horizontal wind direction (meteorological notation)"""
    return 180 + 180 / np.pi * np.arctan2(u, v)

a = uv2wdir(10, -10)
a
type(a)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Keyword (named) arguments aka optional parameters
def myfun(list_of_strings, separator=' ', another=123):
    result = separator.join(list_of_strings)
    return result

words = ['This', 'is', 'my', 'Function']
myfun(words, another=456, separator='-------')
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Dangerous default arguments
default_number = 10

def double_it(x=default_number):
    return x * 2

double_it()
double_it(2)

default_number = 100000000
double_it()
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
But what if we used a mutable type as a default argument?
def add_items_bad(element, times=1, lst=[]):
    for _ in range(times):
        lst.append(element)
    return lst

mylist = add_items_bad('a', 3)
print(mylist)
another_list = add_items_bad('b', 5)
print(another_list)

def add_items_good(element, times=1, lst=None):
    if lst is None:
        lst = []
    for _ in range(times):
        lst.append(element)
    return lst

mylist = add_items_good('a', 3)
print(mylist)
another_list = add_items_good('b', 5)
print(another_list)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
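One way to see why add_items_bad misbehaves (a small extra illustration, not part of the original notebook): the default list is evaluated once, when the def statement runs, and is stored on the function object, so every call that omits lst reuses that same list.

```python
# Inspect the defaults stored on the function objects defined above.
# The bad version keeps accumulating items in its single default list,
# while the good version stores None and builds a fresh list on each call.
print(add_items_bad.__defaults__)   # e.g. (1, ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'])
print(add_items_good.__defaults__)  # (1, None)
```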
Global variables
Variables declared outside the function can be referenced within the function:
x = 5

def add_x(y):
    return x + y

add_x(20)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
But these global variables cannot be modified within the function, unless declared global in the function.
def setx(y):
    global x
    x = y
    print('x is {}'.format(x))

x
setx(10)
print(x)

def foo():
    a = 1
    print(locals())

foo()
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Arbitrary number of arguments
Special forms of parameters:
* *args: any number of positional arguments packed into a tuple
* **kwargs: any number of keyword arguments packed into a dictionary
def variable_args(*args, **kwargs):
    print('args are', args)
    print('kwargs are', kwargs)
    if 'z' in kwargs:
        print(kwargs['z'])

variable_args('foo', 'bar', x=1, y=2)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Example 1
def smallest(x, y):
    if x < y:
        return x
    else:
        return y

smallest(1, 2)
# smallest(1, 2, 3) <- results in TypeError

def smallest(x, *args):
    small = x
    for y in args:
        if y < small:
            small = y
    return small

smallest(11)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
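The variadic version accepts any number of arguments; a couple of extra calls (added here for illustration) make that concrete.

```python
# *args lets the same function take one, two, or many positional arguments.
smallest(3, 1, 4, 1, 5)  # returns 1
smallest(11, 7)          # returns 7
```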
Example 2 Unpacking a dictionary of keyword arguments is particularly handy in matplotlib.
import matplotlib.pyplot as plt
%matplotlib inline

arr1 = np.random.rand(100)
arr2 = np.random.rand(100)

style1 = dict(linewidth=3, color='#FF0123')
style2 = dict(linestyle='--', color='skyblue')

plt.plot(arr1, **style1)
plt.plot(arr2, **style2)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Passing functions into functions <img src="https://lifebeyondfife.com/wp-content/uploads/2015/05/functions.jpg" width=400> Functions are first-class objects. This means that functions can be passed around and used as arguments, just like any other value (e.g., string, int, float).
def find_special_numbers(special_selector, limit=10):
    found = []
    n = 0
    while len(found) < limit:
        if special_selector(n):
            found.append(n)
        n += 1
    return found

def check_odd(a):
    return a % 2 == 1

mylist = find_special_numbers(check_odd, 25)
for n in mylist:
    print(n, end=',')
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
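Being first-class also means functions can be stored in containers and returned from other functions; a small sketch (not from the original notebook) to round out the idea:

```python
# A function that builds and returns another function, plus a dict of functions.
def make_multiplier(factor):
    def multiply(x):
        return x * factor
    return multiply

operations = {'double': make_multiplier(2), 'triple': make_multiplier(3)}
print(operations['double'](10), operations['triple'](10))  # 20 30
```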
But lots of small functions can clutter your code... Lambda expressions Highly pythonic! Pseudocode: check = i -> return True if i % 6 == 0
check = lambda i: i % 6 == 0 #check = lambda
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Lambdas usually are not defined on their own, but inserted in-place.
find_special_numbers(lambda i: i % 6 == 0, 5)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Another common example
lyric = "Never gonna give you up"
words = lyric.split()
words

sorted(words, key=lambda x: x.lower())
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
How to sort a list of strings, each of which is a number? Just using sorted() does not give us what we want:
lst = ['20', '1', '2', '100']
sorted(lst)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
But we can use a lambda-expression to overcome this problem: Option 1:
sorted(lst, key=lambda x: int(x))
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
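As a small aside (not in the original post), the lambda in Option 1 is not strictly needed, since int itself is already a callable that does the conversion:

```python
# int can be passed directly as the key function.
sorted(lst, key=int)
```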
Option 2:
sorted(lst, key=lambda x: x.zfill(16))
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
By the way, what does the zfill() method do? It pads a string with zeros:
'aaaa'.zfill(10)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Resources
- Write Pythonic Code Like a Seasoned Developer Course Demo Code
- Scipy Lecture notes
HTML(html)
content/notebooks/2017-01-20-function-quirks.ipynb
ueapy/ueapy.github.io
mit
Does improved weight pruning outperform regular SET?
agg(['model'])
agg(['on_perc'])
agg(['on_perc', 'model'])

# translate model names
rcParams['figure.figsize'] = 16, 8
d = {
    'DSNNWeightedMag': 'DSNN',
    'DSNNMixedHeb': 'SET',
    'SparseModel': 'Static',
}
df_plot = df.copy()
df_plot['model'] = df_plot['model'].apply(lambda x: d[x])

# sns.scatterplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')
sns.lineplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')

rcParams['figure.figsize'] = 16, 8
filter = df_plot['model'] != 'Static'
sns.lineplot(data=df_plot[filter], x='on_perc', y='val_acc_max_epoch', hue='model')
sns.lineplot(data=df_plot, x='on_perc', y='val_acc_last', hue='model')
projects/archive/dynamic_sparse/notebooks/ExperimentAnalysis-MNISTSparser.ipynb
mrcslws/nupic.research
agpl-3.0
Loading the Data
Data Source: To access and use the data, it is easiest to download the files directly to your computer and import them into Jupyter Notebook from the location on your computer. First, we access the 3 data files from the local file paths and save them to DataFrames:
- airports.csv - airport codes, names, and locations
- airlines.csv - airline codes and names
- flights.csv - commercial domestic flights in 2015, flight info

[df].head() helps us see the data and variables we are dealing with in each file.
path = 'C:/Users/Ziqi/Desktop/Data Bootcamp/Project/airports.csv'
airports = pd.read_csv(path)
airports.head()

airlines = pd.read_csv('C:/Users/Ziqi/Desktop/Data Bootcamp/Project/airlines.csv')
airlines.head()

flights = pd.read_csv('C:/Users/Ziqi/Desktop/Data Bootcamp/Project/flights.csv', low_memory=False)  # (this is a big data file)
flights.head()

# number of rows and columns of each DataFrame
print('airports:', airports.shape)
print('airlines:', airlines.shape)
print('flights:', flights.shape)
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
We see that the data contain 322 airports, 14 airlines, and 5,819,079 flights. First Glance: Average Arrival Delay The main DataFrame of interest is flights, which contains information about airlines, airports, and delays. Here, we examine the columns in flights and create a new DataFrame with our columns of interest.
# list of column names and datatypes in flights
flights.info()
flights.index
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
Cleaning and Shaping
# create new DataFrame with relevant variables
columns = ['YEAR', 'MONTH', 'DAY', 'DAY_OF_WEEK', 'AIRLINE', 'FLIGHT_NUMBER',
           'ORIGIN_AIRPORT', 'DESTINATION_AIRPORT', 'DEPARTURE_DELAY', 'ARRIVAL_DELAY',
           'DIVERTED', 'CANCELLED', 'AIR_SYSTEM_DELAY', 'SECURITY_DELAY',
           'AIRLINE_DELAY', 'LATE_AIRCRAFT_DELAY', 'WEATHER_DELAY']
flights2 = pd.DataFrame(flights, columns=columns)
flights2.head()

# for later convenience, we will replace the airline codes with each airline's full name, using a dictionary
airlines_dictionary = dict(zip(airlines['IATA_CODE'], airlines['AIRLINE']))
flights2['AIRLINE'] = flights2['AIRLINE'].apply(lambda x: airlines_dictionary[x])
flights2.head()
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
The DataFrame flights2 will serve as the foundation for our analysis on US domestic flight delays in 2015. We can further examine the data to determine which airline is the "most punctual". First, we will rank the airlines by average arrival delay. We are mainly concerned about arrival delay because regardless of whether a flight departs on time, what matters most to the passenger is whether he or she arrives at the final destination on time. Of course, a significant departure delay may result in an arrival delay. However, airlines may include a buffer in the scheduled arrival time to ensure that passengers reach their destination at the promised time.
# create DataFrame with airlines and arrival delays
delays = flights2[['AIRLINE', 'DEPARTURE_DELAY', 'ARRIVAL_DELAY']]

# if we hadn't used a dictionary to change the airline names, this is the code we would have used to produce the same result:
#flights4 = pd.merge(airlines, flights3, left_on='IATA_CODE', right_on='AIRLINE', how='left')
#flights4.drop('IATA_CODE', axis=1, inplace=True)
#flights4.drop('AIRLINE_y', axis=1, inplace=True)
#flights4.rename(columns={'AIRLINE_x': 'AIRLINE'}, inplace=True)

delays.head()

# group data by airline name, calculate average arrival delay for each airline in 2015
airline_av_delay = delays.groupby(['AIRLINE']).mean()
airline_av_delay

# create bar graph of average delay time for each airline
airline_av_delay.sort(['ARRIVAL_DELAY'], ascending=1, inplace=True)
sns.set()
fig, ax = plt.subplots()
airline_av_delay.plot(ax=ax, kind='bar', title='Average Delay (mins)')
ax.set_ylabel('Average Minutes Delayed')
ax.set_xlabel('Airline')
plt.show()
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
The bar graph shows that Alaska Airlines has the shortest delay on average--in fact, the average Alaska Airlines flight arrives before the scheduled arrival time, making it the airline with the best time on record. On the other end, Spirit Airlines has the longest average arrival delay. Interestingly, none of the average arrival delays exceed 15 minutes--for the most part, it seems that US domestic flights have been pretty punctual in 2015! Additionally, almost all of the airlines have a departure delay greater than the arrival delay (with the exception of Hawaiian Airlines), which makes sense, considering that departure delay could be due to a variety of factors related to the departure airport, such as security, late passengers, or late arrivals of other flights to that airport. Despite a greater average departure delay, most airports seem to make up for the delay during the travel time, resulting in a shorter average arrival delay. Second Glance: Consistency Now that we know how the airlines rank in terms of arrival delay, we can look at how many of each airline's flights were cancelled or diverted. Second, we can calculate delay percentages for each airline, i.e. what percent of each airline's total flights were delayed in 2015, to determine which airlines are more likely to be delayed.
# new DataFrame with relevant variables
diverted_cancelled = flights2[['AIRLINE', 'DIVERTED', 'CANCELLED']]
diverted_cancelled.head()

diverted_cancelled = diverted_cancelled.groupby(['AIRLINE']).sum()

# total number of flights scheduled by each airline in 2015
total_flights = flights2[['AIRLINE', 'FLIGHT_NUMBER']].groupby(['AIRLINE']).count()
total_flights.rename(columns={'FLIGHT_NUMBER': 'TOTAL_FLIGHTS'}, inplace=True)
total_flights

# Tangent: for fun, we can see which airlines were dominant in the number of domestic flights
total_flights['TOTAL_FLIGHTS'].plot.pie(figsize=(12,12), rot=45, autopct='%1.0f%%',
                                        title='Market Share of Domestic Flights in 2015 by Airline')
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
It appears that the airlines with the top three largest market share of domestic flights in 2015 were Southwest (22%), Delta (15%), and American Airlines (12%).
# resetting the index to merge the two DataFrames
total_flights2 = total_flights.reset_index()
diverted_cancelled2 = diverted_cancelled.reset_index()

# check
total_flights2
diverted_cancelled2

# calculate divertion and cancellation rates (percentages) for each airline
dc_rates = pd.merge(diverted_cancelled2, total_flights2, on='AIRLINE')
dc_rates['DIVERTION_RATE'] = dc_rates['DIVERTED']/dc_rates['TOTAL_FLIGHTS']
dc_rates['CANCELLATION_RATE'] = dc_rates['CANCELLED']/dc_rates['TOTAL_FLIGHTS']
dc_rates = dc_rates.set_index(['AIRLINE'])
dc_rates

dc_rates[['DIVERTION_RATE','CANCELLATION_RATE']].plot.bar(legend=True, figsize=(13,11), rot=45)
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
Overall, the chance of cancellation or diversion is very low, with the diversion rate almost nonexistent. Flights are rarely diverted, and only in extreme situations due to plane safety failures, attacks, or natural disasters. We could use the flight diversion rate as a proxy for the safety of flying in 2015, and are happy to see this rate way below 0.01%. American Airlines and its partner American Eagle Airlines were the most likely to cancel a flight in 2015, while Hawaiian Airlines and Alaska Airlines were the least likely. (It is interesting to note that the two airlines operating out of the two states not in the continental U.S. are the least likely to be cancelled, despite having to travel the greatest distance.)
# create a DataFrame with all flights that had a positive arrival delay time
delayed = flights2['ARRIVAL_DELAY'] >= 0
pos_delay = flights2[delayed]
pos_delay.head()

# groupby function to determine how many flights had delayed arrival for each airline
pos_delay = pos_delay[['AIRLINE','ARRIVAL_DELAY']].groupby(['AIRLINE']).count()
pos_delay2 = pos_delay.reset_index()

# merge with total_flights to calculate percentage of flights that were delayed for each airline
delay_rates = pd.merge(pos_delay2, total_flights2, on='AIRLINE')
delay_rates['DELAY_RATE'] = delay_rates['ARRIVAL_DELAY']/delay_rates['TOTAL_FLIGHTS']
delay_rates = delay_rates.set_index(['AIRLINE'])
delay_rates.sort(['DELAY_RATE'], ascending=1, inplace=True)
delay_rates.reset_index()

delay_rates[['DELAY_RATE']].plot.bar(legend=True, figsize=(13,11), rot=45)
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
Spirit Airlines has the largest chance of being delayed upon arrival, with Delta Airlines the least likely. However, when we combine divertion rate, cancellation rate, and delay rate, we see that delays account for the majority of flights that didn't operate as scheduled for all airlines across the board.
# combining the two into one DataFrame
all_rates = pd.merge(dc_rates.reset_index(), delay_rates.reset_index()).set_index(['AIRLINE'])
all_rates

all_rates[['DIVERTION_RATE','CANCELLATION_RATE','DELAY_RATE']].plot.bar(legend=True, figsize=(13,10), rot=45)
UG_S17/Madhok-Rinkacs-Yao-USFlightDelays.ipynb
NYUDataBootcamp/Projects
mit
Even better is to represent the same data in a Python list. To create a list, you need to use square brackets ([, ]) and separate each item with a comma. Every item in the list is a Python string, so each is enclosed in quotation marks.
flowers_list = ["pink primrose", "hard-leaved pocket orchid", "canterbury bells",
                "sweet pea", "english marigold", "tiger lily", "moon orchid",
                "bird of paradise", "monkshood", "globe thistle"]

print(type(flowers_list))
print(flowers_list)
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
At first glance, it doesn't look too different, whether you represent the information in a Python string or list. But as you will see, there are a lot of tasks that you can more easily do with a list. For instance, a list will make it easier to:
- get an item at a specified position (first, second, third, etc),
- check the number of items, and
- add and remove items.

Lists: Length
We can count the number of entries in any list with len(), which is short for "length". You need only supply the name of the list in the parentheses.
# The list has ten entries
print(len(flowers_list))
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
Indexing
We can refer to any item in the list according to its position in the list (first, second, third, etc). This is called indexing. Note that Python uses zero-based indexing, which means that:
- to pull the first entry in the list, you use 0,
- to pull the second entry in the list, you use 1, and
- to pull the final entry in the list, you use one less than the length of the list.
print("First entry:", flowers_list[0]) print("Second entry:", flowers_list[1]) # The list has length ten, so we refer to final entry with 9 print("Last entry:", flowers_list[9])
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
Side Note: You may have noticed that in the code cell above, we use a single print() to print multiple items (both a Python string, like "First entry:", and a value from the list, like flowers_list[0]). To print multiple things in Python with a single command, we need only separate them with a comma.

Slicing
You can also pull a segment of a list (for instance, the first three entries or the last two entries). This is called slicing. For instance:
- to pull the first x entries, you use [:x], and
- to pull the last y entries, you use [-y:].
print("First three entries:", flowers_list[:3]) print("Final two entries:", flowers_list[-2:])
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
As you can see above, when we slice a list, it returns a new, shortened list. Removing items Remove an item from a list with .remove(), and put the item you would like to remove in parentheses.
flowers_list.remove("globe thistle")
print(flowers_list)
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
Adding items Add an item to a list with .append(), and put the item you would like to add in parentheses.
flowers_list.append("snapdragon")
print(flowers_list)
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
Lists are not just for strings So far, we have only worked with lists where each item in the list is a string. But lists can have items with any data type, including booleans, integers, and floats. As an example, consider hardcover book sales in the first week of April 2000 in a retail store.
hardcover_sales = [139, 128, 172, 139, 191, 168, 170]
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
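To make the "any data type" point concrete, here is a small extra example (not part of the original lesson) of a list that mixes types:

```python
# A list can mix data types: a string, an integer, a float, and a boolean.
mixed_list = ["hardcover", 139, 24.99, True]
print(type(mixed_list))
print(mixed_list)
```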
Here, hardcover_sales is a list of integers. Similar to when working with strings, you can still do things like get the length, pull individual entries, and extend the list.
print("Length of the list:", len(hardcover_sales)) print("Entry at index 2:", hardcover_sales[2])
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
You can also get the minimum with min() and the maximum with max().
print("Minimum:", min(hardcover_sales)) print("Maximum:", max(hardcover_sales))
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
To add every item in the list, use sum().
print("Total books sold in one week:", sum(hardcover_sales))
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
We can also do similar calculations with slices of the list. In the next code cell, we take the sum from the first five days (sum(hardcover_sales[:5])), and then divide by five to get the average number of books sold in the first five days.
print("Average books sold in first five days:", sum(hardcover_sales[:5])/5)
notebooks/intro_to_programming/raw/tut5.ipynb
Kaggle/learntools
apache-2.0
Python returns the floor of 1 / 2 because we gave it integers to divide, and it then interprets the result as also needing to be an integer (this is Python 2 behavior; in Python 3, / always performs true division and // is floor division). If one of the numbers were a decimal number, we would get a decimal number as the result (really these are floating point numbers, float).
1. / 2
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
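A minimal illustration of the point above (an addition, assuming a Python 2 interpreter, which is what this notebook uses): two integer operands floor the result, while a single float operand promotes the whole expression.

```python
1 / 2        # 0 in Python 2: both operands are integers, so the result is floored
1.0 / 2      # 0.5: one float operand promotes the division to floating point
4 / 3 * 3.0  # 3.0, not 4.0: the integer division 4 / 3 already rounded down to 1
```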
In compound statements it can become more difficult to figure out where possible rounding might occur so be careful when you evaluate statements.
4. + 4.0**(3.0/2)
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Python also understands imaginary numbers:
4 + 3j
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Some of the more advanced mathematical functions are stored in modules. In order to use these functions we must first import them into our notebook and then use them.
import math
math?
math.sqrt(2.0)
math.sin(math.pi / 2.0)

from math import *
sin(pi)
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Variables Variables are defined and assigned to like many other languages.
num_students = 80
room_capacity = 85
(room_capacity - num_students) / room_capacity * 100.0
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Control Flow
if statements are the most basic unit of logic and allow us to conditionally operate on things.
x = 4
if x > 5:
    print "x is greater than 5"
elif x < 5:
    print "x is less than 5"
else:
    print "x is equal to 5"
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
for allows us to repeat tasks over a range of values or objects.
for i in range(5):
    print i

for i in range(3, 7):
    print i

for animal in ['cat', 'dog', 'chinchilla']:
    print animal

for n in range(2, 10):
    is_prime = True
    for x in range(2, n):
        if n % x == 0:
            print n, 'equals', x, '*', n / x
            is_prime = False
            break
    if is_prime:
        print "%s is a prime number" % (n)
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Functions Functions are a fundamental way in any language to break up the code into pieces that can be isolated and repeatedly used based on their input.
def my_print_function(x):
    print x

my_print_function(3)

def my_add_function(a, b):
    return a + b, b

my_add_function(3.0, 5.0)

def my_crazy_function(a, b, c=1.0):
    d = a + b**c
    return d

my_crazy_function(2.0, 3.0), my_crazy_function(2.0, 3.0, 2.0), my_crazy_function(2.0, 3.0, c=2.0)

def my_other_function(a, b, c=1.0):
    return a + b, a + b**c, a + b**(3.0 / 7.0)

my_other_function(2.0, 3.0, c=2.0)

def fibonacci(n):
    """Return a list of the Fibonacci sequence up to n"""
    values = [0, 1]
    while values[-1] <= n:
        values.append(values[-1] + values[-2])
    print values
    return values

fibonacci(100)
fibonacci?
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
NumPy The most important part of NumPy is the specification of an array object called an ndarray. This object, as its name suggests, stores array-like information in multiple dimensions. These objects allow a programmer to access the data in a multitude of different ways, as well as to create common types of arrays and operate on these arrays easily.
import numpy
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Constructors Ways to make arrays in NumPy.
my_array = numpy.array([[1, 2], [3, 4]])
print my_array

numpy.linspace(-1, 1, 10)
numpy.zeros([3, 3])
numpy.ones([2, 3, 2])
numpy.empty([2, 3])
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Access How do we access data in an array?
my_array[0, 1]
my_array[:, 0]

my_vec = numpy.array([[1], [2]])
print my_vec

numpy.dot(my_array, my_vec)
numpy.cross?
my_array * my_vec
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
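A couple of additional access patterns (an added aside, not from the original notebook): slices and boolean masks also work on ndarrays.

```python
my_array[1, :]          # second row of the 2x2 array defined above
my_array[my_array > 2]  # boolean mask selects the elements greater than 2
```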
Manipulations How do we manipulate arrays beyond indexing into them?
A = numpy.array([[1, 2, 3], [4, 5, 6]])
print "A Shape = ", A.shape
print A

B = A.reshape((6, 1))
print "A Shape = ", A.shape
print "B Shape = ", B.shape
print B

numpy.tile(A, (2, 2))
A.transpose()

A = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print A
print A.shape

B = numpy.arange(1, 10)
print B
print B.reshape((3, 3))
B.reshape?

C = B.reshape((3, 3))
print A * C
numpy.dot(A, C)
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Mathematical Functions
x = numpy.linspace(-2.0 * numpy.pi, 2.0 * numpy.pi, 62)
y = numpy.sin(x)
print y

x = numpy.linspace(-1, 1, 20)
numpy.sqrt(x)

x = numpy.linspace(-1, 1, 20, dtype=complex)
numpy.sqrt(x)
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
Linear Algebra Some functions for linear algebra available in NumPy. Full implementation in scipy.linalg.
numpy.linalg.norm(x)
numpy.linalg.norm?

M = numpy.array([[0, 2], [8, 0]])
b = numpy.array([1, 2])
print M
print b

x = numpy.linalg.solve(M, b)
print x

lamda, V = numpy.linalg.eig(M)
print lamda
print V
01_python.ipynb
eramirem/numerical-methods-pdes
cc0-1.0
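As a quick sanity check (an added illustration, not in the original notebook), the solution should satisfy M x = b, and each eigenpair should satisfy M v = lambda v; numpy.allclose compares arrays up to floating-point tolerance.

```python
numpy.allclose(numpy.dot(M, x), b)
numpy.allclose(numpy.dot(M, V[:, 0]), lamda[0] * V[:, 0])
```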
Specify the function to minimize as a simple Python function.<br> We have implemented some test functions that can be selected using the function selector; however, you are free to implement your own functions.<br> Right now, we have implemented the following functions:
1. $\frac{1}{2}x^2$, which is convex and has a global minimum at $x=0$
2. $\frac{1}{2}x^3$, which has no global minimum, but an inflection point at $x=0$
3. $x^2+x^3$, which has a local minimum at $x=0$ and a local maximum at $x=-\frac{2}{3}$

The derivative is automatically computed using the autograd library, which returns a function that evaluates the gradient of myfun.
function_select = 3

def myfun(x):
    functions = {
        1: 0.5*x**2,
        2: 0.5*x**3,
        3: x**2+x**3
    }
    return functions.get(function_select)

if autograd_available:
    gradient = egrad(myfun)
else:
    def gradient(x):
        functions = {
            1: x,
            2: 1.5*x**2,
            3: 2*x+3*x**2
        }
        return functions.get(function_select)
mloc/ch1_Preliminaries/gradient_descent.ipynb
kit-cel/lecture-examples
gpl-2.0
Plot the function and its derivative
x = np.linspace(-3, 3, 100)
fy = myfun(x)
gy = gradient(x)

plt.figure(1, figsize=(10,6))
plt.rcParams.update({'font.size': 14})
plt.plot(x, fy, x, gy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y")
plt.legend(["$f(x)$", "$f^\prime(x)$"])
plt.show()
mloc/ch1_Preliminaries/gradient_descent.ipynb
kit-cel/lecture-examples
gpl-2.0
Simple gradient descent strategy using only sign of the derivative Carry out the simple gradient descent strategy by using only the sign of the gradient \begin{equation} x_i = x_{i-1} - \epsilon\cdot \mathrm{sign}(f^\prime(x_{i-1})) \end{equation}
epsilon = 0.5
start = 3.75

points = []
while abs(gradient(start)) > 1e-8 and len(points) < 50:
    points.append( (start, myfun(start)) )
    start = start - epsilon*np.sign(gradient(start))

plt.figure(1, figsize=(15,6))
plt.rcParams.update({'font.size': 14})

plt.subplot(1,2,1)
plt.scatter(list(zip(*points))[0], list(zip(*points))[1], c=range(len(points),0,-1), cmap='gray', s=40, edgecolors='k')
plt.plot(x, fy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y=f(x)")

plt.subplot(1,2,2)
plt.plot(range(0,len(points)), list(zip(*points))[0])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("x_i")
plt.show()
mloc/ch1_Preliminaries/gradient_descent.ipynb
kit-cel/lecture-examples
gpl-2.0
Gradient descent Carry out the final gradient descent strategy, which is given by \begin{equation} x_i = x_{i-1} - \epsilon\cdot f^\prime(x_{i-1}) \end{equation}
epsilon = 0.01
start = 3.75

points = []
while abs(gradient(start)) > 1e-8 and len(points) < 500:
    points.append( (start, myfun(start)) )
    start = start - epsilon*gradient(start)

plt.figure(1, figsize=(15,6))
plt.rcParams.update({'font.size': 14})

plt.subplot(1,2,1)
plt.scatter(list(zip(*points))[0], list(zip(*points))[1], c=range(len(points),0,-1), cmap='gray', s=40, edgecolors='k')
plt.plot(x, fy)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("y=f(x)")

plt.subplot(1,2,2)
plt.plot(range(0,len(points)), list(zip(*points))[0])
plt.grid(True)
plt.xlabel("Step i")
plt.ylabel("x_i")
plt.show()
mloc/ch1_Preliminaries/gradient_descent.ipynb
kit-cel/lecture-examples
gpl-2.0
Here, we provide an interactive tool to play around yourself with parameters of the gradient descent.
def interactive_gradient_descent(start, epsilon, maximum_steps, xmin, xmax):
    points = []
    # assume 1e-10 is about zero
    while abs(gradient(start)) > 1e-10 and len(points) < maximum_steps:
        points.append( (start, myfun(start)) )
        start = start - epsilon*gradient(start)

    plt.figure(1, figsize=(15,6))
    plt.rcParams.update({'font.size': 14})

    plt.subplot(1,2,1)
    plt.scatter(list(zip(*points))[0], list(zip(*points))[1], c=range(len(points),0,-1), cmap='gray', s=40, edgecolors='k')
    px = np.linspace(xmin, xmax, 1000)
    pfy = myfun(px)
    plt.plot(px, pfy)
    plt.autoscale(enable=True, tight=True)
    plt.xlim(xmin, xmax)
    plt.grid(True)
    plt.xlabel("x")
    plt.ylabel("y=f(x)")

    plt.subplot(1,2,2)
    plt.plot(range(0,len(points)), list(zip(*points))[0])
    plt.grid(True)
    plt.xlabel("Step i")
    plt.ylabel("x_i")
    plt.show()

epsilon_values = np.arange(0.0, 0.1, 0.0001)
style = {'description_width': 'initial'}
interactive_update = interactive(interactive_gradient_descent, \
    epsilon = widgets.SelectionSlider(options=[("%g"%i,i) for i in epsilon_values], value=0.01, continuous_update=False, description='epsilon', layout=widgets.Layout(width='50%'), style=style), \
    start = widgets.FloatSlider(min=-5.0, max=5.0, step=0.0001, value=3.7, continuous_update=False, description='Start x', layout=widgets.Layout(width='75%'), style=style), \
    maximum_steps = widgets.IntSlider(min=20, max=500, value=200, continuous_update=False, description='Number steps', layout=widgets.Layout(width='50%'), style=style), \
    xmin = widgets.FloatSlider(min=-10, max=0, step=0.1, value=-5, continuous_update=False, description='Plot negative x limit', layout=widgets.Layout(width='50%'), style=style), \
    xmax = widgets.FloatSlider(min=0, max=10, step=0.1, value=5, continuous_update=False, description='Plot positive x limit', layout=widgets.Layout(width='50%'), style=style))

output = interactive_update.children[-1]
output.layout.height = '400px'
interactive_update
mloc/ch1_Preliminaries/gradient_descent.ipynb
kit-cel/lecture-examples
gpl-2.0
Express Deep Learning in Python - Part 1 Do you have everything ready? Check the part 0! How fast can you build an MLP? In this first part we will see how to implement the basic components of a MultiLayer Perceptron (MLP) classifier, most commonly known as a Neural Network. We will be working with Keras: a very simple library for deep learning. At this point, you may know how machine learning in general is applied and have some intuitions about how deep learning works, and more importantly, why it works. Now it's time to run some experiments, and for that you need to be as quick and flexible as possible. Keras is an ideal tool for prototyping and doing your first approximations to a Machine Learning problem. On the one hand, Keras is integrated with two very powerful backends that support GPU computation, Tensorflow and Theano. On the other hand, it has a level of abstraction high enough to be simple to understand and easy to use. For example, it uses a very similar interface to the sklearn library that you have seen before, with fit and predict methods. Now let's get to work with an example: 1 - The libraries First let's check we have installed everything we need for this tutorial:
import numpy
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.datasets import mnist
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
2 - The dataset For this quick tutorial we will use the (very popular) MNIST dataset. This is a dataset of 70K images of handwritten digits. Our task is to recognize which digit is displayed in the image: a classification problem. You have seen in previous courses how to train and evaluate a classifier, so we won't go into further detail about supervised learning. The inputs to the MLP classifier are going to be 28x28 pixel images represented as matrices. The output will be one of ten classes (0 to 9), representing the predicted number written in the image.
batch_size = 128
num_classes = 10
epochs = 10
TRAIN_EXAMPLES = 60000
TEST_EXAMPLES = 10000

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# reshape the dataset to convert the examples from 2D matrixes to 1D arrays.
x_train = x_train.reshape(60000, 28*28)
x_test = x_test.reshape(10000, 28*28)

# to make quick runs, select a smaller set of images.
train_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)
x_train = x_train[train_mask, :].astype('float32')
y_train = y_train[train_mask]
test_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)
x_test = x_test[test_mask, :].astype('float32')
y_test = y_test[test_mask]

# normalize the input
x_train /= 255
x_test /= 255

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
3 - The model The concept of Deep Learning is very broad, but the core of it is the use of classifiers with multiple hidden layers of neurons, or smaller classifiers. We all know the classical image of the simplest possible deep model: a neural network with a single hidden layer. credits http://www.extremetech.com/wp-content/uploads/2015/07/NeuralNetwork.png In theory, this model can represent any function TODO add a citation here. We will see how to implement this network in Keras, and during the second part of this tutorial how to add more features to create a deep and powerful classifier. First, Deep Learning models are concatenations of Layers. This is represented in Keras with the Sequential model. We create the Sequential instance as an "empty carcass" and then we fill it with different layers. The most basic type of Layer is the Dense layer, where each neuron in the input is connected to each neuron in the following layer, like we can see in the image above. Internally, a Dense layer has two variables: a matrix of weights and a vector of bias, but the beauty of Keras is that you don't need to worry about that. All the variables will be correctly created, initialized, trained and possibly regularized for you. Each layer needs to know or be able to calculate at least three things: The size of the input: the number of neurons in the incoming layer. For the first layer this corresponds to the size of each example in our dataset. The next layers can calculate their input size using the output of the previous layer, so we generally don't need to tell them this. The type of activation: this is the function that is applied to the output of each neuron. We will talk about this in detail later. The size of the output: the number of neurons in the next layer.
model = Sequential()

# Input to hidden layer
model.add(Dense(512, activation='relu', input_shape=(784,)))
# Hidden to output layer
model.add(Dense(10, activation='softmax'))
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
We have successfully built a Neural Network! We can print a description of our architecture using the following command:
model.summary()
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
Compiling a model in Keras A very appealing aspect of Deep Learning frameworks is that they solve the implementation of complex algorithms such as Backpropagation. For those with some numerical optimization notions, minimization algorithms often involve the calculation of first derivatives. Neural Networks are huge functions full of non-linearities, and differentiating them is a... nightmare. For this reason, models need to be "compiled". In this stage, the backend builds complex computational graphs, and we don't have to worry about derivatives or gradients. In Keras, a model can be compiled with the method .compile(). The method takes two parameters: loss and optimizer. The loss is the function that calculates how much error we have in each prediction example, and there are a lot of implemented alternatives ready to use. We will talk more about this; for now we use the standard categorical crossentropy. As you can see, we can simply pass a string with the name of the function and Keras will find the implementation for us. The optimizer is the algorithm to minimize the value of the loss function. Again, Keras has many optimizers available. The basic one is the Stochastic Gradient Descent. We pass a third argument to the compile method: the metric. Metrics are measures or statistics that allow us to keep track of the classifier's performance. It's similar to the loss, but the results of the metrics are not used by the optimization algorithm. Besides, metrics are always comparable, while the loss function can take arbitrary values depending on your problem. Keras will calculate metrics and loss both on the training and the validation dataset. That way, we can monitor how other performance metrics vary when the loss is optimized and detect anomalies like overfitting.
model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.SGD(),
              metrics=['accuracy'])
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
[OPTIONAL] We can now visualize the architecture of our model using the vis_util tools. It's a very schematic view, but you can check it's not that different from the image we saw above (and that we intended to replicate). If you can't execute this step don't worry, you can still finish the tutorial. This step requires graphviz and pydotplus libraries.
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot

SVG(model_to_dot(model).create(prog='dot', format='svg'))
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
Training Once the model is compiled, everything is ready to train the classifier. Keras' Sequential model has a similar interface as the sklearn library that you have seen before, with fit and predict methods. As usual, we need to pass our training examples and their corresponding labels. Other parameters needed to train a neural network are the size of the batch and the number of epochs. We have two ways of specifying a validation dataset: we can pass the tuple of values and labels directly with the validation_data parameter, or we can pass a proportion to the validation_split argument and Keras will split the training dataset for us. To correctly train our model we need to pass two important parameters to the fit function:
* batch_size: is the number of examples to use in each "minibatch" iteration of the Stochastic Gradient Descent algorithm. This is necessary for most optimization algorithms. The size of the batch is important because it defines how fast the algorithm will perform each iteration and also how much memory will be used to load each batch (possibly in the GPU).
* epochs: is the number of passes through the entire dataset. We need enough epochs for the classifier to converge, but we need to stop before the classifier starts overfitting.
history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test));
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
We have trained our model! Additionally, Keras has printed out a lot of information about the training, thanks to the parameter verbose=1 that we passed to the fit function. We can see how much time each iteration took, and the value of the loss and metrics on the training and the validation dataset. The same information is stored in the output of the fit method, which sadly is not well documented. We can see it in a pretty table with pandas.
import pandas
pandas.DataFrame(history.history)
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
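Beyond the table, it often helps to plot these curves; here is a minimal sketch (an addition, assuming the history object returned by fit above and matplotlib installed):

```python
import matplotlib.pyplot as plt

# Plot training vs. validation loss per epoch from the fit history above.
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```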
Why is this useful? This will give you an insight on how well your network is optimizing the loss, and how much it's actually learning. When training, you need to keep track of two things: Your network is actually learning. This means your training loss is decreasing in average. If it's going up or it's stuck for more than a couple of epochs, it is safe to stop your training and try again. Your network is not overfitting. It's normal to have a gap between the validation and the training metrics, but they should decrease more or less at the same rate. If you see that your metrics for training are getting better but your validation metrics are getting worse, it is also a good point to stop and fix your overfitting problem. Evaluation Keras gives us a very useful method to evaluate the current performance called evaluate (surprise!). Evaluate will return the value of the loss function and all the metrics that we pass to the model when calling compile.
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
As you can see, using only 10 training epochs we get surprisingly good accuracy on the training and test datasets. If you want to take a deeper look into your model, you can obtain the predictions as a vector and then use general purpose tools to explore the results. For example, we can plot the confusion matrix to see the most common errors.
prediction = model.predict_classes(x_test)

import seaborn as sns
from sklearn.metrics import confusion_matrix

sns.set_style('white')
sns.set_palette('colorblind')

matrix = confusion_matrix(numpy.argmax(y_test, 1), prediction)
figure = sns.heatmap(matrix / matrix.astype(numpy.float).sum(axis=1),
                     xticklabels=range(10), yticklabels=range(10),
                     cmap=sns.cubehelix_palette(8, as_cmap=True))
deep_learning_tutorial_1.ipynb
PLN-FaMAF/DeepLearningEAIA
bsd-3-clause
<br> <br> The two plots above nicely confirm what we have discussed before: where the PCA accounts for the most variance in the whole dataset, the LDA gives us the axes that account for the most variance between the individual classes. <br> <br> LDA via scikit-learn [back to top] Now, after we have seen how a Linear Discriminant Analysis works using a step-by-step approach, there is also a more convenient way to achieve the same via the LDA class implemented in the scikit-learn machine learning library.
from sklearn.lda import LDA

# LDA
sklearn_lda = LDA(n_components=2)
X_lda_sklearn = sklearn_lda.fit_transform(X, y)

def plot_scikit_lda(X, title, mirror=1):
    ax = plt.subplot(111)
    for label, marker, color in zip(
            range(1, 4), ('^', 's', 'o'), ('blue', 'red', 'green')):

        plt.scatter(x=X[:, 0][y == label]*mirror,
                    y=X[:, 1][y == label],
                    marker=marker,
                    color=color,
                    alpha=0.5,
                    label=label_dict[label]
                    )

    plt.xlabel('LD1')
    plt.ylabel('LD2')

    leg = plt.legend(loc='upper right', fancybox=True)
    leg.get_frame().set_alpha(0.5)
    plt.title(title)

    # hide axis ticks
    plt.tick_params(axis="both", which="both", bottom="off", top="off",
                    labelbottom="on", left="off", right="off", labelleft="on")

    # remove axis spines
    ax.spines["top"].set_visible(False)
    ax.spines["right"].set_visible(False)
    ax.spines["bottom"].set_visible(False)
    ax.spines["left"].set_visible(False)

    plt.grid()
    plt.tight_layout()
    plt.show()

plot_step_lda()
plot_scikit_lda(X_lda_sklearn, title='Default LDA via scikit-learn', mirror=(-1))
dimensionality_reduction/projection/linear_discriminant_analysis.ipynb
murali-munna/pattern_classification
gpl-3.0
Factorial HMM Example synthetic data: 20 different "devices", each with different power consumptions, turning on and off following separate Markov models
devices = factorial_hmm.gen_devices()

T = 50
np.random.seed(20)
X, Y = factorial_hmm.gen_dataset(devices, T)

plt.figure(figsize=(15,3.5))
plt.plot(Y)

plt.figure(figsize=(15,10))
plt.imshow((X*devices).T, interpolation='None', aspect=1);
plt.yticks(np.arange(len(devices)), devices);

print len(devices), 2**len(devices)

trace_train = []
trace_validation = []
dist_est = cde.ConditionalBinaryMADE(len(devices)+1, len(devices), H=300, num_layers=4)
if USE_GPU:
    dist_est.cuda()
dist_est.load_state_dict(torch.load('../saved/trained_hmm_params.rar'))
notebooks/Factorial-HMM.ipynb
tbrx/compiled-inference
gpl-3.0
Test out learned distribution inside of SMC We'll compare it against a baseline of "bootstrap" SMC, which proposes from the transition dynamics of the individual HMMs.
X_hat_bootstrap, ancestry_bootstrap, ESS_bootstrap = \
    factorial_hmm.run_smc(devices, Y, 500, factorial_hmm.baseline_proposal, verbose=False)
Y_hat_bootstrap = np.dot(X_hat_bootstrap, devices)

nn_proposal = factorial_hmm.make_nn_proposal(dist_est)

X_hat_nn, ancestry_nn, ESS_nn = \
    factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
Y_hat_nn = np.dot(X_hat_nn, devices)

plt.hist(ESS_bootstrap, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgeColor='k')
plt.hist(ESS_nn, histtype='stepfilled', linewidth=2, alpha=0.5, bins=20, edgeColor='k')
plt.xlim([0, plt.xlim()[1]])
plt.legend(['bootstrap', 'nnsmc'])
plt.title('Histogram of effective sample size of SMC filtering distribution');

plt.figure(figsize=(16,4))
plt.title('Ancestral paths for bootstrap proposals (blue) and nn (green)')
plt.plot(ancestry_bootstrap.T, color=sns.color_palette()[0]);
plt.plot(ancestry_nn.T, color=sns.color_palette()[1]);
plt.ylim(0, ancestry_nn.shape[0])
plt.xlim(0, T-1);

plt.figure(figsize=(14,3.25))
plt.plot(np.dot(X_hat_nn, devices).T, color=sns.color_palette()[1], alpha=0.1)
plt.plot(np.arange(len(Y)), Y, 'k--')
plt.xlim([0, T-1])
plt.xlabel('Time step')
plt.ylabel('Total energy usage')
notebooks/Factorial-HMM.ipynb
tbrx/compiled-inference
gpl-3.0
Look at rate of path coalescence
ANC_PRIOR = []
ANC_NN = []

def count_uniques(ancestry):
    K, T = ancestry.shape
    counts = np.empty((T,), dtype=int)
    for t in xrange(T):
        counts[t] = len(np.unique(ancestry[:,t]))
    return counts

def run_iter():
    X, Y = factorial_hmm.gen_dataset(devices, T=30)
    X_particles_baseline, ancestry_baseline, _ = \
        factorial_hmm.run_smc(devices, Y, 100, factorial_hmm.baseline_proposal, verbose=False)
    print "smc complete"
    X_particles, ancestry_nnsmc, _ = \
        factorial_hmm.run_smc(devices, Y, 500, nn_proposal, verbose=False)
    print "nn complete"
    ANC_PRIOR.append(count_uniques(ancestry_baseline))
    ANC_NN.append(count_uniques(ancestry_nnsmc))
    return X, Y

for i in xrange(10):
    print "iteration", i+1
    X_tmp, Y_tmp = run_iter()

plt.figure(figsize=(8,3.5))
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_PRIOR, 0));
plt.plot(np.arange(len(X_tmp)), np.mean(ANC_NN, 0));
plt.legend(['Bootstrap SMC', 'NN-SMC'], loc='upper left')

pm = np.mean(ANC_PRIOR, 0)
psd = np.std(ANC_PRIOR, 0)
safe_lb = (pm - psd) * (pm - psd > 1.0) + (pm - psd <= 1.0)
plt.fill_between(np.arange(len(X_tmp)), safe_lb, pm+psd, alpha=0.25, color=sns.color_palette()[0]);

pm = np.mean(ANC_NN, 0)
psd = np.std(ANC_NN, 0)
plt.fill_between(np.arange(len(X_tmp)), pm-psd, pm+psd, alpha=0.25, color=sns.color_palette()[1]);

plt.semilogy();
plt.xlabel('Time step')
plt.ylabel('Surviving paths')
plt.ylim(1, 100)
plt.xlim(0, len(X_tmp)-1)
plt.tight_layout()
notebooks/Factorial-HMM.ipynb
tbrx/compiled-inference
gpl-3.0
Create a survey plan
obs = {'time': [], 'field': [], 'band': [], 'maglim': [], 'skynoise': [], 'comment': [], 'zp': []}

mjd_start = 58239.5
for k in range(0, 61, 3):
    obs['time'].extend([mjd_start + k + l/24. for l in range(3)])
    obs['field'].extend([683 for l in range(3)])
    obs['band'].extend(['ztfg', 'ztfr', 'ztfi'])
    obs['maglim'].extend([22 for l in range(3)])
    obs['zp'].extend([30 for l in range(3)])
    obs['comment'].extend(['' for l in range(3)])

obs['skynoise'] = 10**(-0.4 * (np.array(obs['maglim']) - 30)) / 5

plan = simsurvey.SurveyPlan(time=obs['time'],
                            band=obs['band'],
                            skynoise=obs['skynoise'],
                            obs_field=obs['field'],
                            obs_ccd=None,
                            zp=obs['zp'],
                            comment=obs['comment'],
                            fields=fields,
                            ccds=ccds
                            )

mjd_range = (plan.pointings['time'].min() - 30, plan.pointings['time'].max() + 30)

plan.pointings
simsurvey_demo.ipynb
ZwickyTransientFacility/simsurvey-examples
bsd-3-clause
Number of injections: you can either fix the number of generated transients or follow a rate. The rate should always be specified, even when ntransient != None.
ntransient = 1000
rate = 1000 * 1e-6  # Mpc-3 yr-1

dec_range = (plan.pointings['Dec'].min()-10, plan.pointings['Dec'].max()+10)
ra_range = (plan.pointings['RA'].min()-10, plan.pointings['RA'].max()+10)

tr = simsurvey.get_transient_generator([0, 0.05],
                                       ntransient=ntransient,
                                       ratefunc=lambda z: rate,
                                       dec_range=dec_range,
                                       ra_range=ra_range,
                                       mjd_range=(mjd_range[0], mjd_range[1]),
                                       transientprop=transientprop,
                                       sfd98_dir=sfd98_dir
                                       )
simsurvey_demo.ipynb
ZwickyTransientFacility/simsurvey-examples
bsd-3-clause
SimulSurvey
# With sourcenoise==False, the flux error will correspond to the skynoise.
# Sourcenoise==True adds an extra term in the flux errors from the brightness of the source.
survey = simsurvey.SimulSurvey(generator=tr, plan=plan, n_det=2, threshold=5., sourcenoise=False)

lcs = survey.get_lightcurves(
    progress_bar=True, notebook=True  # If you get an error because of the progress_bar, delete this line.
)

len(lcs.lcs)
simsurvey_demo.ipynb
ZwickyTransientFacility/simsurvey-examples
bsd-3-clause
Save
lcs.save('lcs.pkl')
simsurvey_demo.ipynb
ZwickyTransientFacility/simsurvey-examples
bsd-3-clause
Output
- lcs.lcs contains the detected lightcurves
- lcs.meta contains parameters for the detected lightcurves
- lcs.meta_full contains parameters for all the injections within the observed area
- lcs.meta_rejected contains parameters for all the injections within the observed area that were not detected
- lcs.meta_notobserved contains parameters for all the injections outside the observed area
_ = sncosmo.plot_lc(lcs[0])

# Redshift distribution
plt.hist(lcs.meta_full['z'], lw=1, histtype='step', range=(0,0.05), bins=20, label='all')
plt.hist(lcs.meta['z'], lw=2, histtype='step', range=(0,0.05), bins=20, label='detected')
plt.xlabel('Redshift', fontsize='x-large')
plt.ylabel(r'$N_{KNe}$', fontsize='x-large')
plt.xlim((0, 0.05))
plt.legend()

plt.hist(lcs.stats['p_det'], lw=2, histtype='step', range=(0,10), bins=20)
plt.xlabel('Detection phase (observer-frame)', fontsize='x-large')
_ = plt.ylabel(r'$N_{KNe}$', fontsize='x-large')

plt.figure()
ax = plt.axes()
ax.grid()

ax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], marker='*', label='meta_notobserved', alpha=0.7)
ax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], marker='*', label='meta_rejected', alpha=0.7)
ax.scatter(lcs.meta['ra'], lcs.meta['dec'], marker='*', label='meta_detected', alpha=0.7)

#ax.legend(loc='center left', bbox_to_anchor=(0.9, .5))
ax.legend(loc=0)
ax.set_ylabel('DEC (deg)')
ax.set_xlabel('RA (deg)')

plt.tight_layout()
plt.show()

plt.figure()
ax = plt.axes(
    [0.05, 0.05, 0.9, 0.9],
    projection='geo degrees mollweide'
)
ax.grid()

ax.scatter(lcs.meta_notobserved['ra'], lcs.meta_notobserved['dec'], transform=ax.get_transform('world'), marker='*', label='meta_notobserved', alpha=0.7)
ax.scatter(lcs.meta_rejected['ra'], lcs.meta_rejected['dec'], transform=ax.get_transform('world'), marker='*', label='meta_rejected', alpha=0.7)
ax.scatter(lcs.meta['ra'], lcs.meta['dec'], transform=ax.get_transform('world'), marker='*', label='meta_detected', alpha=0.7)

#ax.legend(loc='center left', bbox_to_anchor=(0.9, .5))
ax.legend(loc=0)
ax.set_ylabel('DEC (deg)')
ax.set_xlabel('RA (deg)')

plt.tight_layout()
plt.show()
simsurvey_demo.ipynb
ZwickyTransientFacility/simsurvey-examples
bsd-3-clause
NumPy multiplication differs from the definition of the matrix product, i.e. the inner product (dot product). Therefore, in that case you must use the separate dot command or method.
x = np.arange(10)
y = np.arange(10)

x * y
x
y

np.dot(x, y)
x.dot(y)
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
Broadcasting also applies to higher-dimensional cases. See the following figure. <img src="https://datascienceschool.net/upfiles/dbd3775c3b914d4e8c6bbbb342246b6a.png" style="width: 60%; margin: 0 auto 0 auto;">
np.tile(np.arange(0, 40, 10), (3, 1))

a = np.tile(np.arange(0, 40, 10), (3, 1)).T
a

b = np.array([0, 1, 2])
b

a + b

a = np.arange(0, 40, 10)[:, np.newaxis]
a
a + b
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
Dimension reduction operations. If we treat each row of an ndarray as a data set and take its mean, we get one number per row. For example, taking the row-wise mean of a 10x5 two-dimensional array gives a one-dimensional vector with 10 numbers. Operations like this are called dimension reduction operations. ndarray supports the following dimension reduction commands or methods:
- Minimum/maximum: min, max, argmin, argmax
- Statistics: sum, mean, median, std, var
- Boolean: all, any
x = np.array([1, 2, 3, 4])
x
np.sum(x)
x.sum()

x = np.array([1, 3, 2, 4])
x.min(), np.min(x)
x.max()
x.argmin()  # index of minimum
x.argmax()  # index of maximum

x = np.array([1, 2, 3, 1])
x.mean()
np.median(x)

np.all([True, True, False])
np.any([True, True, False])

a = np.zeros((100, 100), dtype=np.int)
a
np.any(a == 0)
np.any(a != 0)
np.all(a == 0)

a = np.array([1, 2, 3, 2])
b = np.array([2, 2, 3, 2])
c = np.array([6, 4, 4, 5])
((a <= b) & (b <= c)).all()
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
When the operand has two or more dimensions, the axis argument indicates along which dimension the computation is performed. axis=0 means a column-wise operation, axis=1 a row-wise operation, and so on. The default value is 0. <img src="https://datascienceschool.net/upfiles/edfaf93a7f124f359343d1dcfe7f29fc.png", style="margin: 0 auto 0 auto;">
x = np.array([[1, 1], [2, 2]])
x
x.sum()
x.sum(axis=0)  # columns (first dimension)
x.sum(axis=1)  # rows (second dimension)

y = np.array([[1, 2, 3], [5, 6, 1]])
np.median(y, axis=-1)  # last axis
y
np.median(y, axis=1)
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
Sorting. You can use the sort command or method to sort the elements of an array by size and produce a new array. For two or more dimensions, the axis argument again determines the direction.
a = np.array([[4, 3, 5], [1, 2, 1]])
a
np.sort(a)
np.sort(a, axis=1)
np.sort(a, axis=0)
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
The sort method is an in-place method that modifies the data of the object itself, so use it with care.
a
a.sort(axis=1)
a
통계, 머신러닝 복습/160516월_3일차_기초 선형 대수 1 - 행렬의 정의와 연산 Basic Linear Algebra(NumPy)/3.NumPy 연산.ipynb
kimkipyo/dss_git_kkp
mit
Classify structured data using Keras preprocessing layers
<table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/tutorials/structured_data/preprocessing_layers"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td> <td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td> <td> <img><a>Download notebook</a> </td> </table>

This tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). You will use Keras to define the model, and preprocessing layers as a bridge to map the columns in the CSV to features used to train the model. This tutorial contains complete code to:

Load a CSV file using Pandas.
Build an input pipeline to batch and shuffle the rows using tf.data.
Map the columns in the CSV to features used to train the model using Keras preprocessing layers.
Build, train, and evaluate a model using Keras.

Note: This tutorial is similar to Classify structured data with feature columns. This version uses the new experimental Keras preprocessing layers instead of tf.feature_column. Keras preprocessing layers are more intuitive and can easily be included inside your model to simplify deployment.

Dataset
You will use a simplified version of the PetFinder dataset. There are several thousand rows in the CSV. Each row describes a pet, and each column describes an attribute. You will use this information to predict whether a pet will be adopted.

Following is a description of this dataset. Notice that it contains both numerical and categorical columns. There is also a free-text column which you will not use in this tutorial.

Column | Description | Feature Type | Data Type
--- | --- | --- | ---
Type | Type of animal (Dog, Cat) | Categorical | string
Age | Age of the pet | Numerical | integer
Breed1 | Primary breed of the pet | Categorical | string
Color1 | Color 1 of the pet | Categorical | string
Color2 | Color 2 of the pet | Categorical | string
MaturitySize | Size at maturity | Categorical | string
FurLength | Fur length | Categorical | string
Vaccinated | Pet has been vaccinated | Categorical | string
Sterilized | Pet has been sterilized | Categorical | string
Health | Health condition | Categorical | string
Fee | Adoption fee | Numerical | integer
Description | Profile write-up for this pet | Text | string
PhotoAmt | Total uploaded photos for this pet | Numerical | integer
AdoptionSpeed | Speed of adoption | Categorical | integer

Import TensorFlow and other libraries
!pip install -q sklearn

import numpy as np
import pandas as pd
import tensorflow as tf

from sklearn.model_selection import train_test_split
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

tf.__version__
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Use Pandas to create a dataframe. Pandas is a Python library with many helpful utilities for loading and working with structured data. You will use Pandas to download the dataset from a URL and load it into a dataframe.
import pathlib

dataset_url = 'http://storage.googleapis.com/download.tensorflow.org/data/petfinder-mini.zip'
csv_file = 'datasets/petfinder-mini/petfinder-mini.csv'

tf.keras.utils.get_file('petfinder_mini.zip', dataset_url,
                        extract=True, cache_dir='.')
dataframe = pd.read_csv(csv_file)

dataframe.head()
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Create the target variable. The task in the Kaggle competition is to predict the speed at which a pet will be adopted (e.g., in the first week, the first month, the first three months, and so on). Let's simplify this for our tutorial. Here, you will transform it into a binary classification problem and simply predict whether the pet was adopted or not. After modifying the label column, 0 will indicate the pet was not adopted, and 1 will indicate it was.
# In the original dataset "4" indicates the pet was not adopted.
dataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)

# Drop un-used columns.
dataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Split the dataframe into train, validation, and test sets. The dataset you downloaded was a single CSV file. You will split this into train, validation, and test sets.
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Create an input pipeline using tf.data. Next, you will wrap the dataframes with tf.data in order to shuffle and batch the data. If you were working with a very large CSV file (so large that it does not fit into memory), you would use tf.data to read it from disk directly. That is not covered in this tutorial.
# A utility method to create a tf.data dataset from a Pandas Dataframe
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
  dataframe = dataframe.copy()
  labels = dataframe.pop('target')
  ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
  if shuffle:
    ds = ds.shuffle(buffer_size=len(dataframe))
  ds = ds.batch(batch_size)
  ds = ds.prefetch(batch_size)
  return ds
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Now that you have created the input pipeline, let's call it to see the format of the data it returns. You have used a small batch size to keep the output readable.
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)

[(train_features, label_batch)] = train_ds.take(1)
print('Every feature:', list(train_features.keys()))
print('A batch of ages:', train_features['Age'])
print('A batch of targets:', label_batch)
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
You can see that the dataset (built from the dataframe) returns a dictionary of column names that maps to column values from rows in the dataframe.

Demonstrate the use of preprocessing layers. The Keras preprocessing layers API allows you to build Keras-native input processing pipelines. You will use 3 preprocessing layers to demonstrate the feature preprocessing code:
Normalization - feature-wise normalization of the data.
CategoryEncoding - category encoding layer.
StringLookup - maps strings from a vocabulary to integer indices.
IntegerLookup - maps integers from a vocabulary to integer indices.
You can find a list of available preprocessing layers here.

Numeric columns. For each numeric feature, you will use a Normalization() layer to make sure the mean of each feature is 0 and its standard deviation is 1.
The get_normalization_layer function returns a layer which applies feature-wise normalization to numerical features.
def get_normalization_layer(name, dataset):
  # Create a Normalization layer for our feature.
  normalizer = preprocessing.Normalization(axis=None)

  # Prepare a Dataset that only yields our feature.
  feature_ds = dataset.map(lambda x, y: x[name])

  # Learn the statistics of the data.
  normalizer.adapt(feature_ds)

  return normalizer

photo_count_col = train_features['PhotoAmt']
layer = get_normalization_layer('PhotoAmt', train_ds)
layer(photo_count_col)
site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
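The cell above only covers a numeric column. For the categorical side mentioned in the list above, here is a rough, hedged sketch (the helper name and approach are mine, not the tutorial's own code, which relies on the dedicated category-encoding layer): a StringLookup layer can learn the vocabulary of a string column with adapt(), and its integer indices can then be one-hot encoded.

```python
def get_string_encoding_layer(name, dataset):
  # Learn the vocabulary of the string feature from the data.
  index = preprocessing.StringLookup()
  feature_ds = dataset.map(lambda x, y: x[name])
  index.adapt(feature_ds)

  # Map strings to integer indices, then one-hot encode the indices.
  depth = len(index.get_vocabulary())
  return lambda feature: tf.one_hot(index(feature), depth=depth)

type_col = train_features['Type']
type_layer = get_string_encoding_layer('Type', train_ds)
type_layer(type_col)
```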