Creating the device will signal to the computer that a monitor is connected. Starting the frontend will attempt to detect the video mode, blocking until a lock can be achieved. Once the frontend is started the video mode will be available.
hdmiin_frontend.start()
hdmiin_frontend.mode
The HDMI output frontend can be accessed in a similar way.
hdmiout_frontend = base.video.hdmi_out.frontend
and the mode must be set prior to starting the output. In this case we are just going to use the same mode as the input.
hdmiout_frontend.mode = hdmiin_frontend.mode
hdmiout_frontend.start()
Note that nothing will be displayed on the screen as no video data is currently being sent.

Colorspace conversion

The colorspace converter operates on each pixel independently, using a 3x4 matrix to transform the pixels. The converter is programmed with a list of twelve coefficients in the following order:

|     |in1 |in2 |in3 | 1  |
|-----|----|----|----|----|
|out1 |c1  |c2  |c3  |c10 |
|out2 |c4  |c5  |c6  |c11 |
|out3 |c7  |c8  |c9  |c12 |

Each coefficient should be a floating point number between -2 and +2. The pixels to and from the HDMI frontends are in BGR order, so a list of coefficients to convert from the input format to RGB would be:

[0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0]

reversing the order of the pixels and not adding any bias. The driver for the colorspace converters has a single property that contains the list of coefficients.
colorspace_in = base.video.hdmi_in.color_convert
colorspace_out = base.video.hdmi_out.color_convert

bgr2rgb = [0, 0, 1,
           0, 1, 0,
           1, 0, 0,
           0, 0, 0]

colorspace_in.colorspace = bgr2rgb
colorspace_out.colorspace = bgr2rgb
colorspace_in.colorspace
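As a second worked example of the coefficient layout (not from the original notebook; standard BT.601 luma weights assumed), converting the BGR input to grayscale puts the same three weights on every output channel:

# Luma weights in BGR order (0.114*B + 0.587*G + 0.299*R), no bias;
# the same row is repeated so all three output channels carry the gray value.
bgr2gray = [0.114, 0.587, 0.299,
            0.114, 0.587, 0.299,
            0.114, 0.587, 0.299,
            0, 0, 0]
colorspace_in.colorspace = bgr2gray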
Pixel format conversion

The pixel format converters convert between the 24-bit signal used by the HDMI frontends and the colorspace converters and an 8, 24, or 32-bit signal. 24-bit mode passes the input straight through, 32-bit mode pads each pixel with an extra zero byte, and 8-bit mode selects the first channel in the pixel. This is exposed as a single property to set or get the number of bits.
pixel_in = base.video.hdmi_in.pixel_pack
pixel_out = base.video.hdmi_out.pixel_unpack
pixel_in.bits_per_pixel = 8
pixel_out.bits_per_pixel = 8
pixel_in.bits_per_pixel
Video DMA

The final element in the pipeline is the video DMA, which transfers video frames to and from memory. The VDMA consists of two channels, one for each direction, which operate completely independently. To use a channel, its mode must be set prior to calling start. After the DMA is started, readframe and writeframe transfer frames; frames are only transferred once, with the call blocking if necessary. asyncio coroutines are available as readframe_async and writeframe_async, which yield instead of blocking (a short sketch appears after the tie example below). A frame of the size of the output can be retrieved from the VDMA by calling writechannel.newframe(). This frame is not guaranteed to be initialised to blank, so it should be completely written before being handed back.
inputmode = hdmiin_frontend.mode
framemode = VideoMode(inputmode.width, inputmode.height, 8)

vdma = base.video.axi_vdma
vdma.readchannel.mode = framemode
vdma.readchannel.start()
vdma.writechannel.mode = framemode
vdma.writechannel.start()

frame = vdma.readchannel.readframe()
vdma.writechannel.writeframe(frame)
In this case, because we are only using 8 bits per pixel, only the red channel is read and displayed. The two channels can be tied together, which ensures that the input is always mirrored to the output.
vdma.readchannel.tie(vdma.writechannel)
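The coroutine interface mentioned above can be exercised in the same way; a minimal sketch (assuming an asyncio event loop is available, shown independently of the tie above):

import asyncio

async def mirror_one_frame():
    # readframe_async/writeframe_async yield instead of blocking
    frame = await vdma.readchannel.readframe_async()
    await vdma.writechannel.writeframe_async(frame)

asyncio.get_event_loop().run_until_complete(mirror_one_frame())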
Frame Ownership

The VDMA driver has a strict model of frame ownership. Any frames returned by readframe or newframe are owned by the user and should be destroyed by the user when no longer needed by calling frame.freebuffer(). Frames handed back to the VDMA with writeframe are no longer owned by the user and should not be touched - the data may disappear at any time. (A short sketch of these rules in code follows the cleanup cell below.)

Cleaning up

It is vital to stop the VDMA before reprogramming the bitstream, otherwise the memory system of the chip can be placed into an undefined state. If the monitor does not power on when starting the VDMA, this is the likely cause.
vdma.readchannel.stop()
vdma.writechannel.stop()
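For reference, a sketch of the ownership rules described above, to be run while both channels are still started (the processing step is hypothetical):

frame = vdma.readchannel.readframe()      # this frame is now owned by us
# ... inspect or modify the pixel data here ...
frame.freebuffer()                        # done with it: hand the buffer back

outframe = vdma.writechannel.newframe()   # owned by us, contents undefined
outframe[:] = 0                           # must be completely written
vdma.writechannel.writeframe(outframe)    # ownership passes back to the VDMA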
Exercise 1: Reading various forms of JSON Data

In the /data/ folder, you will find a series of .json files called dataN.json, numbered 1-4. Each file contains the following data:

|  |birthday | first_name |last_name |
|--|---------|------------|----------|
|0 |5/3/67   |Robert      |Hernandez |
|1 |8/4/84   |Steve       |Smith     |
|2 |9/13/91  |Anne        |Raps      |
|3 |4/15/75  |Alice       |Muller    |
#Your code here...
import pandas as pd

file1 = pd.read_json('../../data/data1.json')
file2 = pd.read_json('../../data/data2.json')  # stored column-wise, so transposed below
file3 = pd.read_json('../../data/data3.json')  # orient='columns' (the default)
file4 = pd.read_json('../../data/data4.json', orient='split')

combined = pd.concat([file1, file2.T, file3, file4], ignore_index=True)
combined
Exercise 2:

In the data file, there is a webserver file called hackers-access.httpd. For this exercise, you will use this file to answer the following questions:

1. Which browsers are the top 10 most used browsers in this data?
2. Which are the top 10 most used operating systems?

In order to accomplish this task, do the following:

1. Write a function which takes a User Agent string as an argument and returns the relevant data. HINT: You might want to use python's user_agents module, the documentation for which is available here: (https://pypi.python.org/pypi/user-agents)
2. Next, apply this function to the column which contains the user agent string.
3. Store this series as a new column in the dataframe.
4. Count the occurrences of each value in the new columns.
import pandas as pd
import apache_log_parser
from user_agents import parse

def parse_ua(line):
    """Return the OS family from a raw User Agent string."""
    parsed_data = parse(line)
    return str(parsed_data).split('/')[1]

def parse_ua_2(line):
    """Return the browser family from a raw User Agent string."""
    parsed_data = parse(line)
    return str(parsed_data).split('/')[2]

# Read in the log file
line_parser = apache_log_parser.make_parser("%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"")

parsed_server_data = []
with open("../../data/hackers-access.httpd", "r") as server_log:
    for line in server_log:
        parsed_server_data.append(line_parser(line))

server_df = pd.DataFrame(parsed_server_data)

# Apply the functions to the dataframe
server_df['OS'] = server_df['request_header_user_agent'].apply(parse_ua)
server_df['Browser'] = server_df['request_header_user_agent'].apply(parse_ua_2)

# Get the top 10 values
print(server_df['OS'].value_counts().head(10))
print(server_df['Browser'].value_counts().head(10))
Exercise 3:

Using the dailybots.csv file, read the file into a DataFrame and perform the following operations:

1. Filter the DataFrame to include bots from the Government/Politics Industry.
2. Calculate the ratio of hosts to orgs, add this as a column to the DataFrame, and output the result.
3. Calculate the total number of hosts infected by each BotFam in the Government/Politics Industry. You should use the groupby() function, which is documented here: (http://pandas.pydata.org/pandas-docs/stable/groupby.html)
#Your code here...
bots = pd.read_csv('../../data/dailybots.csv')

gov_bots = bots[bots['industry'] == 'Government/Politics'].copy()
gov_bots['host_org_ratio'] = gov_bots['hosts'] / gov_bots['orgs']  # ratio of hosts to orgs
gov_bots

gov_bots.groupby('botfam')['hosts'].sum()  # total hosts infected per botfam
$ \newcommand{\cur}{i} \newcommand{\prev}{j} \newcommand{\prevcur}{{\cur\prev}} \newcommand{\next}{k} \newcommand{\curnext}{{\next\cur}} \newcommand{\ex}{\eta} \newcommand{\pot}{\rho} \newcommand{\feature}{x} \newcommand{\weight}{w} \newcommand{\wcur}{{\weight_{\cur\prev}}} \newcommand{\activthres}{\theta} \newcommand{\activfunc}{f} \newcommand{\errfunc}{E} \newcommand{\learnrate}{\epsilon} \newcommand{\learnit}{n} \newcommand{\sigout}{y} \newcommand{\sigoutdes}{d} \newcommand{\weights}{\boldsymbol{W}} \newcommand{\errsig}{\Delta} $
# Notations:
# - $\cur$: current layer
# - $\prev$: layer immediately upstream of the current layer (i.e. towards the network's input layer)
# - $\next$: layer immediately downstream of the current layer (i.e. towards the network's output layer)
# - $\ex$: current example (*sample* or *feature*), i.e. the vector of the network's current inputs
# - $\pot_\cur$: *activation potential* of neuron $i$ for the current example
# - $\wcur$: weight of the connection between neuron $j$ and neuron $i$
# - $\activthres_\cur$: *activation threshold* of neuron $i$
# - $\activfunc_\cur$: *activation function* of neuron $i$
# - $\errfunc$: *objective function* or *error function*
# - $\learnrate$: *learning step* or *learning rate*
# - $\learnit$: iteration (cycle or epoch) number of the learning process
# - $\sigout_\cur$: output signal of neuron $i$ for the current example
# - $\sigoutdes_\cur$: desired output (*label*) of neuron $i$ for the current example
# - $\weights$: weight matrix of the network (in reality there is one matrix, of potentially different size, per layer)
# - $\errsig_i$: *error signal* of neuron $i$ for the current example
Introduction

What is a neural network? A big parametric function. Given enough parameters, such a function can approximate any continuous function.

Schematic representation of a parametric function with 3 parameters, with a 1-dimensional input and a 1-dimensional output:

$$\mathbb{R} \rightarrow \mathbb{R}$$
$$x \mapsto g_{\boldsymbol{\omega}}(x)$$

TODO: image/diagram - intuition: inputs -> function with parameters = mixing desk -> output

What is it for?

TODO: explain regression and classification

TODO: applications with references...

Examples of concrete applications:
- Handwritten text recognition
- Recognition of shapes, objects, faces, etc. in images
- Speech recognition
- Time series prediction (stock prices, etc.)
- etc.

Definition of the "formal" neuron
STR_CUR = r"i" # Couche courante STR_PREV = r"j" # Couche immédiatement en amont de la courche courrante (i.e. vers la couche d'entrée du réseau) STR_NEXT = r"k" # Couche immédiatement en aval de la courche courrante (i.e. vers la couche de sortie du réseau) STR_EX = r"\eta" # Exemple (*sample* ou *feature*) courant (i.e. le vecteur des entrées courantes du réseau) STR_POT = r"x" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\ex$ STR_POT_CUR = r"x_i" # *Potentiel d'activation* du neurone $i$ pour l'exemple $\ex$ STR_WEIGHT = r"w" STR_WEIGHT_CUR = r"w_{ij}" # Poids de la connexion entre le neurone $j$ et le neurone $i$ STR_ACTIVTHRES = r"\theta" # *Seuil d'activation* du neurone $i$ STR_ACTIVFUNC = r"f" # *Fonction d'activation* du neurone $i$ STR_ERRFUNC = r"E" # *Fonction objectif* ou *fonction d'erreur* STR_LEARNRATE = r"\epsilon" # *Pas d'apprentissage* ou *Taux d'apprentissage* STR_LEARNIT = r"n" # Numéro d'itération (ou cycle ou époque) du processus d'apprentissage STR_SIGIN = r"x" # Signal de sortie du neurone $i$ pour l'exemple $\ex$ STR_SIGOUT = r"y" # Signal de sortie du neurone $i$ pour l'exemple $\ex$ STR_SIGOUT_CUR = r"y_i" STR_SIGOUT_PREV = r"y_j" STR_SIGOUT_DES = r"d" # Sortie désirée (*étiquette*) du neurone $i$ pour l'exemple $\ex$ STR_SIGOUT_DES_CUR = r"d_i" STR_WEIGHTS = r"W" # Matrice des poids du réseau (en réalité il y a une matrice de taille potentiellement différente par couche) STR_ERRSIG = r"\Delta" # *Signal d'erreur* du neurone $i$ pour l'exemple $\ex$ def tex(tex_str): return r"$" + tex_str + r"$" fig, ax = nnfig.init_figure(size_x=8, size_y=4) nnfig.draw_synapse(ax, (0, -6), (10, 0)) nnfig.draw_synapse(ax, (0, -2), (10, 0)) nnfig.draw_synapse(ax, (0, 2), (10, 0)) nnfig.draw_synapse(ax, (0, 6), (10, 0), label=tex(STR_WEIGHT_CUR), label_position=0.5, fontsize=14) nnfig.draw_synapse(ax, (10, 0), (12, 0)) nnfig.draw_neuron(ax, (0, -6), 0.5, empty=True) nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True) nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True) nnfig.draw_neuron(ax, (0, 6), 0.5, empty=True) plt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14) plt.text(x=10, y=1.5, s=tex(STR_CUR), fontsize=14) plt.text(x=0, y=0, s=r"$\vdots$", fontsize=14) plt.text(x=-2.5, y=0, s=tex(STR_SIGOUT_PREV), fontsize=14) plt.text(x=13, y=0, s=tex(STR_SIGOUT_CUR), fontsize=14) plt.text(x=9.2, y=-1.8, s=tex(STR_POT_CUR), fontsize=14) nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid") plt.show()
$$
\sigout = \activfunc \left( \sum_i \weight_i \feature_i \right)
$$

$$
\pot_\cur = \sum_\prev \wcur \sigout_{\prev}
$$

$$
\sigout_{\cur} = \activfunc(\pot_\cur)
$$

$$
\weights =
\begin{pmatrix}
\weight_{11} & \cdots & \weight_{1m} \\
\vdots       & \ddots & \vdots       \\
\weight_{n1} & \cdots & \weight_{nm}
\end{pmatrix}
$$

Where:
- $\cur$: current layer
- $\prev$: layer immediately upstream of the current layer (i.e. towards the network's input layer)
- $\next$: layer immediately downstream of the current layer (i.e. towards the network's output layer)
- $\ex$: current example (sample or feature), i.e. the vector of the network's current inputs
- $\pot_\cur$: activation potential of neuron $i$ for the current example
- $\wcur$: weight of the connection between neuron $j$ and neuron $i$
- $\activthres_\cur$: activation threshold of neuron $i$
- $\activfunc_\cur$: activation function of neuron $i$
- $\errfunc$: objective function or error function
- $\learnrate$: learning step or learning rate
- $\learnit$: iteration (cycle or epoch) number of the learning process
- $\sigout_\cur$: output signal of neuron $i$ for the current example
- $\sigoutdes_\cur$: desired output (label) of neuron $i$ for the current example
- $\weights$: weight matrix of the network (in reality there is one matrix, of potentially different size, per layer)
- $\errsig_i$: error signal of neuron $i$ for the current example

Activation functions

The sigmoid function

The sigmoid (S-shaped) function is defined by:

$$f(x) = \frac{1}{1 + e^{-x}}$$

for every real $x$. It can be generalised to any function of the form:

$$f(x) = \frac{1}{1 + e^{-\lambda x}}$$
def sigmoid(x, _lambda=1.):
    y = 1. / (1. + np.exp(-_lambda * x))
    return y

%matplotlib inline

x = np.linspace(-5, 5, 300)

y1 = sigmoid(x, 1.)
y2 = sigmoid(x, 5.)
y3 = sigmoid(x, 0.5)

plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")

plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')

plt.legend()
plt.title("Sigmoid function")
plt.axis([-5, 5, -0.5, 2]);
The logistic function

Functions of the form

$$
f(t) = K \frac{1}{1 + a e^{-\lambda t}}
$$

where $K$ and $\lambda$ are positive reals and $a$ is any real. The sigmoid functions above are the special case $K = 1$, $a = 1$.
def logistique(x, a=1., k=1., _lambda=1.):
    y = k / (1. + a * np.exp(-_lambda * x))
    return y

%matplotlib inline

x = np.linspace(-5, 5, 300)

y1 = logistique(x, a=1.)
y2 = logistique(x, a=2.)
y3 = logistique(x, a=0.5)

plt.plot(x, y1, label=r"$a=1$")
plt.plot(x, y2, label=r"$a=2$")
plt.plot(x, y3, label=r"$a=0.5$")

plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')

plt.legend()
plt.title("Logistic function")
plt.axis([-5, 5, -0.5, 2]);
The bias term

TODO
fig, ax = nnfig.init_figure(size_x=8, size_y=6)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
#nnfig.draw_synapse(ax, (0, 2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_0"), label_position=0.3)
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_1"), label_position=0.3)
nnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + "_2"), label_position=0.3)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_3"), label_position=0.3, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))

# Neuron ######################################
# Layer 1 (input)
#nnfig.draw_neuron(ax, (0, 2*VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
#plt.text(x=-1.7, y=2*VSPACE, s=tex("1"), fontsize=12)
plt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + "_1"), fontsize=12)
plt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + "_2"), fontsize=12)
plt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + "_3"), fontsize=12)

# Layer 2
#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)
#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=HSPACE+2.5, y=-0.3, s=tex(STR_SIGOUT), fontsize=12)

plt.show()

fig, ax = nnfig.init_figure(size_x=8, size_y=6)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
nnfig.draw_synapse(ax, (0, 2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_0"), label_position=0.3)
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_1"), label_position=0.3)
nnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + "_2"), label_position=0.3)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_3"), label_position=0.3, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))

# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, 2*VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
plt.text(x=-1.7, y=2*VSPACE, s=tex("1"), fontsize=12)
plt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + "_1"), fontsize=12)
plt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + "_2"), fontsize=12)
plt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + "_3"), fontsize=12)

# Layer 2
#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)
#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=HSPACE+2.5, y=-0.3, s=tex(STR_SIGOUT), fontsize=12)

plt.show()
Example
fig, ax = nnfig.init_figure(size_x=8, size_y=6)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
nnfig.draw_synapse(ax, (0, 2*VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_0"), label_position=0.3)
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_1"), label_position=0.3)
nnfig.draw_synapse(ax, (0, 0), (HSPACE, 0), label=tex(STR_WEIGHT + "_2"), label_position=0.3)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, 0), label=tex(STR_WEIGHT + "_3"), label_position=0.3, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (HSPACE, 0), (HSPACE + 2, 0))

# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, 2*VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 0), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
plt.text(x=-1.7, y=2*VSPACE, s=tex("1"), fontsize=12)
plt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + "_1"), fontsize=12)
plt.text(x=-1.7, y=-0.2, s=tex(STR_SIGIN + "_2"), fontsize=12)
plt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + "_3"), fontsize=12)

# Layer 2
#plt.text(x=HSPACE-1.25, y=1.5, s=tex(STR_POT), fontsize=12)
#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=HSPACE+2.5, y=-0.3, s=tex(STR_SIGOUT), fontsize=12)

plt.show()
For an input vector = ... and a weight vector arbitrarily fixed at ... and a neuron defined with the sigmoid activation function, we can compute the neuron's output value.

We have:

$$
\sum_i \weight_i \feature_i = \dots
$$

hence

$$
y = \frac{1}{1 + e^{-\dots}}
$$
# Note: the sliders must match the function's arguments (wb1 is the bias weight)
@interact(wb1=(-10., 10., 0.5), w1=(-10., 10., 0.5))
def nn1(wb1=0., w1=10.):
    x = np.linspace(-10., 10., 100)
    xb = np.ones(x.shape)
    s1 = wb1 * xb + w1 * x
    y = sigmoid(s1)
    plt.plot(x, y)
Definition of a neural network

Arranging neurons in layers, and hidden layers

TODO

Example: neural network with 1 "hidden" layer
fig, ax = nnfig.init_figure(size_x=8, size_y=4)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
# Layer 1-2
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_1"), label_position=0.4)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_3"), label_position=0.25, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_2"), label_position=0.25)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_4"), label_position=0.4, label_offset_y=-0.8)

# Layer 2-3
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, 0), label=tex(STR_WEIGHT + "_5"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, 0), label=tex(STR_WEIGHT + "_6"), label_position=0.4, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (2*HSPACE, 0), (2*HSPACE + 2, 0))

# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")

# Layer 3
nnfig.draw_neuron(ax, (2*HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
plt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + "_1"), fontsize=12)
plt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + "_2"), fontsize=12)

# Layer 2
#plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_1"), fontsize=12)
plt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_1"), fontsize=12)
#plt.text(x=HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_2"), fontsize=12)
plt.text(x=HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_2"), fontsize=12)

# Layer 3
#plt.text(x=2*HSPACE-1.25, y=1.5, s=tex(STR_POT + "_o"), fontsize=12)
#plt.text(x=2*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=2*HSPACE+2.5, y=-0.3, s=tex(STR_SIGOUT), fontsize=12)

plt.show()
TODO: the biases are missing...

$$
\sigout = \activfunc \left( \weight_5 ~ \underbrace{\activfunc \left(\weight_1 \feature_1 + \weight_3 \feature_2 \right)}_{\sigout_1} + \weight_6 ~ \underbrace{\activfunc \left(\weight_2 \feature_1 + \weight_4 \feature_2 \right)}_{\sigout_2} \right)
$$
# The output neuron combines the two hidden outputs y1 and y2
# through the output weights w5 and w6, as in the formula above.
@interact(wb1=(-10., 10., 0.5), w1=(-10., 10., 0.5),
          wb2=(-10., 10., 0.5), w2=(-10., 10., 0.5),
          w5=(-10., 10., 0.5), w6=(-10., 10., 0.5))
def nn2(wb1=0.1, w1=0.1, wb2=0.1, w2=0.1, w5=1., w6=1.):
    x = np.linspace(-10., 10., 100)
    xb = np.ones(x.shape)
    y1 = sigmoid(wb1 * xb + w1 * x)   # first hidden neuron
    y2 = sigmoid(wb2 * xb + w2 * x)   # second hidden neuron
    y = sigmoid(w5 * y1 + w6 * y2)    # output neuron
    plt.plot(x, y)
Example: neural network with 2 "hidden" layers
fig, ax = nnfig.init_figure(size_x=8, size_y=4)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
# Layer 1-2
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_1"), label_position=0.4)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_3"), label_position=0.25, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_2"), label_position=0.25)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_4"), label_position=0.4, label_offset_y=-0.8)

# Layer 2-3
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + "_5"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + "_7"), label_position=0.25, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_6"), label_position=0.25)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_8"), label_position=0.4, label_offset_y=-0.8)

# Layer 3-4
nnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_9"), label_position=0.4)
nnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_{10}"), label_position=0.4, label_offset_y=-0.8)
nnfig.draw_synapse(ax, (3*HSPACE, 0), (3*HSPACE + 2, 0))

# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True)

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")

# Layer 3
nnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")

# Layer 4
nnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
#plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)
plt.text(x=-1.7, y=VSPACE, s=tex(STR_SIGIN + "_1"), fontsize=12)
plt.text(x=-1.7, y=-VSPACE-0.2, s=tex(STR_SIGIN + "_2"), fontsize=12)

# Layer 2
#plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_1"), fontsize=12)
plt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_1"), fontsize=12)
#plt.text(x=HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_2"), fontsize=12)
plt.text(x=HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_2"), fontsize=12)

# Layer 3
#plt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_3"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_3"), fontsize=12)
#plt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_4"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_4"), fontsize=12)

# Layer 4
#plt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + "_o"), fontsize=12)
#plt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+2.5, y=-0.3, s=tex(STR_SIGOUT), fontsize=12)

plt.show()
TODO: the bias is missing...

$
\newcommand{\yone}{\underbrace{\activfunc \left(\weight_1 \feature_1 + \weight_3 \feature_2 \right)}_{\sigout_1}}
\newcommand{\ytwo}{\underbrace{\activfunc \left(\weight_2 \feature_1 + \weight_4 \feature_2 \right)}_{\sigout_2}}
\newcommand{\ythree}{\underbrace{\activfunc \left(\weight_5 \yone + \weight_7 \ytwo \right)}_{\sigout_3}}
\newcommand{\yfour}{\underbrace{\activfunc \left(\weight_6 \yone + \weight_8 \ytwo \right)}_{\sigout_4}}
$

$$
\sigout = \activfunc \left( \weight_9 ~ \ythree + \weight_{10} ~ \yfour \right)
$$

Expressive power of a neural network

TODO

Learning

Objective function (or error function)

Objective function: $\errfunc \left( \weights \right)$

Typically, the objective (error) function is the sum of the squared errors of each output neuron:

$$
\errfunc = \frac12 \sum_{\cur \in \Omega} \left[ \sigout_\cur - \sigoutdes_\cur \right]^2
$$

$\Omega$: the set of output neurons

The $\frac12$ is just there to simplify the derivative computations.
fig, ax = nnfig.init_figure(size_x=8, size_y=4)

nnfig.draw_synapse(ax, (0, -6), (10, 0))
nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0))
nnfig.draw_synapse(ax, (0, 6), (10, 0))
nnfig.draw_synapse(ax, (0, -6), (10, -4))
nnfig.draw_synapse(ax, (0, -2), (10, -4))
nnfig.draw_synapse(ax, (0, 2), (10, -4))
nnfig.draw_synapse(ax, (0, 6), (10, -4))
nnfig.draw_synapse(ax, (0, -6), (10, 4))
nnfig.draw_synapse(ax, (0, -2), (10, 4))
nnfig.draw_synapse(ax, (0, 2), (10, 4))
nnfig.draw_synapse(ax, (0, 6), (10, 4))
nnfig.draw_synapse(ax, (10, -4), (12, -4))
nnfig.draw_synapse(ax, (10, 0), (12, 0))
nnfig.draw_synapse(ax, (10, 4), (12, 4))

nnfig.draw_neuron(ax, (0, -6), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 6), 0.5, empty=True)
nnfig.draw_neuron(ax, (10, -4), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (10, 4), 1, ag_func="sum", tr_func="sigmoid")

plt.text(x=0, y=7.5, s=tex(STR_PREV), fontsize=14)
plt.text(x=10, y=7.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=0, y=0, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=-6.1, s=r"$\vdots$", fontsize=14)
plt.text(x=9.7, y=5.8, s=r"$\vdots$", fontsize=14)
plt.text(x=12.5, y=4, s=tex(STR_SIGOUT + "_1"), fontsize=14)
plt.text(x=12.5, y=0, s=tex(STR_SIGOUT + "_2"), fontsize=14)
plt.text(x=12.5, y=-4, s=tex(STR_SIGOUT + "_3"), fontsize=14)
plt.text(x=16, y=4, s=tex(STR_ERRFUNC + "_1 = " + STR_SIGOUT + "_1 - " + STR_SIGOUT_DES + "_1"), fontsize=14)
plt.text(x=16, y=0, s=tex(STR_ERRFUNC + "_2 = " + STR_SIGOUT + "_2 - " + STR_SIGOUT_DES + "_2"), fontsize=14)
plt.text(x=16, y=-4, s=tex(STR_ERRFUNC + "_3 = " + STR_SIGOUT + "_3 - " + STR_SIGOUT_DES + "_3"), fontsize=14)
plt.text(x=16, y=-8, s=tex(STR_ERRFUNC + " = 1/2 ( " + STR_ERRFUNC + "^2_1 + " + STR_ERRFUNC + "^2_2 + " + STR_ERRFUNC + "^2_3 + \dots )"), fontsize=14)

plt.show()
Updating the weights

$$
\weights_{\learnit + 1} = \weights_{\learnit} - \learnrate \nabla_{\weights} \errfunc \left( \weights_{\learnit} \right)
$$

$- \learnrate \nabla_{\weights} \errfunc \left( \weights_{\learnit} \right)$: move in the direction opposite to the gradient (the steepest slope), with

$\nabla_{\weights} \errfunc \left( \weights_{\learnit} \right)$: gradient of the objective function at the point $\weights$

$\learnrate > 0$: learning step (or rate)

$$
\begin{align}
\delta_{\prevcur} & = \wcur_{\learnit + 1} - \wcur_{\learnit} \\
                  & = - \learnrate \frac{\partial \errfunc}{\partial \wcur}
\end{align}
$$

$$
\Leftrightarrow
\wcur_{\learnit + 1} = \wcur_{\learnit} - \learnrate \frac{\partial \errfunc}{\partial \wcur}
$$

Each presentation of the full set of examples = one learning cycle (or epoch).

Stopping criterion: stop when the value of the objective function stabilises (or when the problem is solved to the desired precision).

Derivatives of the main activation functions

The sigmoid function

Derivative:

$$
f'(x) = \frac{\lambda e^{-\lambda x}}{(1+e^{-\lambda x})^{2}}
$$

which can also be written

$$
\frac{\mathrm{d} y}{\mathrm{d} x} = \lambda y (1-y)
$$

where $y$ ranges from 0 to 1.
def d_sigmoid(x, _lambda=1.):
    e = np.exp(-_lambda * x)
    y = _lambda * e / np.power(1 + e, 2)
    return y

%matplotlib inline

x = np.linspace(-5, 5, 300)

y1 = d_sigmoid(x, 1.)
y2 = d_sigmoid(x, 5.)
y3 = d_sigmoid(x, 0.5)

plt.plot(x, y1, label=r"$\lambda=1$")
plt.plot(x, y2, label=r"$\lambda=5$")
plt.plot(x, y3, label=r"$\lambda=0.5$")

plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')

plt.legend()
plt.title("Derivative of the sigmoid function")
plt.axis([-5, 5, -0.5, 2]);
Hyperbolic tangent

Derivative:

$$
\tanh' = \frac{1}{\cosh^{2}} = 1-\tanh^{2}
$$
def d_tanh(x):
    y = 1. - np.power(np.tanh(x), 2)
    return y

x = np.linspace(-5, 5, 300)
y = d_tanh(x)

plt.plot(x, y)
plt.hlines(y=0, xmin=-5, xmax=5, color='gray', linestyles='dotted')
plt.vlines(x=0, ymin=-2, ymax=2, color='gray', linestyles='dotted')
plt.title("Derivative of the hyperbolic tangent")
plt.axis([-5, 5, -2, 2]);

# TODO
# - "usually the local minimum is good enough" (proof???)
# - "otherwise, the simplest approach is to restart the learning several times
#   with different initial weights and keep the best matrix $\weights$
#   (the one that minimises $\errfunc$)"

fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(4, 4))

x = np.arange(10, 30, 0.1)
y = (x - 20)**2 + 2

ax.set_xlabel(r"Weights $" + STR_WEIGHTS + "$", fontsize=14)
ax.set_ylabel(r"Objective function $" + STR_ERRFUNC + "$", fontsize=14)

# See http://matplotlib.org/api/axes_api.html#matplotlib.axes.Axes.tick_params
ax.tick_params(axis='both',        # changes apply to the x and y axis
               which='both',       # both major and minor ticks are affected
               bottom='on',        # ticks along the bottom edge are on
               top='off',          # ticks along the top edge are off
               left='on',          # ticks along the left edge are on
               right='off',        # ticks along the right edge are off
               labelbottom='off',  # labels along the bottom edge are off
               labelleft='off')    # labels along the left edge are off

ax.set_xlim(left=10, right=25)
ax.set_ylim(bottom=0, top=5)

ax.plot(x, y);
Incremental (or partial) learning: the weights $\weights$ are adjusted after the presentation of each single example ("this is not a true gradient descent"). This is better at avoiding local minima, especially if the examples are shuffled at the start of each iteration (see the sketch after the next cell).
# *Deferred learning* (*batch learning*):
# TODO
# Is the objective function $\errfunc$ a multivariate function,
# or an aggregation of the errors of each example?

# **TODO: delta rule / generalised delta rule**
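A minimal sketch (not part of the notebook) contrasting the two update schemes on a single linear neuron $y = wx$ with squared error; the data and learning rate are arbitrary:

import numpy as np

x = np.array([0.5, -1.0, 2.0])   # toy inputs
d = np.array([1.0, -2.0, 4.0])   # desired outputs (here d = 2x)
epsilon = 0.1                    # learning rate

# Incremental (per-example) learning: update after every single example.
w = 0.
for xi, di in zip(x, d):
    grad = (w * xi - di) * xi    # dE/dw for this one example
    w -= epsilon * grad
print("incremental:", w)

# Deferred (batch) learning: sum the per-example gradients, then update once.
w = 0.
grad = np.sum((w * x - d) * x)   # individual contributions added up
w -= epsilon * grad
print("batch:", w)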
Backpropagation of the gradient

Backpropagation: a method for efficiently computing the gradient of the objective function $\errfunc$.

Intuition: backpropagation is only one method among others for solving the optimisation problem over the weights $\weight$. The problem could just as well be solved with evolutionary algorithms, for example. In fact, the interest of backpropagation (and what explains its fame) is that it formulates the weight-optimisation problem with a particularly efficient analytical form that cleverly eliminates a large number of redundant computations (somewhat in the spirit of dynamic programming): when the weights are optimised by gradient descent, certain terms (the error signals $\errsig$) appear many times in the full analytical expression of the gradient. Backpropagation ensures that these terms are computed only once. Note that the problem could also be solved with a gradient descent in which the gradient $\frac{\partial \errfunc}{\partial \wcur_{\learnit}}$ is computed by numerical approximation (e.g. finite differences), but this would be much slower and much less efficient...

Principle: the weights are modified using the error signals $\errsig$.

$$
\wcur_{\learnit + 1} = \wcur_{\learnit} \underbrace{- \learnrate \frac{\partial \errfunc}{\partial \wcur_{\learnit}}}_{\delta_\prevcur}
$$

$$
\begin{align}
\delta_\prevcur & = - \learnrate \frac{\partial \errfunc}{\partial \wcur(\learnit)} \\
                & = - \learnrate \errsig_\cur \sigout_\prev
\end{align}
$$

In the case of deferred (batch) learning, the error is computed for each example, and their individual contributions to the weight modifications are added up.

Supervised learning works better with linear output neurons (activation function $\activfunc$ = identity) "because the error signals propagate better". Binary input data should be chosen in $\{-1, 1\}$ rather than $\{0, 1\}$, since a zero signal contributes nothing to learning.
# TODO
# Vocabulary:
# - *marginal error*: **TODO**
An interesting note by Jürgen Schmidhuber: http://people.idsia.ch/~juergen/who-invented-backpropagation.html

Error signals $\errsig_\cur$ for the output neurons $(\cur \in \Omega)$:

$$
\errsig_\cur = \activfunc'(\pot_\cur)[\sigout_\cur - \sigoutdes_\cur]
$$

Error signals $\errsig_\cur$ for the hidden neurons $(\cur \not\in \Omega)$:

$$
\errsig_\cur = \activfunc'(\pot_\cur) \sum_\next \weight_\curnext \errsig_\next
$$
fig, ax = nnfig.init_figure(size_x=8, size_y=4)

nnfig.draw_synapse(ax, (0, -2), (10, 0))
nnfig.draw_synapse(ax, (0, 2), (10, 0), label=tex(STR_WEIGHT + "_{" + STR_NEXT + STR_CUR + "}"), label_position=0.5, fontsize=14)
nnfig.draw_synapse(ax, (10, 0), (12, 0))

nnfig.draw_neuron(ax, (0, -2), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, 2), 0.5, empty=True)

plt.text(x=0, y=3.5, s=tex(STR_CUR), fontsize=14)
plt.text(x=10, y=3.5, s=tex(STR_NEXT), fontsize=14)
plt.text(x=0, y=-0.2, s=r"$\vdots$", fontsize=14)

nnfig.draw_neuron(ax, (10, 0), 1, ag_func="sum", tr_func="sigmoid")

plt.show()
More detail: computing $\errsig_\cur$

In the following example we only consider the weights $\weight_1$, $\weight_2$, $\weight_3$, $\weight_4$ and $\weight_5$, to simplify the demonstration.
fig, ax = nnfig.init_figure(size_x=8, size_y=4)

HSPACE = 6
VSPACE = 4

# Synapse #####################################
# Layer 1-2
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, VSPACE), label=tex(STR_WEIGHT + "_1"), label_position=0.4)
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, VSPACE), (HSPACE, -VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (0, -VSPACE), (HSPACE, -VSPACE), color="lightgray")

# Layer 2-3
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, VSPACE), label=tex(STR_WEIGHT + "_2"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, VSPACE), color="lightgray")
nnfig.draw_synapse(ax, (HSPACE, VSPACE), (2*HSPACE, -VSPACE), label=tex(STR_WEIGHT + "_3"), label_position=0.4)
nnfig.draw_synapse(ax, (HSPACE, -VSPACE), (2*HSPACE, -VSPACE), color="lightgray")

# Layer 3-4
nnfig.draw_synapse(ax, (2*HSPACE, VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_4"), label_position=0.4)
nnfig.draw_synapse(ax, (2*HSPACE, -VSPACE), (3*HSPACE, 0), label=tex(STR_WEIGHT + "_5"), label_position=0.4, label_offset_y=-0.8)

# Neuron ######################################
# Layer 1 (input)
nnfig.draw_neuron(ax, (0, VSPACE), 0.5, empty=True)
nnfig.draw_neuron(ax, (0, -VSPACE), 0.5, empty=True, line_color="lightgray")

# Layer 2
nnfig.draw_neuron(ax, (HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid", line_color="lightgray")

# Layer 3
nnfig.draw_neuron(ax, (2*HSPACE, VSPACE), 1, ag_func="sum", tr_func="sigmoid")
nnfig.draw_neuron(ax, (2*HSPACE, -VSPACE), 1, ag_func="sum", tr_func="sigmoid")

# Layer 4
nnfig.draw_neuron(ax, (3*HSPACE, 0), 1, ag_func="sum", tr_func="sigmoid")

# Text ########################################
# Layer 1 (input)
plt.text(x=0.5, y=VSPACE+1, s=tex(STR_SIGOUT + "_i"), fontsize=12)

# Layer 2
plt.text(x=HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_1"), fontsize=12)
plt.text(x=HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_1"), fontsize=12)

# Layer 3
plt.text(x=2*HSPACE-1.25, y=VSPACE+1.5, s=tex(STR_POT + "_2"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=VSPACE+1.5, s=tex(STR_SIGOUT + "_2"), fontsize=12)
plt.text(x=2*HSPACE-1.25, y=-VSPACE-1.8, s=tex(STR_POT + "_3"), fontsize=12)
plt.text(x=2*HSPACE+0.4, y=-VSPACE-1.8, s=tex(STR_SIGOUT + "_3"), fontsize=12)

# Layer 4
plt.text(x=3*HSPACE-1.25, y=1.5, s=tex(STR_POT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+0.4, y=1.5, s=tex(STR_SIGOUT + "_o"), fontsize=12)
plt.text(x=3*HSPACE+2, y=-0.3, s=tex(STR_ERRFUNC + " = (" + STR_SIGOUT + "_o - " + STR_SIGOUT_DES + "_o)^2/2"), fontsize=12)

plt.show()
Note: $\weight_1$ influences $\pot_2$ and $\pot_3$ in addition to $\pot_1$ and $\pot_o$.

Partial derivative of the error with respect to the synaptic weight $\weight_4$

Recall:

$$
\begin{align}
\errfunc &= \frac12 \left( \sigout_o - \sigoutdes_o \right)^2 \tag{1} \\
\sigout_o &= \activfunc(\pot_o) \tag{2} \\
\pot_o &= \sigout_2 \weight_4 + \sigout_3 \weight_5 \tag{3}
\end{align}
$$

that is:

$$
\errfunc = \frac12 \left( \activfunc \left( \sigout_2 \weight_4 + \sigout_3 \weight_5 \right) - \sigoutdes_o \right)^2
$$

so, applying the rules for differentiating composite functions:

$$
\frac{\partial \errfunc}{\partial \weight_4}
=
\frac{\partial \pot_o}{\partial \weight_4}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$

Reminder: differentiation of composite functions (sometimes called the chain rule):

$$
\frac{\mathrm{d} y}{\mathrm{d} x} = \frac{\mathrm{d} y}{\mathrm{d} u} \cdot \frac{\mathrm{d} u}{\mathrm {d} x}
$$

From (1), (2) and (3) we deduce:

$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_4} &= \sigout_2 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o
\end{align}
$$

so the error signal is:

$$
\begin{align}
\errsig_o &= \frac{\partial \sigout_o}{\partial \pot_o} \frac{\partial \errfunc}{\partial \sigout_o} \\
          &= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$

Partial derivative of the error with respect to the synaptic weight $\weight_5$

$$
\frac{\partial \errfunc}{\partial \weight_5}
=
\frac{\partial \pot_o}{\partial \weight_5}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
$$

with:

$$
\begin{align}
\frac{\partial \pot_o}{\partial \weight_5} &= \sigout_3 \\
\frac{\partial \sigout_o}{\partial \pot_o} &= \activfunc'(\pot_o) \\
\frac{\partial \errfunc}{\partial \sigout_o} &= \sigout_o - \sigoutdes_o \\
\errsig_o &= \frac{\partial \sigout_o}{\partial \pot_o} \frac{\partial \errfunc}{\partial \sigout_o} \\
          &= \activfunc'(\pot_o) [\sigout_o - \sigoutdes_o]
\end{align}
$$

Partial derivative of the error with respect to the synaptic weight $\weight_2$

$$
\frac{\partial \errfunc}{\partial \weight_2}
=
\frac{\partial \pot_2}{\partial \weight_2}
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
$$

with:

$$
\begin{align}
\frac{\partial \pot_2}{\partial \weight_2} &= \sigout_1 \\
\frac{\partial \sigout_2}{\partial \pot_2} &= \activfunc'(\pot_2) \\
\frac{\partial \pot_o}{\partial \sigout_2} &= \weight_4 \\
\errsig_2 &= \frac{\partial \sigout_2}{\partial \pot_2} \frac{\partial \pot_o}{\partial \sigout_2} \errsig_o \\
          &= \activfunc'(\pot_2) \weight_4 \errsig_o
\end{align}
$$

Partial derivative of the error with respect to the synaptic weight $\weight_3$

$$
\frac{\partial \errfunc}{\partial \weight_3}
=
\frac{\partial \pot_3}{\partial \weight_3}
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
$$

with:

$$
\begin{align}
\frac{\partial \pot_3}{\partial \weight_3} &= \sigout_1 \\
\frac{\partial \sigout_3}{\partial \pot_3} &= \activfunc'(\pot_3) \\
\frac{\partial \pot_o}{\partial \sigout_3} &= \weight_5 \\
\errsig_3 &= \frac{\partial \sigout_3}{\partial \pot_3} \frac{\partial \pot_o}{\partial \sigout_3} \errsig_o \\
          &= \activfunc'(\pot_3) \weight_5 \errsig_o
\end{align}
$$

Partial derivative of the error with respect to the synaptic weight $\weight_1$

$$
\frac{\partial \errfunc}{\partial \weight_1}
=
\frac{\partial \pot_1}{\partial \weight_1}
\underbrace{
\frac{\partial \sigout_1}{\partial \pot_1}
\left(
\frac{\partial \pot_2}{\partial \sigout_1}
\underbrace{
\frac{\partial \sigout_2}{\partial \pot_2}
\frac{\partial \pot_o}{\partial \sigout_2}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_2}
+
\frac{\partial \pot_3}{\partial \sigout_1}
\underbrace{
\frac{\partial \sigout_3}{\partial \pot_3}
\frac{\partial \pot_o}{\partial \sigout_3}
\underbrace{
\frac{\partial \sigout_o}{\partial \pot_o}
\frac{\partial \errfunc}{\partial \sigout_o}
}_{\errsig_o}
}_{\errsig_3}
\right)
}_{\errsig_1}
$$

with:

$$
\begin{align}
\frac{\partial \pot_1}{\partial \weight_1} &= \sigout_i \\
\frac{\partial \sigout_1}{\partial \pot_1} &= \activfunc'(\pot_1) \\
\frac{\partial \pot_2}{\partial \sigout_1} &= \weight_2 \\
\frac{\partial \pot_3}{\partial \sigout_1} &= \weight_3 \\
\errsig_1 &= \frac{\partial \sigout_1}{\partial \pot_1} \left( \frac{\partial \pot_2}{\partial \sigout_1} \errsig_2 + \frac{\partial \pot_3}{\partial \sigout_1} \errsig_3 \right) \\
          &= \activfunc'(\pot_1) \left( \weight_2 \errsig_2 + \weight_3 \errsig_3 \right)
\end{align}
$$

Python implementation
# Define the activation function and its derivative
activation_function = np.tanh
d_activation_function = d_tanh

def init_weights(num_input_cells,
                 num_output_cells,
                 num_cell_per_hidden_layer,
                 num_hidden_layers=1):
    """
    The returned `weights` object is a list of weight matrices,
    where weight matrix at index $i$ represents the weights between
    layer $i$ and layer $i+1$.

    Numpy array shapes for e.g. num_input_cells=2, num_output_cells=2,
    num_cell_per_hidden_layer=3 (without taking account bias):
    - in:        (2,)
    - in+bias:   (3,)
    - w[0]:      (3,3)
    - w[0]+bias: (3,4)
    - w[1]:      (3,2)
    - w[1]+bias: (4,2)
    - out:       (2,)
    """
    # TODO:
    # - should wij be positive?
    # - is a normal law more appropriate than a uniform law?
    # - what sigma is recommended?
    W = []

    # Weights between the input layer and the first hidden layer
    W.append(np.random.uniform(low=0., high=1., size=(num_input_cells + 1, num_cell_per_hidden_layer + 1)))

    # Weights between hidden layers (if there are more than one hidden layer)
    for layer in range(num_hidden_layers - 1):
        W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_cell_per_hidden_layer + 1)))

    # Weights between the last hidden layer and the output layer
    W.append(np.random.uniform(low=0., high=1., size=(num_cell_per_hidden_layer + 1, num_output_cells)))

    return W

def evaluate_network(weights, input_signal):   # TODO: find a better name
    # Add the bias on the input layer
    input_signal = np.concatenate([input_signal, [-1]])

    assert input_signal.ndim == 1
    assert input_signal.shape[0] == weights[0].shape[0]

    # Compute the output of the first hidden layer
    p = np.dot(input_signal, weights[0])
    output_hidden_layer = activation_function(p)

    # Compute the output of the intermediate hidden layers
    # TODO: check this
    num_layers = len(weights)
    for n in range(num_layers - 2):
        p = np.dot(output_hidden_layer, weights[n + 1])
        output_hidden_layer = activation_function(p)

    # Compute the output of the output layer
    p = np.dot(output_hidden_layer, weights[-1])
    output_signal = activation_function(p)

    return output_signal

def compute_gradient():
    # TODO -- a standalone sketch of the gradient computation follows this cell
    pass

weights = init_weights(num_input_cells=2,
                       num_output_cells=2,
                       num_cell_per_hidden_layer=3,
                       num_hidden_layers=1)
print(weights)
#print(weights[0].shape)
#print(weights[1].shape)

input_signal = np.array([.1, .2])
input_signal

evaluate_network(weights, input_signal)
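compute_gradient is left as a TODO above; as a standalone illustration of the error-signal formulas derived earlier, here is a compact sketch for one hidden layer with sigmoid activations (its own weight layout, biases omitted; sizes, data and learning rate are arbitrary choices):

import numpy as np

def f(p):                           # sigmoid activation
    return 1. / (1. + np.exp(-p))

def d_f(p):                         # its derivative: f'(p) = f(p)(1 - f(p))
    s = f(p)
    return s * (1. - s)

rng = np.random.default_rng(0)
W1 = rng.uniform(size=(2, 3))       # input -> hidden weights
W2 = rng.uniform(size=(3, 1))       # hidden -> output weights
x = np.array([0.1, 0.2])            # input example
d = np.array([0.7])                 # desired output

# Forward pass
p1 = x @ W1                         # hidden potentials
y1 = f(p1)                          # hidden outputs
po = y1 @ W2                        # output potential
yo = f(po)                          # network output

# Backward pass: error signals
delta_o = d_f(po) * (yo - d)        # output neurons: f'(p_o)[y_o - d_o]
delta_1 = d_f(p1) * (W2 @ delta_o)  # hidden neurons: f'(p_i) sum_k w_ki delta_k

# Gradients and gradient-descent update
epsilon = 0.5
W2 -= epsilon * np.outer(y1, delta_o)
W1 -= epsilon * np.outer(x, delta_1)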
Miscellaneous

The MLP can approximate any continuous function with arbitrary precision, depending on the number of neurons in the hidden layer.

Weight initialisation: usually small random values.
# TODO: the difference between:
# * a cyclic ("looped") network (réseau bouclé)
# * a recurrent network (réseau récurrent)
Structure Types

Structure types are assigned by hand by ICSD curators.
# How many ternaries have been assigned a structure type?
structure_types = [line[3] for line in data if line[3] != '']
unique_structure_types = set(structure_types)
print("There are {} ICSD ternaries entries.".format(len(data)))
print("Structure types are assigned for {} entries.".format(len(structure_types)))
print("There are {} unique structure types.".format(len(unique_structure_types)))
Filter for stoichiometric compounds only:
# `compositions` and `Composition` come from earlier cells of the notebook
# (pymatgen Composition objects built from the ICSD formulas).
def is_stoichiometric(composition):
    return np.all(np.mod(list(composition.values()), 1) == 0)

stoichiometric_compositions = [c for c in compositions if is_stoichiometric(c)]
print("Number of stoichiometric compositions: {}".format(len(stoichiometric_compositions)))

ternaries = set(c.formula for c in stoichiometric_compositions)
len(ternaries)

data_stoichiometric = [x for x in data if is_stoichiometric(Composition(x[2]))]

from collections import Counter
struct_type_freq = Counter(x[3] for x in data_stoichiometric if x[3] != '')
plt.loglog(range(1, len(struct_type_freq) + 1),
           sorted(struct_type_freq.values(), reverse=True), 'o')

sorted(struct_type_freq.items(), key=lambda x: x[1], reverse=True)

len(set([x[2] for x in data if x[3] == 'Perovskite-GdFeO3']))

uniq_phases = set()
for row in data_stoichiometric:
    spacegroup, formula, struct_type = row[1:4]
    phase = (spacegroup, Composition(formula).formula, struct_type)
    uniq_phases.add(phase)

uniq_struct_type_freq = Counter(x[2] for x in uniq_phases if x[2] != '')
uniq_struct_type_freq_sorted = sorted(uniq_struct_type_freq.items(), key=lambda x: x[1], reverse=True)
plt.loglog(range(1, len(uniq_struct_type_freq_sorted) + 1),
           [x[1] for x in uniq_struct_type_freq_sorted], 'o')

uniq_struct_type_freq_sorted

# Print the ten most common structure types with some example formulas
for struct_type, freq in uniq_struct_type_freq_sorted[:10]:
    print("{} : {}".format(struct_type, freq))
    formulas_of_type = [p[1] for p in uniq_phases if p[2] == struct_type]
    fmt = " ".join(["{:14}"] * 5)
    print(fmt.format(*formulas_of_type[0:5]))
    print(fmt.format(*formulas_of_type[5:10]))
    print(fmt.format(*formulas_of_type[10:15]))
    print(fmt.format(*formulas_of_type[15:20]))
Long Formulas
# What are the longest formulas?
for formula in sorted(formulas, key=lambda x: len(x), reverse=True)[:20]:
    print(formula)
Two key insights:

1. Just because there are three elements in the formula doesn't mean the compound is fundamentally a ternary. There are doped binaries which masquerade as ternaries. And there are doped ternaries which masquerade as quaternaries, or even quintenaries. Because I only asked for compositions with 3 elements, this data is missing.
2. ICSD has strategically placed parentheses in the formulas which give hints as to logical groupings. For example: (Ho1.3 Ti0.7) ((Ti0.64 Ho1.36) O6.67) is in fact in the pyrochlore family, A2B2O7.

Intermetallics

How many intermetallics does the ICSD database contain?
# Element and Composition are pymatgen classes imported earlier in the notebook
def filter_in_set(compound, universe):
    return all((e in universe) for e in Composition(compound))

transition_metals = [e for e in Element if e.is_transition_metal]
tm_ternaries = [c for c in formulas if filter_in_set(c, transition_metals)]
print("Number of intermetallics:", len(tm_ternaries))

unique_tm_ternaries = set([Composition(c).formula for c in tm_ternaries])
print("Number of unique intermetallics:", len(unique_tm_ternaries))

unique_tm_ternaries
A repository: a group of linked commits <!-- offline: ![](files/fig/threecommits.png) --> <img src="https://raw.github.com/fperez/reprosw/master/fig/threecommits.png" > Note: these form a Directed Acyclic Graph (DAG), with nodes identified by their hash. A hash: a fingerprint of the content of each commit and its parent
import hashlib  # the old `sha` module is Python 2 only; hashlib is its replacement

# Our first commit
data1 = 'This is the start of my paper2.'
meta1 = 'date: 1/1/12'
hash1 = hashlib.sha1((data1 + meta1).encode()).hexdigest()
print('Hash:', hash1)

# Our second commit, linked to the first
data2 = 'Some more text in my paper...'
meta2 = 'date: 1/2/12'
# Note we add the parent hash here!
hash2 = hashlib.sha1((data2 + meta2 + hash1).encode()).hexdigest()
print('Hash:', hash2)
And this is pretty much the essence of Git! First things first: git must be configured before first use The minimal amount of configuration for git to work without pestering you is to tell it who you are:
%%bash
git config --global user.name "Fernando Perez"
git config --global user.email "[email protected]"
And how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files):
%%bash
# Put here your preferred editor. If this is not set, git will honor
# the $EDITOR environment variable
git config --global core.editor /usr/bin/jed  # my lightweight unix editor

# On Windows Notepad will do in a pinch, I recommend Notepad++ as a free alternative
# On the mac, you can set nano or emacs as a basic option

# And while we're at it, we also turn on the use of color, which is very useful
git config --global color.ui "auto"
Set git to use the credential memory cache so we don't have to retype passwords too frequently. On Linux, you should run the following (note that this requires git version 1.7.10 or newer):
%%bash
git config --global credential.helper cache

# Set the cache to timeout after 2 hours (setting is in seconds)
git config --global credential.helper 'cache --timeout=7200'
Github offers in its help pages instructions on how to configure the credentials helper for Mac OSX and Windows. Stage 1: Local, single-user, linear workflow Simply type git to see a full list of all the 'core' commands. We'll now go through most of these via small practical exercises:
!git
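The exercise cells that create and populate the working repository are not reproduced in this extract; a minimal sketch of the usual first steps (file name and commit message hypothetical, matching the test repository used below):

%%bash
mkdir test
cd test
git init                          # create an empty repository
echo "My first bit of text" > file1.txt
git add file1.txt                 # stage the new file
git commit -m "First commit"      # record it in the repository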
And git rm works in a similar fashion.

Exercise

Add a new file file2.txt, commit it, make some changes to it, commit them again, and then remove it (and don't forget to commit this last step!).

Local user, branching

What is a branch? Simply a label for the 'current' commit in a sequence of ongoing commits:

<!-- offline: ![](files/fig/masterbranch.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/masterbranch.png" >

There can be multiple branches alive at any point in time; the working directory is the state of the commit that a special pointer called HEAD points to. In this example there are two branches, master and testing, and testing is the currently active branch since it's what HEAD points to:

<!-- offline: ![](files/fig/HEAD_testing.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/HEAD_testing.png" >

Once new commits are made on a branch, HEAD and the branch label move with the new commits:

<!-- offline: ![](files/fig/branchcommit.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/branchcommit.png" >

This allows the history of both branches to diverge:

<!-- offline: ![](files/fig/mergescenario.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/mergescenario.png" >

But based on this graph structure, git can compute the necessary information to merge the divergent branches back and continue with a unified line of development:

<!-- offline: ![](files/fig/mergeaftermath.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/mergeaftermath.png" >

Let's now illustrate all of this with a concrete example. Let's get our bearings first:
%%bash
cd test
git status
ls
git remote -v
day1/Version Control.ipynb
AstroHackWeek/AstroHackWeek2014
bsd-3-clause
Since the above cell didn't produce any output after the git remote -v call, it means we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting out at Github and don't already have a repo on a local computer. We can now follow the instructions from the next page:
%%bash cd test git remote add origin https://github.com/fperez/test.git git push -u origin master
day1/Version Control.ipynb
AstroHackWeek/AstroHackWeek2014
bsd-3-clause
We can now see this repository publicly on github. Let's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory...
%%bash # Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer git clone https://github.com/fperez/test.git test2 cd test2 pwd git remote -v
day1/Version Control.ipynb
AstroHackWeek/AstroHackWeek2014
bsd-3-clause
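Before looking at conflict resolution, here is a compact sketch of the branching workflow pictured earlier (the branch and file names are illustrative, and it assumes the test repository from above):

%%bash
cd test
git branch testing               # create the new label
git checkout testing             # move HEAD to it
echo "experimental" > exp.txt
git add exp.txt
git commit -m "Experiment on the testing branch"
git checkout master              # return to master
git merge testing                # reunify the divergent histories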
Note: While it's a good idea to understand the basics of fixing merge conflicts by hand, in some cases you may find the use of an automated tool useful. Git supports multiple merge tools: a merge tool is a piece of software that conforms to a basic interface and knows how to merge two files into a new one. Since these are typically graphical tools, there are various tools to choose from on the different operating systems, and as long as they obey a basic command structure, git can work with any of them. Collaborating on github with a small team Single remote with shared access: we are going to set up a shared collaboration with one partner (the person sitting next to you). This will show the basic workflow of collaborating on a project with a small team where everyone has write privileges to the same repository. Note for SVN users: this is similar to the classic SVN workflow, with the distinction that commit and push are separate steps. SVN, having no local repository, commits directly to the shared central resource, so to a first approximation you can think of svn commit as being synonymous with git commit; git push. We will have two people, let's call them Alice and Bob, sharing a repository. Alice will be the owner of the repo and she will give Bob write privileges. We begin with a simple synchronization example, much like we just did above, but now between two people instead of one person. Otherwise it's the same: Bob clones Alice's repository. Bob makes changes to a file and commits them locally. Bob pushes his changes to github. Alice pulls Bob's changes into her own repository. Next, we will have both parties each make non-conflicting changes, and commit them locally. Then both try to push their changes: Alice adds a new file, alice.txt to the repo and commits. Bob adds bob.txt and commits. Alice pushes to github. Bob tries to push to github. What happens here? The problem is that Bob's changes create a commit that conflicts with Alice's, so git refuses to apply them. It forces Bob to first do the merge on his machine, so that if there is a conflict in the merge, Bob deals with the conflict manually (git could try to do the merge on the server, but in that case if there's a conflict, the server repo would be left in a conflicted state without a human to fix things up). The solution is for Bob to first pull the changes (pull in git is really fetch+merge), and then push again (see the short sketch after the video below). Full-contact github: distributed collaboration with large teams Multiple remotes and merging based on a pull request workflow: this is beyond the scope of this brief tutorial, so we'll simply discuss very briefly how it works, illustrating it with the activity on the IPython github repository. Other useful commands show reflog rebase tag Git resources Introductory materials There are lots of good tutorials and introductions for Git, which you can easily find yourself; this is just a short list of things I've found useful. For a beginner, I would recommend the following 'core' reading list, and below I mention a few extra resources: The smallest, and in the style of this tutorial: git - the simple guide contains 'just the basics'. Very quick read. The concise Git Reference: compact but with all the key ideas. If you only read one document, make it this one. In my own experience, the most useful resource was Understanding Git Conceptually. 
Git has a reputation for being hard to use, but I have found that with a clear view of what is actually a very simple internal design, its behavior is remarkably consistent, simple and comprehensible. For more detail, see the start of the excellent Pro Git online book, or similarly the early parts of the Git community book. Pro Git's chapters are very short and well illustrated; the community book tends to have more detail and has nice screencasts at the end of some sections. If you are really impatient and just want a quick start, this visual git tutorial may be sufficient. It is nicely illustrated with diagrams that show what happens on the filesystem. For Windows users, an Illustrated Guide to Git on Windows is useful in that it also contains some information about handling SSH (necessary to interface with git hosted on remote servers when collaborating) as well as screenshots of the Windows interface. Cheat sheets : Two different cheat sheets in PDF format that can be printed for frequent reference. Beyond the basics At some point, it will pay off to understand how git itself is built. These two documents, written in a similar spirit, are probably the most useful descriptions of the Git architecture short of diving into the actual implementation. They walk you through how you would go about building a version control system with a little story. By the end you realize that Git's model is almost an inevitable outcome of the proposed constraints: The Git parable by Tom Preston-Werner. Git foundations by Matthew Brett. Git ready : A great website of posts on specific git-related topics, organized by difficulty. QGit: an excellent Git GUI : Git ships by default with gitk and git-gui, a pair of Tk graphical clients to browse a repo and to operate in it. I personally have found qgit to be nicer and easier to use. It is available on modern linux distros, and since it is based on Qt, it should run on OSX and Windows. Git Magic : Another book-size guide that has useful snippets. The learning center at Github : Guides on a number of topics, some specific to github hosting but much of it of general value. A port of the Hg book's beginning : The Mercurial book has a reputation for clarity, so Carl Worth decided to port its introductory chapter to Git. It's a nicely written intro, which is possible in good measure because of how similar the underlying models of Hg and Git ultimately are. Intermediate tips : A set of tips that contains some very valuable nuggets, once you're past the basics. Finally, if you prefer a video presentation, this 1-hour tutorial prepared by the GitHub educational team will walk you through the entire process:
from IPython.display import YouTubeVideo YouTubeVideo('U8GBXvdmHT4')
day1/Version Control.ipynb
AstroHackWeek/AstroHackWeek2014
bsd-3-clause
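Returning to the Alice-and-Bob scenario above, the resolution on Bob's side is a sketch like the following (the directory and remote follow the test2 clone used earlier; in the real scenario they would be Bob's clone of Alice's repository):

%%bash
cd test2
git pull origin master    # fetch + merge Alice's commits first
git push origin master    # the push now succeeds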
Deciding a model The first thing once we've got some data is to decide which model generated the data. In this case we decide that the height of Python developers comes from a normal distribution. A normal distribution has two parameters, the mean $\mu$ and the standard deviation $\sigma$ (or the variance $\sigma^2$, which is equivalent, as it's just the square of the standard deviation). Deciding which model to use can be obvious in a few cases, but it'll be the most complex part of the statistical inference problem in many others. Some of the obvious cases are: * The Normal distribution when modelling natural phenomena like human heights. * The Beta distribution when modelling probability distributions. * The Poisson distribution when modelling the frequency of events occurring. In many cases we will use a combination of different distributions to explain how our data was generated. Each of these distributions has parameters: $\alpha$ and $\beta$ for the Beta distribution, $\lambda$ for the Poisson, or $\mu$ and $\sigma$ for the normal distribution of our example. The goal of inference is to find the best values for these parameters. Evaluating a set of parameters Before trying to find the best parameters, let's choose some arbitrary parameters, and let's evaluate them. For example, we can choose the values $\mu=175$ and $\sigma=5$. And to evaluate them, we'll use the Bayes formula: $$P(\theta|x) = \frac{P(x|\theta) \cdot P(\theta)}{P(x)}$$ Given a model, a normal distribution in this case, $P(\theta|x)$ is the probability of the parameters $\theta$ (which are $\mu$ and $\sigma$ in this case) given the data $x$. The higher the probability of the parameters given the data, the better they are. So, this value is the score we will use to decide which are the best parameters $\mu$ and $\sigma$ for our data $x$, assuming the data comes from a normal distribution. Parts of the problem To recap, we have: * Data $x$: [183, 168, 177, 170, 175, 177, 178, 166, 174, 178] * A model: the normal distribution * The parameters of the model: $\mu$ and $\sigma$ And we're interested in finding the best values of $\mu$ and $\sigma$ for the data $x$, for example $\mu=175$ and $\sigma=5$. Bayes formula Back to Bayes formula for conditional probability: $$P(\theta|x) = \frac{P(x|\theta) \cdot P(\theta)}{P(x)}$$ We already mentioned that $P(\theta|x)$ is the probability of the parameter values we're checking given the data $x$, always assuming our data is generated by the model we decided on, the normal distribution. This is the value we're interested in maximizing. In Bayesian terminology, $P(\theta|x)$ is known as the posterior. The posterior is a function of three other values. $P(x|\theta)$: the likelihood, which is the probability of obtaining the data $x$ if the parameters $\theta$ were the values we're checking (e.g. $\mu=175$ and $\sigma=5$), always assuming our data is generated by our model, the normal distribution. $P(\theta)$: the prior, which is our knowledge about the parameters before seeing any data. $P(x)$: the evidence, which is the probability of the data, not given any specific set of parameters $\theta$, but given the model we chose, the normal distribution in the example. Likelihood The likelihood is the probability of obtaining the data $x$ from the chosen model (e.g. the normal distribution) and for a specific set of parameters $\theta$ (e.g. $\mu=175$ and $\sigma=5$). 
It is often represented as $\mathcal{L}(\theta|x)$ (note that the order of $\theta$ and $x$ is reversed relative to the probability notation). In the case of a normal distribution, the formula for its probability density at $x$ is: $$P(x|\theta) = P(x| \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$ If we plot it, we obtain the famous normal bell curve (we use $\mu=0$ and $\sigma=1$ in the plot):
import numpy import scipy.stats from matplotlib import pyplot mu = 0. sigma = 1. x = numpy.linspace(-10., 10., 201) likelihood = scipy.stats.norm.pdf(x, mu, sigma) pyplot.plot(x, likelihood) pyplot.xlabel('x') pyplot.ylabel('Likelihood') pyplot.title('Normal distribution with $\mu=0$ and $\sigma=1$');
docs/Bayesian inference tutorial.ipynb
datapythonista/datapythonista.github.io
apache-2.0
Following the example, we wanted to score how good the parameters $\mu=175$ and $\sigma=5$ are for our data. So far we've chosen these parameters arbitrarily, but we'll choose them in a smarter way later on. If we take the probability density function (p.d.f.) of the normal distribution and compute it for the first data point of $x$, 183, we have: $$P(x| \mu, \sigma) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$ where $\mu=175$, $\sigma=5$ and $x=183$, so: $$P(x=183| \mu=175, \sigma=5) = \frac{1}{\sqrt{2 \cdot \pi \cdot 5^2}} \cdot e^{-\frac{(183 - 175)^2}{2 \cdot 5^2}}$$ If we do the math:
import math 1. / math.sqrt(2 * math.pi * (5 **2)) * math.exp(-((183 - 175) ** 2) / (2 * (5 ** 2)))
docs/Bayesian inference tutorial.ipynb
datapythonista/datapythonista.github.io
apache-2.0
This is the probability density at 183 for a normal distribution with mean 175 and standard deviation 5. With scipy we can easily compute the likelihood of all values in our data:
import scipy.stats mu = 175 sigma = 5 x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178] scipy.stats.norm.pdf(x, mu, sigma)
docs/Bayesian inference tutorial.ipynb
datapythonista/datapythonista.github.io
apache-2.0
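One step worth making explicit, though it is not part of the original text: assuming the observations are independent, the likelihood of the whole dataset is the product of the individual densities, usually computed as a sum of log-densities for numerical stability:

import numpy
import scipy.stats

mu = 175
sigma = 5
x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
# log P(x | mu, sigma) for the full dataset
log_likelihood = numpy.sum(scipy.stats.norm.logpdf(x, mu, sigma))
print(log_likelihood)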
Prior The prior is our knowledge of the parameters before we observe the data. It's probably the most subjective part of Bayesian inference, and different approaches can be used. We can use informed priors, and try to give the model as much information as possible. Or use uninformed priors, and let the process find the parameters using mainly the data. In our case, we can start by thinking about the possible values for a normal distribution. For the mean, the range is between $-\infty$ and $\infty$. But we can of course do better than this. We're interested in the mean height of Python developers. It's easy to see that the minimum possible height is $0$. And for the maximum, we can start by considering the maximum known human height: 272 cm, the measured height of Robert Pershing Wadlow, born in 1918. We can be very confident that the mean height of Python developers is in the range $0$ to $272$. So, a first option for an uninformed prior could be all the values in this range with equal probability.
import numpy import scipy.stats from matplotlib import pyplot mean_height = numpy.linspace(0, 272, 273) probability = scipy.stats.uniform.pdf(mean_height, 0, 272) pyplot.plot(mean_height, probability) pyplot.xlabel('Mean height') pyplot.ylabel('Probability') pyplot.title('Uninformed prior for Python developers height');
docs/Bayesian inference tutorial.ipynb
datapythonista/datapythonista.github.io
apache-2.0
This could work, but we can do better. Having just 10 data points, the amount of information we can learn from them is quite limited. And we may waste these 10 data points discovering something we already know: that the probability of the mean height being 0 is nil, as is the probability of the maximum ever observed height, and that a value like 175 cm is much more probable than a value like 120 cm. If we know all this before observing any data, why not use it? This is exactly what a prior is. The tricky part is defining the exact prior. In this case, we don't know the mean height of Python developers, but we can check the mean height of the world population, which is around 165 cm. This doesn't need to be the value we're looking for. It's known that there are more male than female Python programmers, and males are taller on average, so the value we're looking for will probably be higher. Also, height changes from country to country, and Python programmers are not equally distributed around the world. But we will use our data to try to find the value that contains all these biases. The prior is just a starting point that will help find the value faster. So, let's use the mean of the world population as the mean of our prior, and let's take the standard deviation of the world population, 7 cm, and double it. Multiplying it by 2 is arbitrary, but it makes our prior a bit less informed. As mentioned before, choosing a prior is quite subjective. Note that it's not necessary to use a normal distribution for the prior. We were considering a uniform distribution before. But in this case it makes sense, as we're fairly sure that the mean we're looking for will be close to the mean of the human population.
import numpy import scipy.stats from matplotlib import pyplot world_height_mean = 165 world_height_standard_deviation = 7 mean_height = numpy.linspace(0, 272, 273) probability = scipy.stats.norm.pdf(mean_height, world_height_mean, world_height_standard_deviation * 2) pyplot.plot(mean_height, probability) pyplot.xlabel('Mean height') pyplot.ylabel('Probability') pyplot.title('Informed prior for Python developers height');
docs/Bayesian inference tutorial.ipynb
datapythonista/datapythonista.github.io
apache-2.0
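To tie the pieces together, here is a sketch (not from the original text) that evaluates the unnormalized posterior over a grid of candidate means, combining the likelihood of the data with the informed prior above:

import numpy
import scipy.stats

x = [183, 168, 177, 170, 175, 177, 178, 166, 174, 178]
sigma = 5  # held fixed for simplicity
mean_height = numpy.linspace(150, 200, 501)

# log-likelihood of the data for each candidate mean (independent observations)
log_likelihood = numpy.array(
    [scipy.stats.norm.logpdf(x, mu, sigma).sum() for mu in mean_height])
# log of the informed prior defined above
log_prior = scipy.stats.norm.logpdf(mean_height, 165, 14)
log_posterior = log_likelihood + log_prior  # unnormalized

print(mean_height[numpy.argmax(log_posterior)])  # MAP estimate of the mean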
Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information. CNN Fox News Time Upload the image for the visualization to this directory and display the image inline in this notebook.
from IPython.display import Image

# Add your filename and uncomment the following line:
Image(filename='StockPicture.png')
assignments/assignment04/TheoryAndPracticeEx02.ipynb
LimeeZ/phys292-2015-work
mit
Cool plots!
def awesome_settings(): # awesome plot options sns.set_style("white") sns.set_style("ticks") sns.set_context("paper", font_scale=2) sns.set_palette(sns.color_palette('Set2')) # image stuff plt.rcParams['figure.figsize'] = (12.0, 6.0) plt.rcParams['savefig.dpi'] = 60 plt.rcParams['lines.linewidth'] = 3 return %config InlineBackend.figure_format='retina' awesome_settings()
Dia2/2_Intro_Matplotlib.ipynb
beangoben/quantum_solar
mit
1 Creating plots (plt.plot) A "complex" example Creating plots is very easy in matplotlib; here's a complicated example... if you understand this piece of code, you can understand the rest.
# data
x = np.linspace(0.0, 2.0, 40)
y1 = np.sin(2*np.pi*x)
y2 = 0.5*x+0.1
y3 = 0.5*x**2+0.5*x+0.1

# plots
plt.plot(x, y1, '--', label='Sine')
plt.plot(x, y2, '-', label='Line')
plt.plot(x, y3, '.', label='Quadratic')

# style
plt.xlabel('x')
plt.ylabel('y')
plt.title('Some little plots')
plt.legend(loc='best')
sns.despine()
plt.show()
Dia2/2_Intro_Matplotlib.ipynb
beangoben/quantum_solar
mit
Now piece by piece. We can use the function np.linspace to create values in a range; for example, if we want 100 numbers between 0 and 10, see the short sketch after the histogram example below. We can also plot two things at the same time. What if we want to distinguish each line? We use legend(); we also have to add a name to each plot. We can do more things as well, such as drawing only the points, or the lines together with the points, using linestyle. Activity: make lots of plots. Plot the following curves, using x in the range $[-2,2]$: $e^{-x^2}$, $x^2$, $\cos(2 x)$. Give each curve a name; use legends, titles and the rest of the plot information. Beyond that, we can encode more information, for example giving each point its own colour, or different sizes. Histograms (plt.hist) Histograms show us data distributions, the shape of the data; they show us the number of data points of different kinds:
mu, sigma = 100, 15
x = mu + sigma*np.random.randn(10000)
n, bins, patches = plt.hist(x, 50, density=True)  # density replaces the removed 'normed' argument
plt.ylabel('Percentage')
plt.xlabel('IQ')
plt.title('IQ distribution across 10k people')
plt.xlim([0, 200])
sns.despine()
plt.show()
Dia2/2_Intro_Matplotlib.ipynb
beangoben/quantum_solar
mit
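As promised above, a minimal sketch of the np.linspace, legend and line-style usage described in the text (all values are illustrative):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)                  # 100 numbers between 0 and 10
plt.plot(x, np.sin(x), '-', label='sine')    # a line
plt.plot(x, np.cos(x), 'o', label='cosine')  # points only
plt.legend(loc='best')                       # distinguish each curve
plt.show()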
This crazy try-except construction is our way of making sure the notebooks will work when completed without actually providing complete code. You can either write your code directly in the except block, or delete the try, exec and except lines entirely (remembering to unindent the remaining lines in that case, because python).
try:
    exec(open('Solution/goals.py').read())
except IOError:
    my_goals = REPLACE_WITH_YOUR_SOLUTION()
tutorials/Week1/GithubAndGoals.ipynb
drphilmarshall/StatisticalMethods
gpl-2.0
This cell just prints out the string my_goals.
print(my_goals)
tutorials/Week1/GithubAndGoals.ipynb
drphilmarshall/StatisticalMethods
gpl-2.0
1. Load trajectory Read the heat current from a simple column-formatted file. The desired columns are selected based on their header (e.g. with the LAMMPS format). For other input formats see the corresponding example.
jfile = st.i_o.TableFile('./data/Silica.dat', group_vectors=True) jfile.read_datalines(start_step=0, NSTEPS=0, select_ckeys=['flux1'])
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
2. Heat Current Define a HeatCurrent from the trajectory, with the correct parameters.
DT_FS = 1.0 # time step [fs] TEMPERATURE = 1065.705630 # temperature [K] VOLUME = 3130.431110818 # volume [A^3] j = st.HeatCurrent(jfile.data['flux1'], UNITS= 'metal', DT_FS=DT_FS, TEMPERATURE=TEMPERATURE, VOLUME=VOLUME) # trajectory f = plt.figure() ax = plt.plot(j.timeseries()/1000., j.traj); plt.xlim([0, 1.0]) plt.xlabel(r'$t$ [ps]') plt.ylabel(r'$J$ [eV A/ps]');
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
Compute the Power Spectral Density and filter it for visualization.
# Periodogram with given filtering window width ax = j.plot_periodogram(PSD_FILTER_W=0.5, kappa_units=True) print(j.Nyquist_f_THz) plt.xlim([0, 50]) ax[0].set_ylim([0, 150]); ax[1].set_ylim([12, 18]);
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
3. Resampling If the Nyquist frequency is very high (i.e. the sampling time is small), such that the log-spectrum goes to low values, you may want to resample your time series to obtain a maximum frequency $f^*$. Before performing that operation, the time series is automatically filtered to reduce the amount of aliasing introduced. Ideally you do not want to go too low in $f^*$. In an intermediate region the results should not change. To perform resampling you can choose the resampling frequency $f^*$ or the resampling step (TSKIP). If you choose $f^*$, the code will try to choose the closest value allowed. The resulting PSD is visualized to ensure that the low-frequency region is not affected.
FSTAR_THZ = 28.0 jf,ax = j.resample(fstar_THz=FSTAR_THZ, plot=True, freq_units='thz') plt.xlim([0, 80]) ax[1].set_ylim([12,18]); ax = jf.plot_periodogram(PSD_FILTER_W=0.1) ax[1].set_ylim([12, 18]);
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
4. Cepstral Analysis Perform cepstral analysis. The code will: 1. compute the parameters describing the theoretical distribution of the PSD; 2. compute the cepstral coefficients by Fourier-transforming the log(PSD); 3. apply the Akaike Information Criterion; 4. return the resulting $\kappa$.
jf.cepstral_analysis()

# Cepstral coefficients
print('c_k = ', jf.dct.logpsdK)
ax = jf.plot_ck()
ax.set_xlim([0, 50])
ax.set_ylim([-0.5, 0.5])
ax.grid();

# AIC function
f = plt.figure()
plt.plot(jf.dct.aic, '.-', c='C0')  # first color of the default cycle
plt.xlim([0, 200])
plt.ylim([2800, 3000]);

print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin))
print('AIC_min = {:f}'.format(jf.dct.aic_min))
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
Plot the thermal conductivity $\kappa$ as a function of the cutoff $P^*$
# L_0 as a function of cutoff K ax = jf.plot_L0_Pstar() ax.set_xlim([0, 200]) ax.set_ylim([12.5, 14.5]); print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin)) print('AIC_min = {:f}'.format(jf.dct.aic_min)) # kappa as a function of cutoff K ax = jf.plot_kappa_Pstar() ax.set_xlim([0,200]) ax.set_ylim([0, 5.0]); print('K of AIC_min = {:d}'.format(jf.dct.aic_Kmin)) print('AIC_min = {:f}'.format(jf.dct.aic_min))
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
You can now visualize the filtered PSD...
# filtered log-PSD ax = j.plot_periodogram(0.5, kappa_units=True) ax = jf.plot_periodogram(0.5, axes=ax, kappa_units=True) ax = jf.plot_cepstral_spectrum(axes=ax, kappa_units=True) ax[0].axvline(x = jf.Nyquist_f_THz, ls='--', c='r') ax[1].axvline(x = jf.Nyquist_f_THz, ls='--', c='r') plt.xlim([0., 50.]) ax[1].set_ylim([12,18]) ax[0].legend(['original', 'resampled', 'cepstrum-filtered']) ax[1].legend(['original', 'resampled', 'cepstrum-filtered']);
examples/example_cepstrum_singlecomp_silica.ipynb
lorisercole/thermocepstrum
gpl-3.0
Lists are mutable while strings are immutable. We can never change a string, only reassign it to something else.
S = 'abc'
#S[1] = 'z'  # <== Doesn't work! Strings are immutable
L = ['a', 'b', 'c']
L[1] = 'z'
print(L)
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
Some common list procedures: reduce converts a sequence (e.g. a list) into a single element (examples: sum, mean); map applies some function to each element of a sequence (examples: making every element in a list positive, capitalizing all elements of a list); filter selects some elements of a sequence according to some condition (examples: selecting only positive numbers from a list, selecting only elements of a list of strings that have length greater than 10). A short sketch of all three appears at the end of this section. Everything in Python is an object. Think of an object as the underlying data. Objects have individuality. For example,
a = 23 b = 23 a is b list1 = [1,2,3] list2 = [1,2,3] list1 is list2
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
list1 and list2 are equivalent (same values) but not identical (same object). In order to make these two lists identical we can alias the object.
list2 = list1 list1 is list2
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
Now both names/variables point at the same object (reference the same object).
list1[0] = 1234
print(list1)
print(list2)

# Back to the strings:
b = 'abc'
a = b
a is b
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
Let's try to change b by assigning to a (they reference the same object after all)
a = 'xyz'
print(a)
print(b)
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
What happened is that we have reassigned a to a new object; that is, they no longer point at the same object.
a is b id(b)
python-club/notebooks/python-club-10.ipynb
wtsi-medical-genomics/team-code
gpl-2.0
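To close the section, here is the reduce/map/filter vocabulary from earlier as a minimal sketch (Python 3 syntax, toy data):

nums = [-3, 1, -4, 1, 5]

print(sum(nums))                            # reduce: collapse a sequence into one value
print(list(map(abs, nums)))                 # map: apply a function to each element
print(list(filter(lambda n: n > 0, nums)))  # filter: keep elements meeting a condition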
Tokens A token is the unit that counts as a single word when building the vocabulary from a document. It can be controlled with arguments such as analyzer, tokenizer, and token_pattern. One way to guess which language a document is written in is to look at token frequencies: for example, take the most frequent character to be 'e' and then pattern-match what follows.
vect = CountVectorizer(analyzer="char").fit(corpus)  # each single character becomes a vocabulary entry; the default analyzer is "word", but "char" can be used
vect.vocabulary_

import nltk
nltk.download("punkt")
vect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus)
vect.vocabulary_

vect = CountVectorizer(token_pattern=r"t\w+").fit(corpus)
vect.vocabulary_
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
kimkipyo/dss_git_kkp
mit
Frequency You can also build the vocabulary based on the number of times a token appears in the documents, using the max_df and min_df arguments. Tokens whose frequency exceeds the value given by max_df, or falls below the value given by min_df, are ignored. An integer argument means a count; a floating-point argument means a proportion.
vect = CountVectorizer(max_df=4, min_df=2).fit(corpus) vect.vocabulary_, vect.stop_words_ vect.transform(corpus).toarray() vect.transform(corpus).toarray().sum(axis=0)
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
kimkipyo/dss_git_kkp
mit
TF-IDF TF-IDF (Term Frequency – Inverse Document Frequency) encoding does not use raw word counts; words that appear in every document are considered poor discriminators between documents, so their weight is reduced. Concretely, for a document $d$ and a term $t$ it is computed as: $$ \text{tf-idf}(d, t) = \text{tf}(d, t) \cdot \text{idf}(d, t) $$ where $\text{tf}(d, t)$ is the term frequency and $\text{idf}(d, t)$ is the inverse document frequency, $$ \text{idf}(d, t) = \log \dfrac{n_d}{1 + \text{df}(t)} $$ with $n_d$ the total number of documents and $\text{df}(t)$ the number of documents containing the term $t$. The 1 is added for smoothing, and the log is taken to scale down values that would otherwise grow too large. When $\text{df}(t)$ is large, the idf becomes small. The idf factor can shrink some weights and amplify others.
from sklearn.feature_extraction.text import TfidfVectorizer tfidv = TfidfVectorizer().fit(corpus) tfidv.transform(corpus).toarray()
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
kimkipyo/dss_git_kkp
mit
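To connect the formula to concrete numbers, a toy calculation (all values are illustrative; note that scikit-learn's TfidfVectorizer uses a slightly different smoothed idf plus normalization, so its output will not match this exactly):

import numpy as np

n_d = 4    # total number of documents (toy value)
df_t = 2   # number of documents containing term t (toy value)
idf = np.log(n_d / (1 + df_t))  # idf(d, t) as defined above
tf = 3     # term frequency of t in one document (toy value)
print(tf * idf)                 # tf-idf(d, t)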
Hashing Trick CountVectorizer performs all of its work in memory, so as the amount of data grows it becomes slow or impossible to run. In that case, HashingVectorizer uses the hashing trick to reduce memory usage and execution time. In practice, however, it is used less often.
from sklearn.datasets import fetch_20newsgroups twenty = fetch_20newsgroups() len(twenty.data) %time CountVectorizer().fit(twenty.data).transform(twenty.data) from sklearn.feature_extraction.text import HashingVectorizer hv = HashingVectorizer(n_features=10) %time hv.transform(twenty.data)
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
kimkipyo/dss_git_kkp
mit
import json
import string
import urllib2  # Python 2; on Python 3 use urllib.request
from konlpy.utils import pprint
from konlpy.tag import Hannanum

hannanum = Hannanum()
req = urllib2.Request("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/")
opener = urllib2.build_opener()
f = opener.open(req)
notebook = json.loads(f.read())  # renamed so the json module is not shadowed
cell = ["\n".join(c["source"]) for c in notebook["cells"] if c["cell_type"] == u"markdown"]
docs = [w for w in hannanum.nouns(" ".join(cell))
        if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))]

vect = CountVectorizer().fit(docs)
count = vect.transform(docs).toarray().sum(axis=0)
plt.bar(range(len(count)), count)
plt.show()
pprint(zip(vect.get_feature_names(), count))
통계, 머신러닝 복습/160615수_16일차_문서 전처리 Text Preprocessing/4.문서 전처리.ipynb
kimkipyo/dss_git_kkp
mit
The code in dataset_fn will be invoked on the input device, which is usually the CPU, on each of the worker machines. Model construction and compiling Now, you will create a tf.keras.Model with the APIs of choice (a trivial tf.keras.models.Sequential model is being used as a demonstration here), followed by a Model.compile call to incorporate components such as optimizer, metrics, or parameters such as steps_per_execution:
# TODO with strategy.scope(): model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)]) model.compile(tf.keras.optimizers.SGD(), loss='mse', steps_per_execution=10)
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Then we create the training dataset wrapped in a dataset_fn:
def dataset_fn(_): raw_dataset = tf.data.Dataset.from_tensor_slices(examples) # TODO train_dataset = raw_dataset.map( lambda x: ( {"features": feature_preprocess_stage(x["features"])}, label_preprocess_stage(x["label"]) )).shuffle(200).batch(32).repeat() return train_dataset
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Build the model Second, we create the model and other objects. Make sure to create all variables under strategy.scope.
# These variables created under the `strategy.scope` will be placed on parameter # servers in a round-robin fashion. with strategy.scope(): # Create the model. The input needs to be compatible with KPLs. # TODO model_input = keras.layers.Input( shape=(3,), dtype=tf.int64, name="model_input") emb_layer = keras.layers.Embedding( input_dim=len(feature_lookup_layer.get_vocabulary()), output_dim=20) emb_output = tf.reduce_mean(emb_layer(model_input), axis=1) dense_output = keras.layers.Dense(units=1, activation="sigmoid")(emb_output) model = keras.Model({"features": model_input}, dense_output) optimizer = keras.optimizers.RMSprop(learning_rate=0.1) accuracy = keras.metrics.Accuracy()
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Then we create a per-worker dataset and an iterator. In the per_worker_dataset_fn below, wrapping the dataset_fn into strategy.distribute_datasets_from_function is recommended to allow efficient prefetching to GPUs seamlessly.
@tf.function def per_worker_dataset_fn(): return strategy.distribute_datasets_from_function(dataset_fn) # TODO per_worker_dataset = coordinator.create_per_worker_dataset(per_worker_dataset_fn) per_worker_iterator = iter(per_worker_dataset)
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Here is how you can fetch the result of a RemoteValue:
# TODO loss = coordinator.schedule(step_fn, args=(per_worker_iterator,)) print ("Final loss is %f" % loss.fetch())
courses/machine_learning/deepdive2/production_ml/solutions/parameter_server_training.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Initial set-up Load experiments used for unified dataset calibration: - Steady-state activation [Li1997] - Activation time constant [Li1997] - Steady-state inactivation [Li1997] - Inactivation time constant (fast+slow) [Li1997] - Recovery time constant (fast+slow) [Li1997]
from experiments.ical_li import (li_act_and_tau, # combines steady-state activation and time constant li_inact_1000, li_inact_kin_80, li_recov) modelfile = 'models/nygren_ical.mmt'
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Plot the steady-state and time constant functions of the original model
from ionchannelABC.visualization import plot_variables sns.set_context('talk') V = np.arange(-140, 50, 0.01) nyg_par_map = {'di': 'ical.d_inf', 'f1i': 'ical.f_inf', 'f2i': 'ical.f_inf', 'dt': 'ical.tau_d', 'f1t': 'ical.tau_f_1', 'f2t': 'ical.tau_f_2'} f, ax = plot_variables(V, nyg_par_map, 'models/nygren_ical.mmt', figshape=(3,2))
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Set up prior ranges for each parameter in the model. See the modelfile for further information on specific parameters. Prepending `log_` has the effect of setting the parameter in log space.
limits = {'ical.p1': (-100, 100), 'ical.p2': (0, 50), 'log_ical.p3': (-7, 3), 'ical.p4': (-100, 100), 'ical.p5': (0, 50), 'log_ical.p6': (-7, 3)} prior = Distribution(**{key: RV("uniform", a, b - a) for key, (a,b) in limits.items()}) # Test this works correctly with set-up functions assert len(observations) == len(summary_statistics(model(prior.rvs())))
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Run ABC calibration
db_path = ("sqlite:///" + os.path.join(tempfile.gettempdir(), "nygren_ical_dgate_unified.db")) logging.basicConfig() abc_logger = logging.getLogger('ABC') abc_logger.setLevel(logging.DEBUG) eps_logger = logging.getLogger('Epsilon') eps_logger.setLevel(logging.DEBUG)
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Initialise ABCSMC (see pyABC documentation for further details). IonChannelDistance calculates the weighting applied to each datapoint based on the experimental variance.
abc = ABCSMC(models=model, parameter_priors=prior, distance_function=IonChannelDistance( exp_id=list(observations.exp_id), variance=list(observations.variance), delta=0.05), population_size=ConstantPopulationSize(2000), summary_statistics=summary_statistics, transitions=EfficientMultivariateNormalTransition(), eps=MedianEpsilon(initial_epsilon=100), sampler=MulticoreEvalParallelSampler(n_procs=8), acceptor=IonChannelAcceptor()) obs = observations.to_dict()['y'] obs = {str(k): v for k, v in obs.items()} abc_id = abc.new(db_path, obs)
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Analysis of results
history = History(db_path) history.all_runs() df, w = history.get_distribution(m=0) df.describe() sns.set_context('poster') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 g = plot_sim_results(modelfile, li_act_and_tau, df=df, w=w) plt.tight_layout() import pandas as pd N = 100 nyg_par_samples = df.sample(n=N, weights=w, replace=True) nyg_par_samples = nyg_par_samples.set_index([pd.Index(range(N))]) nyg_par_samples = nyg_par_samples.to_dict(orient='records') sns.set_context('talk') mpl.rcParams['font.size'] = 14 mpl.rcParams['legend.fontsize'] = 14 V = np.arange(-140, 50, 0.01) nyg_par_map = {'di': 'ical.d_inf', 'f1i': 'ical.f_inf', 'f2i': 'ical.f_inf', 'dt': 'ical.tau_d', 'f1t': 'ical.tau_f_1', 'f2t': 'ical.tau_f_2'} f, ax = plot_variables(V, nyg_par_map, 'models/nygren_ical.mmt', [nyg_par_samples], figshape=(3,2)) from ionchannelABC.visualization import plot_kde_matrix_custom import myokit import numpy as np m,_,_ = myokit.load(modelfile) originals = {} for name in limits.keys(): if name.startswith("log"): name_ = name[4:] else: name_ = name val = m.value(name_) if name.startswith("log"): val_ = np.log10(val) else: val_ = val originals[name] = val_ sns.set_context('paper') g = plot_kde_matrix_custom(df, w, limits=limits, refval=originals)
docs/examples/human-atrial/nygren_ical_unified.ipynb
c22n/ion-channel-ABC
gpl-3.0
Scatter Chart Scatter Chart Selections Click a point on the Scatter plot to select it. Now, run the cell below to check the selection. After you've done this, try holding the ctrl (or command key on Mac) and clicking another point. Clicking the background will reset the selection.
x_sc = LinearScale() y_sc = LinearScale() x_data = np.arange(20) y_data = np.random.randn(20) scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'], interactions={'click': 'select'}, selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'}, unselected_style={'opacity': 0.5}) ax_x = Axis(scale=x_sc) ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') fig = Figure(marks=[scatter_chart], axes=[ax_x, ax_y]) display(fig) scatter_chart.selected
examples/Mark Interactions.ipynb
rmenegaux/bqplot
apache-2.0
Scatter Chart Interactions and Tooltips
from ipywidgets import * x_sc = LinearScale() y_sc = LinearScale() x_data = np.arange(20) y_data = np.random.randn(20) dd = Dropdown(options=['First', 'Second', 'Third', 'Fourth']) scatter_chart = Scatter(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, default_colors=['dodgerblue'], names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True, labels=['Blue']) ins = Button(icon='fa-legal') scatter_chart.tooltip = ins scatter_chart2 = Scatter(x=x_data, y=np.random.randn(20), scales= {'x': x_sc, 'y': y_sc}, default_colors=['orangered'], tooltip=dd, names=np.arange(100, 200), names_unique=False, display_names=False, display_legend=True, labels=['Red']) ax_x = Axis(scale=x_sc) ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') fig = Figure(marks=[scatter_chart, scatter_chart2], axes=[ax_x, ax_y]) display(fig) def print_event(self, target): print(target) # Adding call back to scatter events # print custom mssg on hover and background click of Blue Scatter scatter_chart.on_hover(print_event) scatter_chart.on_background_click(print_event) # print custom mssg on click of an element or legend of Red Scatter scatter_chart2.on_element_click(print_event) scatter_chart2.on_legend_click(print_event) # Adding figure as tooltip x_sc = LinearScale() y_sc = LinearScale() x_data = np.arange(10) y_data = np.random.randn(10) lc = Lines(x=x_data, y=y_data, scales={'x': x_sc, 'y':y_sc}) ax_x = Axis(scale=x_sc) ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') tooltip_fig = Figure(marks=[lc], axes=[ax_x, ax_y], min_height=400, min_width=400) scatter_chart.tooltip = tooltip_fig # Changing interaction from hover to click for tooltip scatter_chart.interactions = {'click': 'tooltip'}
examples/Mark Interactions.ipynb
rmenegaux/bqplot
apache-2.0
Line Chart
# Adding default tooltip to Line Chart x_sc = LinearScale() y_sc = LinearScale() x_data = np.arange(100) y_data = np.random.randn(3, 100) def_tt = Tooltip(fields=['name', 'index'], formats=['', '.2f'], labels=['id', 'line_num']) line_chart = Lines(x=x_data, y=y_data, scales= {'x': x_sc, 'y': y_sc}, tooltip=def_tt, display_legend=True, labels=["line 1", "line 2", "line 3"] ) ax_x = Axis(scale=x_sc) ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') fig = Figure(marks=[line_chart], axes=[ax_x, ax_y]) display(fig) # Adding call back to print event when legend or the line is clicked line_chart.on_legend_click(print_event) line_chart.on_element_click(print_event)
examples/Mark Interactions.ipynb
rmenegaux/bqplot
apache-2.0
Bar Chart
# Adding interaction to select bar on click for Bar Chart x_sc = OrdinalScale() y_sc = LinearScale() x_data = np.arange(10) y_data = np.random.randn(2, 10) bar_chart = Bars(x=x_data, y=[y_data[0, :].tolist(), y_data[1, :].tolist()], scales= {'x': x_sc, 'y': y_sc}, interactions={'click': 'select'}, selected_style={'stroke': 'orange', 'fill': 'red'}, labels=['Level 1', 'Level 2'], display_legend=True) ax_x = Axis(scale=x_sc) ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') fig = Figure(marks=[bar_chart], axes=[ax_x, ax_y]) display(fig) # Adding a tooltip on hover in addition to select on click def_tt = Tooltip(fields=['x', 'y'], formats=['', '.2f']) bar_chart.tooltip=def_tt bar_chart.interactions = { 'legend_hover': 'highlight_axes', 'hover': 'tooltip', 'click': 'select', } # Changing tooltip to be on click bar_chart.interactions = {'click': 'tooltip'} # Call back on legend being clicked bar_chart.type='grouped' bar_chart.on_legend_click(print_event)
examples/Mark Interactions.ipynb
rmenegaux/bqplot
apache-2.0
Histogram
# Adding tooltip for Histogram x_sc = LinearScale() y_sc = LinearScale() sample_data = np.random.randn(100) def_tt = Tooltip(formats=['', '.2f'], fields=['count', 'midpoint']) hist = Hist(sample=sample_data, scales= {'sample': x_sc, 'count': y_sc}, tooltip=def_tt, display_legend=True, labels=['Test Hist'], select_bars=True) ax_x = Axis(scale=x_sc, tick_format='0.2f') ax_y = Axis(scale=y_sc, orientation='vertical', tick_format='0.2f') fig = Figure(marks=[hist], axes=[ax_x, ax_y]) display(fig) # Changing tooltip to be displayed on click hist.interactions = {'click': 'tooltip'} # Changing tooltip to be on click of legend hist.interactions = {'legend_click': 'tooltip'}
examples/Mark Interactions.ipynb
rmenegaux/bqplot
apache-2.0
You use the yerr argument of the function plt.errorbar() to specify the error in the y-direction. There's also an xerr optional argument, if your error is actually in the x-direction (a quick sketch follows the next cell). What about the histograms we built from the color channels of the images in last week's lectures? We can use matplotlib's hist() function for this.
x = np.random.normal(size = 100) _ = plt.hist(x, bins = 20)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
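As promised above, a quick plt.errorbar() sketch with toy data before we continue with histograms:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
y = x + np.random.normal(size=10)
plt.errorbar(x, y, yerr=0.5)  # constant error bars in the y-direction
plt.show()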
plt.hist() has only 1 required argument: a list of numbers. However, the optional bins argument is very useful, as it dictates how many bins you want to use to divide up the data in the required argument. Too many bins and every bar in the histogram will have a count of 1; too few bins and all your data will end up in just a single bar! Here's too few bins:
_ = plt.hist(x, bins = 2)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
And too many:
_ = plt.hist(x, bins = 200)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
Picking the number of bins for histograms is an art unto itself that usually requires a lot of trial-and-error, hence the importance of having a good visualization setup! Another point on histograms, specifically its lone required argument: matplotlib expects a 1D array. This is important if you're trying to visualize, say, the pixel intensities of an image channel. Images are always either 2D (grayscale) or 3D (color, RGB). As such, if you feed an image object directly into the hist method, matplotlib will complain:
import matplotlib.image as mpimg img = mpimg.imread("Lecture22/image1.png") # Our good friend! channel = img[:, :, 0] # The "R" channel _ = plt.hist(channel)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
Offhand, I don't know what this is, but it definitely is not the intensity histogram we were hoping for. Here's the magical way around it: all NumPy arrays (which image objects are!) have a flatten() method. This function is dead simple: no matter how many dimensions the NumPy array has, whether it's a grayscale image (2D), a color image (3D), or a million-dimensional tensor, it completely flattens the whole thing out into a long 1D list of numbers.
print(channel.shape) # Before flat = channel.flatten() print(flat.shape) # After
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
Then just feed the flattened array into the hist method:
_ = plt.hist(flat)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
The last type of plot we'll discuss here isn't really a "plot" in the same sense as the previous ones, but it is no less important: showing images!
img = mpimg.imread("Lecture22/image1.png") plt.imshow(img)
lectures/Lecture24.ipynb
eds-uga/cbio4835-sp17
mit
Setting EEG Montage (using standard montages) In the case where your data don't have locations, you can set them using a mne.channels.Montage. MNE comes with a set of default montages. To read one of them do:
montage = mne.channels.read_montage('standard_1020') print(montage)
0.18/_downloads/11f39f61bd7f4cfd5791b0d10da462f2/plot_eeg_erp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
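Applying the montage to a recording is then a single call (a sketch; it assumes a raw object loaded elsewhere, e.g. with mne.io.read_raw_fif):

# `raw` is a hypothetical mne.io.Raw object loaded elsewhere
raw.set_montage(montage)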
Model initialization The uniform model doesn't take any special initialization arguments, so the initialization is straightforward.
tm = UniformModel()
notebooks/example_uniform_model.ipynb
hpparvi/PyTransit
gpl-2.0
OpenCL Usage The OpenCL version of the uniform model, pytransit.UniformModelCL, works identically to the Python version, except that the OpenCL context and queue can be given as arguments in the initialiser, and the model evaluation method can be told not to copy the model from the GPU memory. If the context and queue are not given, the model creates a default context using cl.create_some_context().
import pyopencl as cl from pytransit import UniformModelCL devices = cl.get_platforms()[0].get_devices()[2:] ctx = cl.Context(devices) queue = cl.CommandQueue(ctx) tm_cl = UniformModelCL(cl_ctx=ctx, cl_queue=queue) tm_cl.set_data(times_sc) plot_transits(tm_cl)
notebooks/example_uniform_model.ipynb
hpparvi/PyTransit
gpl-2.0
GPU vs. CPU Performance The performance difference between the OpenCL and Python versions depends on the CPU, GPU, number of simultaneously evaluated models, amount of supersampling, and whether the model data is copied from the GPU memory. The performance difference grows in favour of the OpenCL model with the number of simultaneous models and the amount of supersampling, but copying the data slows the OpenCL implementation down. For best performance, the log likelihood computations should also be done on the GPU.
times_sc2 = tile(times_sc, 20) # 20000 short cadence datapoints times_lc2 = tile(times_lc, 50) # 5000 long cadence datapoints tm_py = UniformModel() tm_cl = UniformModelCL(cl_ctx=ctx, cl_queue=queue) tm_py.set_data(times_sc2) tm_cl.set_data(times_sc2) %%timeit tm_py.evaluate_pv(pvp) %%timeit tm_cl.evaluate_pv(pvp, copy=True) tm_py.set_data(times_lc2, nsamples=10, exptimes=0.01) tm_cl.set_data(times_lc2, nsamples=10, exptimes=0.01) %%timeit tm_py.evaluate_pv(pvp) %%timeit tm_cl.evaluate_pv(pvp, copy=True)
notebooks/example_uniform_model.ipynb
hpparvi/PyTransit
gpl-2.0
The digits dataset consists of 8x8 handwritten digit images, 1797 samples in total. By default all ten digit classes 0~9 are included; n_class can be used to select how many digit classes to retrieve. The returned data contains: 1. 'data', the feature data (1797x64); 2. 'images', the image data (1797x8x8); 3. 'target', the data labels (1797); 4. 'target_names', the list of selected labels (the same length as given by n_class); 5. 'DESCR', a description of this dataset. See Ex1 of Classification for reference. (2) Computing the model iteratively RFE ranks feature importance by repeatedly eliminating the features with the least influence on the target, and keeps reducing the training features down to the number given by n_features_to_select. Because we want to see the importance ranking of every single feature, we set n_features_to_select to 1; normally you would set this parameter according to the number of influential features you expect. step is the number of less influential features removed at each iteration; since this example observes the importance ranking of each feature, it is also set to 1. In practical applications, if the number of features is large, you can consider setting the step parameter higher.
# Create the RFE object and rank each pixel svc = SVC(kernel="linear", C=1) rfe = RFE(estimator=svc, n_features_to_select=1, step=1) rfe.fit(X, y) ranking = rfe.ranking_.reshape(digits.images[0].shape)
Feature_Selection/ipython_notebook/ex2_Recursive_feature_elimination.ipynb
dryadb11781/machine-learning-python
bsd-3-clause