Columns: markdown (string, 0 to 1.02M chars), code (string, 0 to 832k chars), output (string, 0 to 1.02M chars), license (string, 3 to 36 chars), path (string, 6 to 265 chars), repo_name (string, 6 to 127 chars).
In the figure above, the values 0-4 are what is stored in the label column!! The mask approach replaces a `for` loop plus an `if` test:
```python
for ...:
    if ...:
        ...
# keep rows whose label is neither 0 nor 4
~((df['label'] == 0) | (df['label'] == 4))
```
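For comparison, a runnable sketch of the loop-plus-`if` version that the mask replaces (a minimal illustration assuming `df` is the wholesale DataFrame with a `label` column; `filter_labels_loop` is a hypothetical helper, not part of the original notebook):

```python
import pandas as pd

def filter_labels_loop(df: pd.DataFrame) -> pd.DataFrame:
    # collect rows whose label is neither 0 nor 4, one row at a time
    kept_rows = []
    for _, row in df.iterrows():
        if row['label'] != 0 and row['label'] != 4:
            kept_rows.append(row)
    return pd.DataFrame(kept_rows)

# the boolean mask below does the same thing in one vectorized step:
# dfx = df[~((df['label'] == 0) | (df['label'] == 4))]
```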
dfx = df[~((df['label'] == 0) | (df['label'] == 4))] df.shape, dfx.shape dfx.plot(kind='scatter', x='Grocery', y='Frozen', c='label', cmap='Set1', figsize=(7,7)) df.to_excel('./wholesale.xls')
_____no_output_____
Apache-2.0
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
[Index](Index.ipynb) - [Next](Widget List.ipynb) Simple Widget Introduction What are widgets? Widgets are eventful python objects that have a representation in the browser, often as a control like a slider, textbox, etc. What can they be used for? You can use widgets to build **interactive GUIs** for your notebooks. You can also use widgets to **synchronize stateful and stateless information** between Python and JavaScript. Using widgets To use the widget framework, you need to import `ipywidgets`.
import ipywidgets as widgets
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
repr Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are displayed inside the output area below the code cell. Clearing cell output will also remove the widget.
widgets.IntSlider()
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
display() You can also explicitly display the widget using `display(...)`.
from IPython.display import display w = widgets.IntSlider() display(w)
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Multiple display() calls If you display the same widget twice, the displayed instances in the front-end will remain in sync with each other. Try dragging the slider below and watch the slider above.
display(w)
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Why does displaying the same widget twice work? Widgets are represented in the back-end by a single object. Each time a widget is displayed, a new representation of that same object is created in the front-end. These representations are called views.![Kernel & front-end diagram](images/WidgetModelView.png) Closing widgets You can close a widget by calling its `close()` method.
display(w) w.close()
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Widget properties All of the IPython widgets share a similar naming scheme. To read the value of a widget, you can query its `value` property.
w = widgets.IntSlider() display(w) w.value
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Similarly, to set a widget's value, you can set its `value` property.
w.value = 100
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Keys In addition to `value`, most widgets share `keys`, `description`, and `disabled`. To see the entire list of synchronized, stateful properties of any specific widget, you can query the `keys` property.
w.keys
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Shorthand for setting the initial values of widget properties While creating a widget, you can set some or all of the initial values of that widget by defining them as keyword arguments in the widget's constructor (as seen below).
widgets.Text(value='Hello World!', disabled=True)
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Linking two similar widgets If you need to display the same value two different ways, you'll have to use two different widgets. Instead of attempting to manually synchronize the values of the two widgets, you can use the `link` or `jslink` function to link two properties together (the difference between these is discussed in [Widget Events](Widget Events.ipynb)). Below, the values of two widgets are linked together.
a = widgets.FloatText() b = widgets.FloatSlider() display(a,b) mylink = widgets.jslink((a, 'value'), (b, 'value'))
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Unlinking widgets Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object. Try changing one of the widgets above after unlinking to see that they can be independently changed.
# mylink.unlink()
_____no_output_____
BSD-3-Clause
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
Data format conversion for WEASEL_MUSE. Input: two file types; each **data file** represents a single sample, and the **label file** contains the labels of all samples. ***Note:*** *both the training and the testing data need this conversion.* **Data files**: file name "sample_id.csv"; contents are L * D, where L is the MTS length and D is the dimension size. **Label file**: file name "meta_data.csv"; contents are N * 2 (one row per sample), each row of the form "sample_id, label". Output: a single file containing all samples and their labels, ***L * (3 + D)***: 1st col: sample_id; 2nd col: timestamp; 3rd col: label; from the 4th col on: the MTS vector with D dimensions.
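A minimal sketch of the conversion described above (assuming the per-sample files and `meta_data.csv` live in one directory and pandas is available; the function name and the `data_dir`/`out_path` parameters are illustrative, not part of the original notebook):

```python
import pandas as pd

def convert_weasel_muse(data_dir: str, out_path: str) -> None:
    # label file: one row per sample, "sample_id, label"
    meta = pd.read_csv(f"{data_dir}/meta_data.csv", header=None, names=["sample_id", "label"])
    blocks = []
    for sample_id, label in zip(meta["sample_id"], meta["label"]):
        # data file: L x D matrix, one row per time step, one column per dimension
        mts = pd.read_csv(f"{data_dir}/{sample_id}.csv", header=None)
        mts.insert(0, "label", label)
        mts.insert(0, "timestamp", list(range(len(mts))))
        mts.insert(0, "sample_id", sample_id)
        blocks.append(mts)
    # output layout: sample_id | timestamp | label | D-dimensional MTS vector
    pd.concat(blocks, ignore_index=True).to_csv(out_path, index=False, header=False)
```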
import numpy as np
_____no_output_____
MIT
Baselines/mtsc_weasel_muse/.ipynb_checkpoints/Preprocess_weasel_muse-checkpoint.ipynb
JingweiZuo/SMATE
On country (only MS)
df.fund= df.fund=='TRUE' df.gre= df.gre=='TRUE' df.highLevelBachUni= df.highLevelBachUni=='TRUE' df.highLevelMasterUni= df.highLevelMasterUni=='TRUE' df.uniRank.fillna(294,inplace=True) df.columns oldDf=df.copy() df=df[['countryCoded','degreeCoded','engCoded', 'fieldGroup','fund','gpaBachelors','gre', 'highLevelBachUni', 'paper','uniRank']] df=df[df.degreeCoded==0] del df['degreeCoded'] bestAvg=[] for alg in algorithm: for dis in dist: k_fold = KFold(n=len(df), n_folds=5) scores = [] try: clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis) except Exception as err: # print(alg,dis,'err') continue for train_indices, test_indices in k_fold: xtr = df.iloc[train_indices,(df.columns != 'countryCoded')] ytr = df.iloc[train_indices]['countryCoded'] xte = df.iloc[test_indices, (df.columns != 'countryCoded')] yte = df.iloc[test_indices]['countryCoded'] clf.fit(xtr, ytr) ypred = clf.predict(xte) acc=accuracy_score(list(yte),list(ypred)) scores.append(acc*100) print(alg,dis,np.average(scores)) bestAvg.append(np.average(scores)) print('>>>>>>>Best: ',np.max(bestAvg))
('ball_tree', 'braycurtis', 55.507529507529512) ('ball_tree', 'canberra', 44.839072039072036) ('ball_tree', 'chebyshev', 53.738054538054541) ('ball_tree', 'cityblock', 55.735775335775337) ('ball_tree', 'euclidean', 55.793080993080991) ('ball_tree', 'dice', 46.14798534798534) ('ball_tree', 'hamming', 47.408547008547011) ('ball_tree', 'jaccard', 46.14798534798534) ('ball_tree', 'kulsinski', 46.319413919413918) ('ball_tree', 'matching', 46.14798534798534) ('ball_tree', 'rogerstanimoto', 46.14798534798534) ('ball_tree', 'russellrao', 48.896052096052095) ('ball_tree', 'sokalsneath', 46.14798534798534) ('kd_tree', 'chebyshev', 53.909483109483105) ('kd_tree', 'cityblock', 55.67863247863248) ('kd_tree', 'euclidean', 55.793080993080991) ('brute', 'braycurtis', 55.393080993081) ('brute', 'canberra', 45.066829466829468) ('brute', 'chebyshev', 53.738217338217339) ('brute', 'cityblock', 55.735449735449741) ('brute', 'correlation', 42.444444444444443) ('brute', 'cosine', 44.841025641025645) ('brute', 'euclidean', 55.792755392755396)
MIT
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
On Fund (only MS)
bestAvg=[] for alg in algorithm: for dis in dist: k_fold = KFold(n=len(df), n_folds=5) scores = [] try: clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis) except Exception as err: continue for train_indices, test_indices in k_fold: xtr = df.iloc[train_indices, (df.columns != 'fund')] ytr = df.iloc[train_indices]['fund'] xte = df.iloc[test_indices, (df.columns != 'fund')] yte = df.iloc[test_indices]['fund'] clf.fit(xtr, ytr) ypred = clf.predict(xte) acc=accuracy_score(list(yte),list(ypred)) score=acc*100 scores.append(score) if (len(bestAvg)>1) : if(score > np.max(bestAvg)) : bestClf=clf bestAvg.append(np.average(scores)) print (alg,dis,np.average(scores)) print('>>>>>>>Best: ',np.max(bestAvg))
('ball_tree', 'braycurtis', 76.495400895400905) ('ball_tree', 'canberra', 75.354008954008961) ('ball_tree', 'chebyshev', 75.584533984533977) ('ball_tree', 'cityblock', 77.293935693935694) ('ball_tree', 'euclidean', 76.496703296703302) ('ball_tree', 'dice', 74.383557183557173) ('ball_tree', 'hamming', 76.152706552706562) ('ball_tree', 'jaccard', 74.383557183557173) ('ball_tree', 'kulsinski', 74.497842897842901) ('ball_tree', 'matching', 74.383557183557173) ('ball_tree', 'rogerstanimoto', 74.383557183557173) ('ball_tree', 'russellrao', 75.409361009360993) ('ball_tree', 'sokalsneath', 74.383557183557173) ('kd_tree', 'chebyshev', 75.641676841676855) ('kd_tree', 'cityblock', 77.293935693935694) ('kd_tree', 'euclidean', 76.553683353683354) ('brute', 'braycurtis', 76.495563695563703) ('brute', 'canberra', 75.411151811151825) ('brute', 'chebyshev', 75.754008954008967) ('brute', 'cityblock', 77.008547008547012) ('brute', 'correlation', 73.528367928367928) ('brute', 'cosine', 72.901912901912894) ('brute', 'euclidean', 76.61066341066342) ('brute', 'dice', 73.983882783882777) ('brute', 'hamming', 76.0954008954009) ('brute', 'jaccard', 73.983882783882777) ('brute', 'kulsinski', 74.098168498168491) ('brute', 'matching', 73.983882783882777) ('brute', 'rogerstanimoto', 73.983882783882777) ('brute', 'russellrao', 72.670411070411063) ('brute', 'sokalsneath', 73.983882783882777) ('brute', 'yule', 58.807651607651607) ('>>>>>>>Best: ', 77.293935693935694)
MIT
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
Best : ('kd_tree', 'cityblock', 77.692144892144896)
me=[0,2,0,2.5,False,False,1.5,400] n=bestClf.kneighbors([me]) n for i in n[1]: print(xtr.iloc[i])
countryCoded engCoded fieldGroup gpaBachelors gre highLevelBachUni \ 664 0 2 0 2.5 False False 767 0 2 0 3.0 False False 911 0 2 0 3.0 False False paper uniRank 664 1.000000 72.0 767 1.333333 294.0 911 3.000000 294.0
MIT
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
Period covered by the data: 6 and a half cycles (+ 3 cycle-0)
df['date'].hist(bins=51, figsize=(10,5)) plt.xlim(df['date'].min(), df['date'].max()) plt.title('Histograma de la Fecha de Envío del Mensaje') plt.ylabel('Número de Mensajes') plt.xlabel('Año-Mes') plt.show() #plt.savefig('hist_fecha.svg', format='svg') df['month'] = df['date'].dt.month df['dayofweek'] = df['date'].dt.dayofweek # Plot for month and day of week variables day_value_counts = (df['dayofweek'].value_counts()/df.shape[0])*100 month_value_counts = (df['month'].value_counts()/df.shape[0])*100 monthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre'] daynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo'] fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10,10)) month_value_counts.plot(ax=ax1, rot=0, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b') ax1.set_xticklabels(monthnames_ES) ax1.set_ylabel('% de Mensajes') ax1.set_xlabel('Mes') day_value_counts.plot(ax=ax2, rot=0, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b') ax2.set_xticklabels(daynames_ES) ax2.set_ylabel('% de Mensajes') ax2.set_xlabel('Día de la Semana') plt.tight_layout() plt.show() #fig.savefig('grafico_barras_dia_mes.svg', format='svg') #fig.savefig('grafico_barras_dia_mes.png', format='png') %%time df['body'] = df['body'].apply(get_text_selectolax) # filter text from hltm emails # Extract sender and recipient email only df['sender_email'] = df.sender.str.extract("([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)")[0].str.lower() df['recipient_email'] = df.recipient.str.extract("([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)")[0].str.lower() print() print(df.isna().sum()) print() # eliminate 'no reply' and 'automatic' msgs df_noreply = df[~df.sender.str.contains('[email protected]').fillna(False)] df_noautom = df_noreply[~df_noreply.subject.str.contains('Respuesta automática').fillna(False)] # Separate msgs by type of sender send_by_alumns = df_noautom[df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)] send_by_no_alumns = df_noautom[~df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)] send_by_internals = df_noautom[df_noautom.sender.str.contains('@up.edu.pe').fillna(False)] print('# msgs send by alumns:', len(send_by_alumns)) print('# of alumns that send msgs:', len(send_by_alumns.sender_email.unique())) len(send_by_internals) # Clean mails subject send_by_internals['subject'] = send_by_internals['subject'].apply(filterResponses)
_____no_output_____
MIT
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
Email pairing algorithm 1. Extract the messages sent by each student and the messages sent by internal users to that student, respectively 2. Extract the subject of each message from step 1. If a message's subject equals the subject of the previous message, increase the counter of messages with the same subject 3. Using the same-subject counter, look for the subject extracted in step 2 among the emails sent by internal users to that student 4. Build a list with the subject, the data of the email sent by the student, and the reply it received.
# Separate mails sended to each alumn dfs = [send_by_internals[send_by_internals.recipient_email == alumn] for alumn in send_by_alumns.sender_email.unique()] unique_alumns = send_by_alumns.sender_email.unique() n = len(unique_alumns) # Count causes to not being able to process a text resp_date_bigger_than_input_date = 0 responses_with_same_subject_lower_than_counter = 0 subject_equal_none = 0 n_obs_less_than_0 = 0 repited_id = 0 for i, alumn in tqdm(enumerate(unique_alumns), total=n): if len(dfs[i]) > 0: temp_ = send_by_alumns[send_by_alumns.sender_email == alumn] indexes = temp_.index counter_subject = 0 subject_pre = 'initial_value' for index in indexes: subject = filterResponses(temp_.subject[index]) if subject != None: if subject_pre == subject: counter_subject += 1 else: counter_subject = 0 subject_pre = subject if len(dfs[i][dfs[i]['subject'] == subject]) > counter_subject: input_date = temp_.loc[index, 'date'] resp_date = dfs[i]['date'][dfs[i]['subject'] == subject].iloc[counter_subject] if input_date < resp_date: input_id, sender, recipient, input_body = temp_.loc[index, ['id','sender','recipient','body']] resp_id, resp_body = dfs[i][['id','body']][dfs[i]['subject'] == subject].iloc[counter_subject] pair = np.array([[subject, sender, recipient, input_id, input_date, input_body, resp_id, resp_date, resp_body]],dtype=object) if i == 0: pairs = np.array(pair) elif all([not(pair[0,3] in pairs[:,3]), not(pair[0,6] in pairs[:,6])]): pairs = np.append(pairs, pair, axis=0) else: repited_id += 1 else: resp_date_bigger_than_input_date += 1 else: responses_with_same_subject_lower_than_counter += 1 else: subject_equal_none += 1 else: n_obs_less_than_0 += 1
100%|██████████| 3781/3781 [00:57<00:00, 65.81it/s]
MIT
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
Format data
total_unpaired_mails = repited_id+resp_date_bigger_than_input_date+responses_with_same_subject_lower_than_counter+subject_equal_none+n_obs_less_than_0 print() print('Filtros del algoritmo de emparejamiento') print('resp_date_bigger_than_input_date:',resp_date_bigger_than_input_date) print('subject_equal_none:',subject_equal_none) print('repited_id:', repited_id) print('no hay motivo pero no lo empareje:',len(send_by_alumns) - total_unpaired_mails - len(pairs) ) print('-'*50) print('motivos de sar:') print('el ultimo mensaje de la cadena del asunto no tuvo respuesta:',responses_with_same_subject_lower_than_counter) print('no le respondieron ni el primer mensaje:',n_obs_less_than_0) print('-'*50) print('# of mails in total:', len(mails)) print('# msgs send by alumns:', len(send_by_alumns)) print('# of paired emails:', len(pairs)) print('% de paired mails:', round((len(pairs)/len(send_by_alumns))*100,2),'%') print('total of unpaired mails: ', total_unpaired_mails) print('% de unpaired mails:', round((total_unpaired_mails/len(send_by_alumns))*100,2),'%') print() # Load paired mails in a DataFrame columns_names = ['subject', 'sender', 'recipient', 'input_id', 'input_date', 'input_body', 'resp_id', 'resp_date', 'resp_body'] paired_mails = pd.DataFrame(data=pairs, columns=columns_names) paired_mails['input_date'] = pd.to_datetime(paired_mails['input_date'], infer_datetime_format=True) paired_mails['resp_date'] = pd.to_datetime(paired_mails['resp_date'], infer_datetime_format=True) paired_mails['input_month'] = paired_mails['input_date'].dt.month paired_mails['input_dayofweek'] = paired_mails['input_date'].dt.dayofweek # Plot for month and day of week variables day_value_counts = (paired_mails['input_dayofweek'].value_counts()/df.shape[0])*100 month_value_counts = (paired_mails['input_month'].value_counts()/df.shape[0])*100 monthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre'] daynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo'] fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(5,5)) month_value_counts.plot(ax=ax1, rot=45, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b') ax1.set_xticklabels(monthnames_ES) ax1.set_ylabel('% de Mensajes') ax1.set_xlabel('Mes') day_value_counts.plot(ax=ax2, rot=45, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b') ax2.set_xticklabels(daynames_ES) ax2.set_ylabel('% de Mensajes') ax2.set_xlabel('Día de la Semana') plt.tight_layout() plt.show() #fig.savefig('grafico_barras_dia_mes.svg', format='svg') #fig.savefig('grafico_barras_dia_mes.png', format='png') paired_mails['input_date'].hist(bins=51, figsize=(10*1.5,5*1.5), color='blue') plt.xlim(df['date'].min(), df['date'].max()) plt.title('Histograma de la Fecha de Envío del Mensaje de Alumnos',fontsize=20) plt.ylabel('Número de Mensajes',fontsize=15) plt.xlabel('Año-Mes',fontsize=15) plt.yticks(fontsize=12.5) plt.xticks(fontsize=12.5) plt.savefig('hist_fecha_inputs.svg', dpi=300, format='svg') plt.show() fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15*1.25,5*1.25)) for historyDir in historyDirs: params = historyDir.replace('.pk','').split('_')[-4:] try: history = pickle.load(open(historyDir,'rb')) ax1.plot(range(len(history['loss'])), history['loss'], linewidth=5) ax1.grid(True) ax1.set_ylabel('Entropía Cruzada (Error)',fontsize=20) ax1.set_xlabel('Época',fontsize=20) ax1.set_title('Entrenamiento',fontsize=20) ax1.set_xlim(-0.5, 100) ax1.set 
ax2.plot(range(len(history['val_loss'])), history['val_loss'], linewidth=5) ax2.grid(True) ax2.set_xlabel('Época',fontsize=20) ax2.set_title('Validación',fontsize=20) plt.suptitle('Curvas de Error',fontsize=25) ax2.set_xlim(-0.5, 100) except: pass fig.savefig('curvas_error.svg', dpi=300, format='svg') paired_mails['resp_date'].hist(bins=51, figsize=(10,5), color='blue') plt.xlim(df['date'].min(), df['date'].max()) plt.title('Histograma de la Fecha de Envío del Mensaje hacia Alumnos') plt.ylabel('Número de Mensajes') plt.xlabel('Año-Mes') plt.show() #fig.savefig('hist_fecha_resps.svg', format='svg') # Create features to detect possible errors paired_mails['resp_time'] = paired_mails['resp_date'] - paired_mails['input_date'] paired_mails['input_body_len'] = paired_mails['input_body'].apply(len) paired_mails['resp_body_len'] = paired_mails['resp_body'].apply(len) # Calculate input messages lenghts input_len_stats = paired_mails['input_body_len'].describe([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round() print() print(input_len_stats) print() # Calculate response messages lenghts resp_len_stats = paired_mails['resp_body_len'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round() print() print(resp_len_stats) print() # Response time analysis resp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]) print() print(resp_time_stats) print() # Filter errors using response time paired_mails = paired_mails[paired_mails['resp_time'] <= paired_mails['resp_time'].sort_values().iloc[-65]] # Filter errors using messages body lenghts paired_mails = paired_mails[paired_mails['input_body_len'] <= paired_mails['input_body_len'].sort_values().iloc[-3]] # not errors caught using resp_body_len # Response time analysis resp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]) print() print(resp_time_stats) print() paired_mails['input_body'] = paired_mails['input_body'].apply(filterMail) paired_mails['resp_body'] = paired_mails['resp_body'].apply(filterMail) paired_mails['resp_body'] = paired_mails['resp_body'].apply(filterFirm) sentence_pairs = paired_mails[['input_body','resp_body']] sentence_pairs.to_csv('output/data_cleaning_nlp/q_and_a.txt', sep='\t', index=False, header=False) paired_mails['input_body'] = paired_mails['input_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x)) paired_mails['resp_body'] = paired_mails['resp_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x)) paired_mails.to_csv('output/data_cleaning_nlp/paired_emails.csv', encoding='utf-8', index=False)
_____no_output_____
MIT
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
NLP
## Tokenization using NLTK # Define input (x) and target (y) sequences variables x = [word_tokenize(msg, language='spanish') for msg in paired_mails['input_body'].values] y = [word_tokenize(msg, language='spanish') for msg in paired_mails['resp_body'].values] # Variables to store lenghts hist_len_inp = [] hist_len_out = [] maxlen_inp = 0 maxlen_out = 0 # Define word counter word_freqs_inp = collections.Counter() word_freqs_out = collections.Counter() num_recs = 0 for inp, out in zip(x, y): # Get input and target sequence lenght hist_len_inp.append(len(inp)) hist_len_out.append(len(out)) # Calculate max sequence lenght if len(inp) > maxlen_inp: maxlen_inp = len(inp) if len(out) > maxlen_out: maxlen_out = len(out) # Count unique words for words in inp: word_freqs_inp[words] += 1 for words in out: word_freqs_out[words] += 1 num_recs += 1 print() print("maxlen input:", maxlen_inp) print("maxlen output:", maxlen_out) print("features (words) - input:", len(word_freqs_inp)) print("features (words) - output:", len(word_freqs_out)) print("number of records:", num_recs) print() plt.hist(hist_len_inp, bins =100) plt.xlim((0,850)) plt.xticks(range(0,800,100)) plt.title('input_len') plt.show() plt.hist(hist_len_out, bins=100) plt.xlim((0,850)) plt.xticks(range(0,800,100)) plt.title('output_len') plt.show() pk.dump(word_freqs_inp, open('output/data_cleaning_nlp/word_freqs_input.pk', 'wb')) pk.dump(word_freqs_out, open('output/data_cleaning_nlp/word_freqs_output.pk', 'wb')) pk.dump(x, open('output/data_cleaning_nlp/input_data.pk', 'wb')) pk.dump(y, open('output/data_cleaning_nlp/target_data.pk', 'wb'))
_____no_output_____
MIT
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
Matplotlib Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, etc, with just a few lines of code.Library documentation: http://matplotlib.org/
# needed to display the graphs %matplotlib inline import numpy as np import matplotlib.pyplot as plt x = np.linspace(0, 5, 10) y = x ** 2 fig = plt.figure() # left, bottom, width, height (range 0 to 1) axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) axes.plot(x, y, 'r') axes.set_xlabel('x') axes.set_ylabel('y') axes.set_title('title'); fig = plt.figure() axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes # main figure axes1.plot(x, y, 'r') axes1.set_xlabel('x') axes1.set_ylabel('y') axes1.set_title('title') # insert axes2.plot(y, x, 'g') axes2.set_xlabel('y') axes2.set_ylabel('x') axes2.set_title('insert title'); fig, axes = plt.subplots(nrows=1, ncols=2) for ax in axes: ax.plot(x, y, 'r') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('title') fig.tight_layout() # example with a legend and latex symbols fig, ax = plt.subplots() ax.plot(x, x**2, label=r"$y = \alpha^2$") ax.plot(x, x**3, label=r"$y = \alpha^3$") ax.legend(loc=2) # upper left corner ax.set_xlabel(r'$\alpha$', fontsize=18) ax.set_ylabel(r'$y$', fontsize=18) ax.set_title('title'); # line customization fig, ax = plt.subplots(figsize=(12,6)) ax.plot(x, x+1, color="blue", linewidth=0.25) ax.plot(x, x+2, color="blue", linewidth=0.50) ax.plot(x, x+3, color="blue", linewidth=1.00) ax.plot(x, x+4, color="blue", linewidth=2.00) # possible linestype options ‘-‘, ‘–’, ‘-.’, ‘:’, ‘steps’ ax.plot(x, x+5, color="red", lw=2, linestyle='-') ax.plot(x, x+6, color="red", lw=2, ls='-.') ax.plot(x, x+7, color="red", lw=2, ls=':') # custom dash line, = ax.plot(x, x+8, color="black", lw=1.50) line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ... # possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', # '1', '2', '3', '4', ... 
ax.plot(x, x+ 9, color="green", lw=2, ls='*', marker='+') ax.plot(x, x+10, color="green", lw=2, ls='*', marker='o') ax.plot(x, x+11, color="green", lw=2, ls='*', marker='s') ax.plot(x, x+12, color="green", lw=2, ls='*', marker='1') # marker size and color ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2) ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4) ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red") ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8, markerfacecolor="yellow", markeredgewidth=2, markeredgecolor="blue"); # axis controls fig, axes = plt.subplots(1, 3, figsize=(12, 4)) axes[0].plot(x, x**2, x, x**3) axes[0].set_title("default axes ranges") axes[1].plot(x, x**2, x, x**3) axes[1].axis('tight') axes[1].set_title("tight axes") axes[2].plot(x, x**2, x, x**3) axes[2].set_ylim([0, 60]) axes[2].set_xlim([2, 5]) axes[2].set_title("custom axes range"); # scaling fig, axes = plt.subplots(1, 2, figsize=(10,4)) axes[0].plot(x, x**2, x, exp(x)) axes[0].set_title("Normal scale") axes[1].plot(x, x**2, x, exp(x)) axes[1].set_yscale("log") axes[1].set_title("Logarithmic scale (y)"); # axis grid fig, axes = plt.subplots(1, 2, figsize=(10,3)) # default grid appearance axes[0].plot(x, x**2, x, x**3, lw=2) axes[0].grid(True) # custom grid appearance axes[1].plot(x, x**2, x, x**3, lw=2) axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5) # twin axes example fig, ax1 = plt.subplots() ax1.plot(x, x**2, lw=2, color="blue") ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue") for label in ax1.get_yticklabels(): label.set_color("blue") ax2 = ax1.twinx() ax2.plot(x, x**3, lw=2, color="red") ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red") for label in ax2.get_yticklabels(): label.set_color("red") # other plot styles xx = np.linspace(-0.75, 1., 100) n = array([0,1,2,3,4,5]) fig, axes = plt.subplots(1, 4, figsize=(12,3)) axes[0].scatter(xx, xx + 0.25*randn(len(xx))) axes[0].set_title("scatter") axes[1].step(n, n**2, lw=2) axes[1].set_title("step") axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5) axes[2].set_title("bar") axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5); axes[3].set_title("fill_between"); # histograms n = np.random.randn(100000) fig, axes = plt.subplots(1, 2, figsize=(12,4)) axes[0].hist(n) axes[0].set_title("Default histogram") axes[0].set_xlim((min(n), max(n))) axes[1].hist(n, cumulative=True, bins=50) axes[1].set_title("Cumulative detailed histogram") axes[1].set_xlim((min(n), max(n))); # annotations fig, ax = plt.subplots() ax.plot(xx, xx**2, xx, xx**3) ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue") ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green"); # color map alpha = 0.7 phi_ext = 2 * pi * 0.5 def flux_qubit_potential(phi_m, phi_p): return ( + alpha - 2 * cos(phi_p)*cos(phi_m) - alpha * cos(phi_ext - 2*phi_p)) phi_m = linspace(0, 2*pi, 100) phi_p = linspace(0, 2*pi, 100) X,Y = meshgrid(phi_p, phi_m) Z = flux_qubit_potential(X, Y).T fig, ax = plt.subplots() p = ax.pcolor(X/(2*pi), Y/(2*pi), Z, cmap=cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max()) cb = fig.colorbar(p, ax=ax) from mpl_toolkits.mplot3d.axes3d import Axes3D # surface plots fig = plt.figure(figsize=(14,6)) # `ax` is a 3D-aware axis instance because of the projection='3d' # keyword argument to add_subplot ax = fig.add_subplot(1, 2, 1, projection='3d') p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0) # surface_plot with 
# color grading and color bar ax = fig.add_subplot(1, 2, 2, projection='3d') p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False) cb = fig.colorbar(p, shrink=0.5) # wire frame fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(1, 1, 1, projection='3d') p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4) # contour plot with projections fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(1,1,1, projection='3d') ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25) cset = ax.contour(X, Y, Z, zdir='z', offset=-pi, cmap=cm.coolwarm) cset = ax.contour(X, Y, Z, zdir='x', offset=-pi, cmap=cm.coolwarm) cset = ax.contour(X, Y, Z, zdir='y', offset=3*pi, cmap=cm.coolwarm) ax.set_xlim3d(-pi, 2*pi); ax.set_ylim3d(0, 3*pi); ax.set_zlim3d(-pi, 2*pi);
_____no_output_____
MIT
Matplotlib-BEst.ipynb
imamol555/Machine-Learning
Copyright 2018 The TF-Agents Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Environments Introduction The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment and the environment returns a reward and a new observation. The agent trains a policy to choose actions to maximize the sum of rewards, also known as return. In TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow. Let us look at Python environments first. TensorFlow environments follow a very similar API. Setup If you haven't installed tf-agents or gym yet, run:
!pip install tf-agents !pip install 'gym==0.10.11' from __future__ import absolute_import from __future__ import division from __future__ import print_function import abc import tensorflow as tf import numpy as np from tf_agents.environments import py_environment from tf_agents.environments import tf_environment from tf_agents.environments import tf_py_environment from tf_agents.environments import utils from tf_agents.specs import array_spec from tf_agents.environments import wrappers from tf_agents.environments import suite_gym from tf_agents.trajectories import time_step as ts tf.compat.v1.enable_v2_behavior()
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Python Environments Python environments have a `step(action) -> next_time_step` method that applies an action to the environment, and returns the following information about the next step:1. `observation`: This is the part of the environment state that the agent can observe to choose its actions at the next step.2. `reward`: The agent is learning to maximize the sum of these rewards across multiple steps.3. `step_type`: Interactions with the environment are usually part of a sequence/episode. e.g. multiple moves in a game of chess. step_type can be either `FIRST`, `MID` or `LAST` to indicate whether this time step is the first, intermediate or last step in a sequence.4. `discount`: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.These are grouped into a named tuple `TimeStep(step_type, reward, discount, observation)`.The interface that all python environments must implement is in `environments/py_environment.PyEnvironment`. The main methods are:
class PyEnvironment(object): def reset(self): """Return initial_time_step.""" self._current_time_step = self._reset() return self._current_time_step def step(self, action): """Apply action and return new time_step.""" if self._current_time_step is None: return self.reset() self._current_time_step = self._step(action) return self._current_time_step def current_time_step(self): return self._current_time_step def time_step_spec(self): """Return time_step_spec.""" @abc.abstractmethod def observation_spec(self): """Return observation_spec.""" @abc.abstractmethod def action_spec(self): """Return action_spec.""" @abc.abstractmethod def _reset(self): """Return initial_time_step.""" @abc.abstractmethod def _step(self, action): """Apply action and return new time_step.""" self._current_time_step = self._step(action) return self._current_time_step
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
In addition to the `step()` method, environments also provide a `reset()` method that starts a new sequence and provides an initial `TimeStep`. It is not necessary to call the `reset` method explicitly. We assume that environments reset automatically, either when they get to the end of an episode or when step() is called the first time.Note that subclasses do not implement `step()` or `reset()` directly. They instead override the `_step()` and `_reset()` methods. The time steps returned from these methods will be cached and exposed through `current_time_step()`.The `observation_spec` and the `action_spec` methods return a nest of `(Bounded)ArraySpecs` that describe the name, shape, datatype and ranges of the observations and actions respectively.In TF-Agents we repeatedly refer to nests which are defined as any tree like structure composed of lists, tuples, named-tuples, or dictionaries. These can be arbitrarily composed to maintain structure of observations and actions. We have found this to be very useful for more complex environments where you have many observations and actions. Using Standard EnvironmentsTF Agents has built-in wrappers for many standard environments like the OpenAI Gym, DeepMind-control and Atari, so that they follow our `py_environment.PyEnvironment` interface. These wrapped evironments can be easily loaded using our environment suites. Let's load the CartPole environment from the OpenAI gym and look at the action and time_step_spec.
environment = suite_gym.load('CartPole-v0') print('action_spec:', environment.action_spec()) print('time_step_spec.observation:', environment.time_step_spec().observation) print('time_step_spec.step_type:', environment.time_step_spec().step_type) print('time_step_spec.discount:', environment.time_step_spec().discount) print('time_step_spec.reward:', environment.time_step_spec().reward)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
So we see that the environment expects actions of type `int64` in [0, 1] and returns `TimeSteps` where the observations are a `float32` vector of length 4 and discount factor is a `float32` in [0.0, 1.0]. Now, let's try to take a fixed action `(1,)` for a whole episode.
action = np.array(1, dtype=np.int32) time_step = environment.reset() print(time_step) while not time_step.is_last(): time_step = environment.step(action) print(time_step)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Creating your own Python EnvironmentFor many clients, a common use case is to apply one of the standard agents (see agents/) in TF-Agents to their problem. To do this, they have to frame their problem as an environment. So let us look at how to implement an environment in Python.Let's say we want to train an agent to play the following (Black Jack inspired) card game:1. The game is played using an infinite deck of cards numbered 1...10.2. At every turn the agent can do 2 things: get a new random card, or stop the current round.3. The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.An environment that represents the game could look like this:1. Actions: We have 2 actions. Action 0: get a new card, and Action 1: terminate the current round.2. Observations: Sum of the cards in the current round.3. Reward: The objective is to get as close to 21 as possible without going over, so we can achieve this using the following reward at the end of the round: sum_of_cards - 21 if sum_of_cards <= 21, else -21
class CardGameEnv(py_environment.PyEnvironment): def __init__(self): self._action_spec = array_spec.BoundedArraySpec( shape=(), dtype=np.int32, minimum=0, maximum=1, name='action') self._observation_spec = array_spec.BoundedArraySpec( shape=(1,), dtype=np.int32, minimum=0, name='observation') self._state = 0 self._episode_ended = False def action_spec(self): return self._action_spec def observation_spec(self): return self._observation_spec def _reset(self): self._state = 0 self._episode_ended = False return ts.restart(np.array([self._state], dtype=np.int32)) def _step(self, action): if self._episode_ended: # The last action ended the episode. Ignore the current action and start # a new episode. return self.reset() # Make sure episodes don't go on forever. if action == 1: self._episode_ended = True elif action == 0: new_card = np.random.randint(1, 11) self._state += new_card else: raise ValueError('`action` should be 0 or 1.') if self._episode_ended or self._state >= 21: reward = self._state - 21 if self._state <= 21 else -21 return ts.termination(np.array([self._state], dtype=np.int32), reward) else: return ts.transition( np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Let's make sure we did everything correctly defining the above environment. When creating your own environment you must make sure the observations and time_steps generated follow the correct shapes and types as defined in your specs. These are used to generate the TensorFlow graph and as such can create hard to debug problems if we get them wrong.To validate our environment we will use a random policy to generate actions and we will iterate over 5 episodes to make sure things are working as intended. An error is raised if we receive a time_step that does not follow the environment specs.
environment = CardGameEnv() utils.validate_py_environment(environment, episodes=5)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Now that we know the environment is working as intended, let's run this environment using a fixed policy: ask for 3 cards and then end the round.
get_new_card_action = np.array(0, dtype=np.int32) end_round_action = np.array(1, dtype=np.int32) environment = CardGameEnv() time_step = environment.reset() print(time_step) cumulative_reward = time_step.reward for _ in range(3): time_step = environment.step(get_new_card_action) print(time_step) cumulative_reward += time_step.reward time_step = environment.step(end_round_action) print(time_step) cumulative_reward += time_step.reward print('Final Reward = ', cumulative_reward)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Environment WrappersAn environment wrapper takes a python environment and returns a modified version of the environment. Both the original environment and the modified environment are instances of `py_environment.PyEnvironment`, and multiple wrappers can be chained together.Some common wrappers can be found in `environments/wrappers.py`. For example:1. `ActionDiscretizeWrapper`: Converts a continuous action space to a discrete action space.2. `RunStats`: Captures run statistics of the environment such as number of steps taken, number of episodes completed etc.3. `TimeLimit`: Terminates the episode after a fixed number of steps. Example 1: Action Discretize Wrapper InvertedPendulum is a PyBullet environment that accepts continuous actions in the range `[-2, 2]`. If we want to train a discrete action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the `ActionDiscretizeWrapper` does. Compare the `action_spec` before and after wrapping:
env = suite_gym.load('Pendulum-v0') print('Action Spec:', env.action_spec()) discrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5) print('Discretized Action Spec:', discrete_action_env.action_spec())
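The other wrappers mentioned above are applied in the same way. For example, a minimal sketch of capping episode length with `wrappers.TimeLimit` (reusing `env` from the cell above; the `duration` keyword is the step budget assumed from the TF-Agents wrappers module):

```python
# end every episode after at most 100 steps; the result is still a PyEnvironment
limited_env = wrappers.TimeLimit(env, duration=100)
print('Time-limited action spec:', limited_env.action_spec())
```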
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
The wrapped `discrete_action_env` is an instance of `py_environment.PyEnvironment` and can be treated like a regular python environment. TensorFlow Environments The interface for TF environments is defined in `environments/tf_environment.TFEnvironment` and looks very similar to the Python environments. TF Environments differ from python envs in a couple of ways:* They generate tensor objects instead of arrays* TF environments add a batch dimension to the tensors generated when compared to the specs. Converting the python environments into TFEnvs allows tensorflow to parallelize operations. For example, one could define a `collect_experience_op` that collects data from the environment and adds to a `replay_buffer`, and a `train_op` that reads from the `replay_buffer` and trains the agent, and run them in parallel naturally in TensorFlow.
class TFEnvironment(object): def time_step_spec(self): """Describes the `TimeStep` tensors returned by `step()`.""" def observation_spec(self): """Defines the `TensorSpec` of observations provided by the environment.""" def action_spec(self): """Describes the TensorSpecs of the action expected by `step(action)`.""" def reset(self): """Returns the current `TimeStep` after resetting the Environment.""" return self._reset() def current_time_step(self): """Returns the current `TimeStep`.""" return self._current_time_step() def step(self, action): """Applies the action and returns the new `TimeStep`.""" return self._step(action) @abc.abstractmethod def _reset(self): """Returns the current `TimeStep` after resetting the Environment.""" @abc.abstractmethod def _current_time_step(self): """Returns the current `TimeStep`.""" @abc.abstractmethod def _step(self, action): """Applies the action and returns the new `TimeStep`."""
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
The `current_time_step()` method returns the current time_step and initializes the environment if needed.The `reset()` method forces a reset in the environment and returns the current_step.If the `action` doesn't depend on the previous `time_step` a `tf.control_dependency` is needed in `Graph` mode.For now, let us look at how `TFEnvironments` are created. Creating your own TensorFlow EnvironmentThis is more complicated than creating environments in Python, so we will not cover it in this colab. An example is available [here](https://github.com/tensorflow/agents/blob/master/tf_agents/environments/tf_environment_test.py). The more common use case is to implement your environment in Python and wrap it in TensorFlow using our `TFPyEnvironment` wrapper (see below). Wrapping a Python Environment in TensorFlow We can easily wrap any Python environment into a TensorFlow environment using the `TFPyEnvironment` wrapper.
env = suite_gym.load('CartPole-v0') tf_env = tf_py_environment.TFPyEnvironment(env) print(isinstance(tf_env, tf_environment.TFEnvironment)) print("TimeStep Specs:", tf_env.time_step_spec()) print("Action Specs:", tf_env.action_spec())
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Note the specs are now of type: `(Bounded)TensorSpec`. Usage Examples Simple Example
env = suite_gym.load('CartPole-v0') tf_env = tf_py_environment.TFPyEnvironment(env) # reset() creates the initial time_step after resetting the environment. time_step = tf_env.reset() num_steps = 3 transitions = [] reward = 0 for i in range(num_steps): action = tf.constant([i % 2]) # applies the action and returns the new TimeStep. next_time_step = tf_env.step(action) transitions.append([time_step, action, next_time_step]) reward += next_time_step.reward time_step = next_time_step np_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions) print('\n'.join(map(str, np_transitions))) print('Total reward:', reward.numpy())
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
Whole Episodes
env = suite_gym.load('CartPole-v0') tf_env = tf_py_environment.TFPyEnvironment(env) time_step = tf_env.reset() rewards = [] steps = [] num_episodes = 5 for _ in range(num_episodes): episode_reward = 0 episode_steps = 0 while not time_step.is_last(): action = tf.random.uniform([1], 0, 2, dtype=tf.int32) time_step = tf_env.step(action) episode_steps += 1 episode_reward += time_step.reward.numpy() rewards.append(episode_reward) steps.append(episode_steps) time_step = tf_env.reset() num_steps = np.sum(steps) avg_length = np.mean(steps) avg_reward = np.mean(rewards) print('num_episodes:', num_episodes, 'num_steps:', num_steps) print('avg_length', avg_length, 'avg_reward:', avg_reward)
_____no_output_____
Apache-2.0
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
`Python Programming Workshop` `Lesson 2: User-defined and built-in functions, iterators and generators` `Murat Apishev ([email protected])` `Moscow, 2021` `The range and enumerate functions`
r = range(2, 10, 3) print(type(r)) for e in r: print(e, end=' ') for index, element in enumerate(list('abcdef')): print(index, element, end=' ')
0 a 1 b 2 c 3 d 4 e 5 f
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The zip function`
z = zip([1, 2, 3], 'abc') print(type(z)) for a, b in z: print(a, b, end=' ') for e in zip('abcdef', 'abc'): print(e) for a, b, c, d in zip('abc', [1,2,3], [True, False, None], 'xyz'): print(a, b, c, d)
a 1 True x b 2 False y c 3 None z
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Defining your own functions`
def function(arg_1, arg_2=None): print(arg_1, arg_2) function(10) function(10, 20)
10 None 10 20
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
A function is also an object; its name is just a symbolic reference:
f = function f(10) print(function is f)
10 None True
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Defining your own functions`
retval = f(10) print(retval) def factorial(n): return n * factorial(n - 1) if n > 1 else 1 # recursion print(factorial(1)) print(factorial(2)) print(factorial(4))
1 2 24
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Passing arguments to a function` Arguments in Python are always passed by reference (more precisely, object references are passed: rebinding a parameter inside the function does not affect the caller, while mutating the object does)
def function(scalar, lst): scalar += 10 print(f'Scalar in function: {scalar}') lst.append(None) print(f'Scalar in function: {lst}') s, l = 5, [] function(s, l) print(s, l)
Scalar in function: 15 Scalar in function: [None] 5 [None]
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Passing arguments to a function`
def f(a, *args): print(type(args)) print([v for v in [a] + list(args)]) f(10, 2, 6, 8) def f(*args, a): print([v for v in [a] + list(args)]) print() f(2, 6, 8, a=10) def f(a, *args, **kw): print(type(kw)) print([v for v in [a] + list(args) + [(k, v) for k, v in kw.items()]]) f(2, *(6, 8), **{'arg1': 1, 'arg2': 2})
<class 'dict'> [2, 6, 8, ('arg1', 1), ('arg2', 2)]
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Variable scopes` Python has 4 main scope levels: - built-in (builtins) - this level holds all built-in objects (functions, exception classes, etc.) - global within a module (global) - everything defined at the top level of the module's code - enclosing function (enclosed) - everything defined in an outer (top-level) function - local function (local) - everything defined in an inner (nested) function. There are also scopes for loop variables, list comprehensions, etc. `The LEGB scope resolution rule for reads`
def outer_func(x): def inner_func(x): return len(x) return inner_func(x) print(outer_func([1, 2]))
2
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
Who defined the name `len`? - at the level of the nested function there is no such name, so look higher - at the level of the enclosing function there is no such name, so look higher - at the module level there is no such name, so look higher - at the builtins level the name exists, so we use it `Taking a look at builtins`
import builtins counter = 0 lst = [] for name in dir(builtins): if name[0].islower(): lst.append(name) counter += 1 if counter == 5: break lst
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
Incidentally, the same thing can be done with more pythonic code:
list(filter(lambda x: x[0].islower(), dir(builtins)))[: 5]
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Local and global variables`
x = 2 def func(): print('Inside: ', x) # read func() print('Outside: ', x) x = 2 def func(): x += 1 # write print('Inside: ', x) func() # UnboundLocalError: local variable 'x' referenced before assignment print('Outside: ', x) x = 2 def func(): x = 3 x += 1 print('Inside: ', x) func() print('Outside: ', x)
Inside: 4 Outside: 2
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The global keyword`
x = 2 def func(): global x x += 1 # write print('Inside: ', x) func() print('Outside: ', x) x = 2 def func(x): x += 1 print('Inside: ', x) return x x = func(x) print('Outside: ', x)
Inside: 3 Outside: 3
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The nonlocal keyword`
a = 0 def out_func(): b = 10 def mid_func(): c = 20 def in_func(): global a a += 100 nonlocal c c += 100 nonlocal b b += 100 print(a, b, c) in_func() mid_func() out_func()
100 110 120
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
__Main takeaway:__ do not overuse side effects when working with variables from enclosing scopes `An example of nested functions: closures` - In most cases nested functions are not needed; a flat hierarchy is both simpler and clearer - One exception is factory functions (closures)
def function_creator(n): def function(x): return x ** n return function f = function_creator(5) f(2)
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
The function object that `f` refers to stores the value of `n` inside itself `Anonymous functions` - `def` is not the only way to declare a function - `lambda` creates an anonymous (lambda) function. Such functions are often used where a definition via `def` cannot be written syntactically
def func(x): return x ** 2 func(6) lambda_func = lambda x: x ** 2 # should be an expression lambda_func(6) def func(x): print(x) func(6) lambda_func = lambda x: print(x ** 2) # as print is function in Python 3.* lambda_func(6)
36
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The built-in sorted function`
lst = [5, 2, 7, -9, -1] def abs_comparator(x): return abs(x) print(sorted(lst, key=abs_comparator)) sorted(lst, key=lambda x: abs(x)) sorted(lst, key=lambda x: abs(x), reverse=True)
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The built-in filter function`
lst = [5, 2, 7, -9, -1] f = filter(lambda x: x < 0, lst) # True condition type(f) # iterator list(f)
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The built-in map function`
lst = [5, 2, 7, -9, -1] m = map(lambda x: abs(x), lst) type(m) # iterator list(m)
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Comparing the two approaches once more` Let's write a dot-product function in both imperative and functional style:
def dot_product_imp(v, w): result = 0 for i in range(len(v)): result += v[i] * w[i] return result dot_product_func = lambda v, w: sum(map(lambda x: x[0] * x[1], zip(v, w))) print(dot_product_imp([1, 2, 3], [4, 5, 6])) print(dot_product_func([1, 2, 3], [4, 5, 6]))
32 32
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The reduce function` `functools` is a standard module with more higher-order functions. For now, let's look only at the `reduce` function:
from functools import reduce lst = list(range(1, 10)) reduce(lambda x, y: x * y, lst)
_____no_output_____
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Iteration and the iter and next functions`
r = range(3) for e in r: print(e) it = iter(r) # r.__iter__() - gives us an iterator print(next(it)) print(it.__next__()) print(next(it)) print(next(it))
0 1 2
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Iterators are often used implicitly` What a `for` loop looks like to us:
for i in 'seq': print(i)
s e q
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
How it actually works:
iterator = iter('seq') while True: try: i = next(iterator) print(i) except StopIteration: break
s e q
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`Generators` - Generators, like iterators, are meant for iterating over a collection, but are built somewhat differently - They are defined with functions that use the `yield` operator, or with generator expressions, rather than with `iter()` and `next()` calls - A generator has internal mutable state in the form of local variables, which it keeps track of automatically - A generator is a simpler way to create your own iterator than defining one directly - Every generator is an iterator, but not the other way around - Examples of generator functions: - `zip` - `enumerate` - `reversed` - `map` - `filter` `The yield keyword` - `yield` is a keyword similar in meaning to `return` - But it is used in functions that return generators - When such a function is called, the body is not executed; the function only returns a generator - On the first run the function executes from the beginning up to the first `yield` - After yielding, the function's state is preserved - On the next call the loop performs one more iteration and returns the next value - And so on, until the loop containing each `yield` in the body is exhausted - After that the generator becomes empty `A generator example`
def my_range(n): yield 'You really want to run this generator?' i = -1 while i < n: i += 1 yield i gen = my_range(3) while True: try: print(next(gen), end=' ') except StopIteration: # we want to catch this type of exceptions break for e in my_range(3): print(e, end=' ')
You really want to run this generator? 0 1 2 3
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`A peculiarity of range` `range` is not a generator, although it looks like one, since it does not store the whole sequence
print('__next__' in dir(zip([], []))) print('__next__' in dir(range(3)))
True False
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
Useful properties: - `range` objects are immutable (so they can be dictionary keys) - they have useful attributes (`len`, `index`, `__getitem__`) - they can be iterated over multiple times `The itertools module` - The module is a set of tools for working with iterators and sequences - It contains three main kinds of iterators: - infinite iterators - finite iterators - combinatoric iterators - It lets you efficiently solve small tasks such as: - iterating over an infinite stream - flattening nested lists into a single list - generating combinatorial enumerations of sequence elements - accumulating and aggregating data within a sequence `The itertools module: examples`
from itertools import count for i in count(start=0): print(i, end=' ') if i == 5: break from itertools import cycle count = 0 for item in cycle('XYZ'): if count > 4: break print(item, end=' ') count += 1
X Y Z X Y
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
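Referring back to the `range` properties listed above, a short illustrative sketch (values are arbitrary):

```python
r = range(3)
print(list(r), list(r))          # can be iterated over more than once
print(len(r), r.index(2), r[1])  # len, index and __getitem__ all work
sizes = {range(0, 10): 'small', range(10, 100): 'large'}  # immutable, hashable -> dict keys
print(sizes[range(0, 10)])
```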
`The itertools module: examples`
from itertools import accumulate for i in accumulate(range(1, 5), lambda x, y: x * y): print(i) from itertools import chain for i in chain([1, 2], [3], [4]): print(i)
1 2 3 4
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
`The itertools module: examples`
from itertools import groupby vehicles = [('Ford', 'Taurus'), ('Dodge', 'Durango'), ('Chevrolet', 'Cobalt'), ('Ford', 'F150'), ('Dodge', 'Charger'), ('Ford', 'GT')] sorted_vehicles = sorted(vehicles) for key, group in groupby(sorted_vehicles, lambda x: x[0]): for maker, model in group: print('{model} is made by {maker}'.format(model=model, maker=maker)) print ("**** END OF THE GROUP ***\n")
Cobalt is made by Chevrolet **** END OF THE GROUP *** Charger is made by Dodge Durango is made by Dodge **** END OF THE GROUP *** F150 is made by Ford GT is made by Ford Taurus is made by Ford **** END OF THE GROUP ***
MIT
lectures/02-functions.ipynb
sir-rois/mipt-python
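The overview above also mentions combinatoric iterators; a brief illustrative sketch (not from the original lecture):

```python
from itertools import product, permutations, combinations

print(list(product('AB', repeat=2)))  # Cartesian product: AA, AB, BA, BB
print(list(permutations('ABC', 2)))   # ordered pairs
print(list(combinations('ABC', 2)))   # unordered pairs
```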
***Introduction to Radar Using Python and MATLAB*** Andy Harrison - Copyright (C) 2019 Artech House Coherent Detector*** The in-phase and quadrature signal components from a coherent detector may be written as (Equation 5.13)$$ x(t) = a(t) \cos(2\pi f_0 t) \cos(\phi(t)) - a(t) \sin(2 \pi f_0 t) \sin(\phi(t)) = X_I(t) \cos(2 \pi f_0 t) - X_Q(t) \sin(2 \pi f_0 t)$$*** Begin by setting the library path
import lib_path
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
Set the sampling frequency (Hz), the start frequency (Hz), the end frequency (Hz), the amplitude modulation frequency (Hz) and amplitude (relative) for the sample signal
sampling_frequency = 100 start_frequency = 4 end_frequency = 25 am_amplitude = 0.1 am_frequency = 9
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
Calculate the bandwidth (Hz) and center frequency (Hz)
bandwidth = end_frequency - start_frequency center_frequency = 0.5 * bandwidth + start_frequency
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
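As a quick sanity check with the values set above (start = 4 Hz, end = 25 Hz), the bandwidth is 21 Hz and the center frequency is 0.5 * 21 + 4 = 14.5 Hz:

```python
print(bandwidth)         # 21
print(center_frequency)  # 14.5
```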
Set up the waveform
from numpy import arange, sin from scipy.constants import pi from scipy.signal import chirp time = arange(sampling_frequency) / sampling_frequency if_signal = chirp(time, start_frequency, time[-1], end_frequency) if_signal *= (1.0 + am_amplitude * sin(2.0 * pi * am_frequency * time))
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
Set up the keyword args
kwargs = {'if_signal': if_signal, 'center_frequency': center_frequency, 'bandwidth': bandwidth, 'sample_frequency': sampling_frequency, 'time': time}
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
Calculate the baseband in-phase and quadrature signals
from Libs.receivers import coherent_detector i_signal, q_signal = coherent_detector.iq(**kwargs)
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
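The `Libs.receivers.coherent_detector.iq` call above is the book's implementation. As a rough illustrative sketch only, Equation 5.13 can be inverted by mixing the received signal with the carrier and low-pass filtering; the function below is an assumption-laden stand-in (the filter order and cutoff are arbitrary choices), not the library's actual code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def iq_detect_sketch(signal, f0, fs, t, cutoff):
    # Mix with cos/sin at the carrier f0, then low-pass filter to keep the baseband terms
    b, a = butter(4, cutoff / (0.5 * fs))
    i_bb = filtfilt(b, a, 2.0 * signal * np.cos(2.0 * np.pi * f0 * t))
    q_bb = filtfilt(b, a, -2.0 * signal * np.sin(2.0 * np.pi * f0 * t))
    return i_bb, q_bb

# e.g. iq_detect_sketch(if_signal, center_frequency, sampling_frequency, time, 0.5 * bandwidth)
```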
Use the `matplotlib` routines to display the results
from matplotlib import pyplot as plt from numpy import real, imag # Set the figure size plt.rcParams["figure.figsize"] = (15, 10) # Display the results plt.plot(time, real(i_signal), '', label='In Phase') plt.plot(time, real(q_signal), '-.', label='Quadrature') # Set the plot title and labels plt.title('Coherent Detector', size=14) plt.xlabel('Time (s)', size=12) plt.ylabel('Amplitude (V)', size=12) # Set the tick label size plt.tick_params(labelsize=12) # Turn on the grid plt.grid(linestyle=':', linewidth=0.5) # Show the legend plt.legend(loc='upper right', prop={'size': 10})
_____no_output_____
Apache-2.0
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
Visualizing and Analyzing Jigsaw
import pandas as pd import re import numpy as np
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
In the previous section, we explored how to generate topics from a textual dataset using LDA. But how can this be used in an application? In this section, we will look at ways to read the topics and understand how they can be used. We will now import the preloaded LDA results produced in the previous section.
df = pd.read_csv("https://raw.githubusercontent.com/dudaspm/LDA_Bias_Data/main/topics.csv") df.head()
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
We will visualize these results to understand what major themes are present in them.
%%html <iframe src='https://flo.uri.sh/story/941631/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941631/?utm_source=embed&utm_campaign=story/941631' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
An Overview of the analysis From the above visualization, one anomaly we come across is that the dataset we are examining is supposed to relate to people with physical, mental and learning disabilities, yet the extracted topics contain only a small subset of words related to that theme. Topic 2 has words that address themes we expected the dataset to contain, but the dominant theme across the Top 5 topics is political terminology. (The Top 10 topics also show themes related to religion, which is quite interesting.) LDA therefore helped us understand what conversations the dataset actually consists of. From the word collection, we also notice certain words, such as 'kill', that can be categorized as 'toxic'. To analyse this further, we can classify each word with an NLP toxicity classifier. To demonstrate an example of a toxicity analysis framework, the code below shows the Unitary library in Python at work.{cite}`Detoxify` This library provides a toxicity score (on a scale of 0 to 1) for the sentence passed through it.
headers = {"Authorization": f"Bearer api_ZtUEFtMRVhSLdyTNrRAmpxXgMAxZJpKLQb"}
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
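As an alternative to the hosted API used below, the same model family can be scored locally with the `detoxify` package (this assumes `pip install detoxify` and downloads a pretrained model on first use; it is not required for the rest of this notebook):

```python
from detoxify import Detoxify

# Returns a dict of scores between 0 and 1, e.g. 'toxicity', 'insult', 'threat', ...
scores = Detoxify('original').predict("kill")
print(scores)
```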
To get access to this software, you will need to get an API key at https://huggingface.co/unitary/toxic-bert. Here is an example of what this would look like.```pythonheaders = {"Authorization": f"Bearer api_XXXXXXXXXXXXXXXXXXXXXXXXXXX"}```
import requests API_URL = "https://api-inference.huggingface.co/models/unitary/toxic-bert" def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() query({"inputs": "addict"})
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
You can replace the `<insert word here>` placeholder in the code with your own words or sentences to look at the results that are generated. This example gives an idea of how ML can be used for toxicity analysis.
query({"inputs": "<insert word here>"}) %%html <iframe src='https://flo.uri.sh/story/941681/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941681/?utm_source=embed&utm_campaign=story/941681' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
The Bias The visualization shows how contextually toxic words are derived as important words within various topics related to this dataset. Any Natural Language Processing kernel trained on this dataset can therefore produce a skewed analysis for the population in question, i.e. people with mental, physical and learning disabilities, which can lead to very discriminatory classifications. An Example To illustrate the impact better, we take the words most closely associated with the word 'mental' from the results. Below is a network graph that shows the commonly associated words. Words such as 'kill' and 'gun' appear with the closest association, which can lead the machine to contextualize the word 'mental' as associated with such words.
%%html <iframe src='https://flo.uri.sh/visualisation/6867000/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6867000/?utm_source=embed&utm_campaign=visualisation/6867000' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
It is hence important to be aware of the dataset being used to analyse a specific population. With LDA, we were able to establish that this dataset is not a good representation of the disabled community. To move towards unbiased AI, we need to perform such preliminary analysis, and more, so as not to cause unintended discrimination. The Dashboard Below is the complete data visualization dashboard of the topic analysis. Feel free to experiment and compare various labels to your liking.
%%html <iframe src='https://flo.uri.sh/visualisation/6856937/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6856937/?utm_source=embed&utm_campaign=visualisation/6856937' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
Figure 1 - Overview
df = pd.read_csv(datdir / 'fig_1.csv') scores = df[list(map(str, range(20)))].values selected = ~np.isnan(df['Selected'].values) gens_sel = np.nonzero(selected)[0] scores_sel = np.array([np.max(scores[g]) for g in gens_sel]) ims_sel = [plt.imread(str(datdir / 'images' / 'overview' / f'gen{gen:03d}.png')) for gen in gens_sel] ims_sel = np.array(ims_sel) print('gens to visualize:', gens_sel) with np.printoptions(precision=2, suppress=True): print('corresponding scores:', scores_sel) print('ims_sel shape:', ims_sel.shape) c0 = array((255,92,0)) / 255 # highlight color figure(figsize=(2.5, 0.8), dpi=150) plot(scores.mean(1)) xlim(0, 500) ylim(bottom=0) xticks((250,500)) yticks((0,50)) gca().set_xticks(np.nonzero(selected)[0], minor=True) gca().tick_params(axis='x', which='minor', colors=c0, width=1) title('CaffeNet layer fc8, unit 1') xlabel('Generation') ylabel('Activation') savefig(figdir / f'overview-evo_scores.png', dpi=300, bbox_inches='tight') savefig(figdir / f'overview-evo_scores.svg', dpi=300, bbox_inches='tight') def make_canvas(ims, nrows=None, ncols=None, margin=15, margin_colors=None): if margin_colors is not None: assert len(ims) == len(margin_colors) if ncols is None: assert nrows is not None ncols = int(np.ceil(len(ims) / nrows)) else: nrows = int(np.ceil(len(ims) / ncols)) im0 = ims.__iter__().__next__() imsize = im0.shape[0] size = imsize + margin w = margin + size * ncols h = margin + size * nrows canvas = np.ones((h, w, 3), dtype=im0.dtype) for i, im in enumerate(ims): ih = i // ncols iw = i % ncols if len(im.shape) > 2 and im.shape[-1] == 4: im = im[..., :3] if margin_colors is not None: canvas[size * ih:size * (ih + 1) + margin, size * iw:size * (iw + 1) + margin] = margin_colors[i] canvas[margin + size * ih:margin + size * ih + imsize, margin + size * iw:margin + size * iw + imsize] = im return canvas scores_sel_max = scores_sel.max() margin_colors = np.array([(s / scores_sel_max * c0) for s in scores_sel]) for i, im_idc in enumerate((slice(0,5), slice(5,None))): canvas = make_canvas(ims_sel[im_idc], nrows=1, margin_colors=margin_colors[im_idc]) figure(dpi=150) imshow(canvas) # turn off axis decorators to make tight plot ax = gca() ax.tick_params(labelcolor='none', bottom=False, left=False, right=False) ax.set_frame_on(False) for sp in ax.spines.values(): sp.set_visible(False) ax.xaxis.set_ticks([]) ax.yaxis.set_ticks([]) plt.imsave(figdir / f'overview-evo_ims_{i}.png', canvas)
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Define Custom Violinplot
def violinplot2(data=None, x=None, y=None, hue=None, palette=None, linewidth=1, orient=None, order=None, hue_order=None, x_disp=None, palette_per_violin=None, hline_at_1=True, legend_palette=None, legend_kwargs=None, width=0.7, control_width=0.8, control_y=None, hues_share_control=False, ax=None, **kwargs): """ width: width of a group of violins ("hues") as fraction of between-group distance contorl_width: width of a group of bars (control) as fraction of hue width """ if order is None: n_groups = len(set(data[x])) if orient != 'h' else len(set(data[y])) else: n_groups = len(order) extra_plot_handles = [] if ax is None: ax = plt.gca() if orient == 'h': fill_between = ax.fill_betweenx plot = ax.vlines else: fill_between = ax.fill_between plot = ax.hlines ############ drawing ############ if not isinstance(y, str) and hasattr(y, '__iter__'): ys = y else: ys = (y,) for y in ys: ax = sns.violinplot(data=data, x=x, y=y, hue=hue, ax=ax, palette=palette, linewidth=linewidth, orient=orient, width=width, order=order, hue_order=hue_order, **kwargs) if legend_kwargs is not None: lgnd = plt.legend(**legend_kwargs) else: lgnd = None if hline_at_1: hdl = plot(1, -0.45, n_groups-0.55, linestyle='--', linewidth=.75, zorder=-3) extra_plot_handles.append(hdl) ############ drawing ############ ############ styling ############ if orient != 'h': ax.xaxis.set_ticks_position('none') if x_disp is not None: ax.set_xticklabels(x_disp) # enlarge the circle for median median_marks = [o for o in ax.get_children() if isinstance(o, matplotlib.collections.PathCollection)] for o in median_marks: o.set_sizes([10,]) # recolor the violins violins = np.array([o for o in ax.get_children() if isinstance(o, matplotlib.collections.PolyCollection)]) violins = violins[np.argsort([int(v.get_label().replace('_collection','')) for v in violins])] for i, o in enumerate(violins): if palette_per_violin is not None: i %= len(palette_per_violin) c = palette_per_violin[i] if len(c) == 2: o.set_facecolor(c[0]) o.set_edgecolor(c[1]) else: o.set_facecolor(c) o.set_edgecolor('none') else: o.set_edgecolor('none') # recolor the legend patches if lgnd is not None: for v in (legend_palette, palette_per_violin, palette): if v is not None: legend_palette = v break if legend_palette is not None: for o, c in zip(lgnd.get_patches(), legend_palette): o.set_facecolor(c) o.set_edgecolor('none') ############ styling ############ ############ control ############ # done last to not interfere with coloring violins if control_y is not None: assert control_y in df.columns assert hue is not None and order is not None and hue_order is not None nhues = len(hue_order) vw = width # width per control (long) if not hues_share_control: vw /= nhues cw = vw * control_width # width per control (short) ctl_hdl = None for i, xval in enumerate(order): if not hues_share_control: for j, hval in enumerate(hue_order): df_ = df[(df[x] == xval) & (df[hue] == hval)] if not len(df_): continue lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75)) xs_qtl = i + vw * (-nhues/2 + 1/2 + j) + cw/2 * np.array((-1,1)) xs_med = i + vw * (-nhues/2 + j) + vw * np.array((0,1)) ctl_hdl = fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2) # upper & lower quartiles plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1) # median else: df_ = df[(df[x] == xval)] if not len(df_): continue lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75)) xs_qtl = i + cw/2 * np.array((-1,1)) xs_med = i + vw/2 * np.array((-1,1)) ctl_hdl = fill_between(xs_qtl, lq, uq, 
color=(0.9,0.9,0.9), zorder=-2) plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1) extra_plot_handles.append(ctl_hdl) ############ control ############ return n_groups, ax, lgnd, extra_plot_handles def default_ax_lims(ax, n_groups=None, orient=None): if orient == 'h': ax.set_xticks((0,1,2,3)) ax.set_xlim(-0.25, 3.5) else: if n_groups is not None: ax.set_xlim(-0.65, n_groups-0.35) ax.set_yticks((0,1,2,3)) ax.set_ylim(-0.25, 3.5) def rotate_xticklabels(ax, rotation=10, pad=5): for i, tick in enumerate(ax.xaxis.get_major_ticks()): if tick.label.get_text() == 'none': tick.set_visible(False) tick.label.set(va='top', ha='center', rotation=rotation, rotation_mode='anchor') tick.set_pad(pad)
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Figure 3 - Compare Target Nets, Layers
df = pd.read_csv(datdir/'fig_2.csv') df = df[~np.isnan(df['Rel_act'])] # remove invalid data df.head() nets = ('caffenet', 'resnet-152-v2', 'resnet-269-v2', 'inception-v3', 'inception-v4', 'inception-resnet-v2', 'placesCNN') layers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'), 'resnet-152-v2': ('res15_eletwise', 'res25_eletwise', 'res35_eletwise', 'classifier'), 'resnet-269-v2': ('res25_eletwise', 'res45_eletwise', 'res60_eletwise', 'classifier'), 'inception-v3': ('pool2_3x3_s2', 'reduction_a_concat', 'reduction_b_concat', 'classifier'), 'inception-v4': ('inception_stem3', 'reduction_a_concat', 'reduction_b_concat', 'classifier'), 'inception-resnet-v2': ('stem_concat', 'reduction_a_concat', 'reduction_b_concat', 'classifier'), 'placesCNN': ('conv2', 'conv4', 'fc6', 'fc8')} get_layer_level = lambda r: ('Early', 'Middle', 'Late', 'Output')[layers[r[1]['Classifier']].index(r[1]['Layer'])] df['Layer_level'] = list(map(get_layer_level, df.iterrows())) x_disp = ('CaffeNet', 'ResNet-152-v2', 'ResNet-269-v2', 'Inception-v3', 'Inception-v4', 'Inception-ResNet-v2', 'PlacesCNN') palette = get_cmap('Blues')(np.linspace(0.3,0.8,4)) fig = figure(figsize=(6.3,2.5), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='Classifier', y='Rel_act', hue='Layer_level', cut=0, order=nets, hue_order=('Early', 'Middle', 'Late', 'Output'), x_disp=x_disp, legend_kwargs=dict(title='Evolved,\ntarget layer', loc='upper left', bbox_to_anchor=(1,1.05)), palette_per_violin=palette, control_y='Rel_exp_max') default_ax_lims(ax, n_groups) rotate_xticklabels(ax) ylabel('Relative activation') xlabel('Target architecture') # another legend legend(handles=hdls, labels=['Overall', 'In 10k'], title='ImageNet max', loc='upper left', bbox_to_anchor=(1,0.4)) ax.add_artist(lgnd) savefig(figdir / f'nets.png', dpi=300, bbox_inches='tight') savefig(figdir / f'nets.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Figure 5 - Compare Generators Compare representation "depth"
df = pd.read_csv(datdir / 'fig_5-repr_depth.csv') df = df[~np.isnan(df['Rel_act'])] df['Classifier, layer'] = [', '.join(tuple(a)) for a in df[['Classifier', 'Layer']].values] df.head() nets = ('caffenet', 'inception-resnet-v2') layers = {'caffenet': ('conv2', 'fc6', 'fc8'), 'inception-resnet-v2': ('classifier',)} generators = ('raw_pixel', 'deepsim-norm1', 'deepsim-norm2', 'deepsim-conv3', 'deepsim-conv4', 'deepsim-pool5', 'deepsim-fc6', 'deepsim-fc7', 'deepsim-fc8') xorder = ('caffenet, conv2', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier') x_disp = ('CaffeNet, conv2', 'CaffeNet, fc6', 'CaffeNet, fc8', 'Inception-ResNet-v2,\nclassifier') lbl_disp = ('Raw pixel',) + tuple(v.replace('deepsim', 'DeePSiM') for v in generators[1:]) palette = ([[0.75, 0.75, 0.75]] + # raw pixel sns.husl_palette(len(generators)-1, h=0.05, l=0.65)) # deepsim 1--8 fig = figure(figsize=(5.6,2.4), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='Classifier, layer', y='Rel_act', hue='Generator', cut=0, linewidth=.75, width=0.9, control_width=0.9, order=xorder, hue_order=generators, x_disp=x_disp, legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(1,1.05)), palette=palette, control_y='Rel_exp_max', hues_share_control=True) default_ax_lims(ax, n_groups) ylabel('Relative activation') xlabel('Target layer') # change legend label text for txt, lbl in zip(lgnd.get_texts(), lbl_disp): txt.set_text(lbl) savefig(figdir / f'generators.png', dpi=300, bbox_inches='tight') savefig(figdir / f'generators.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Compare training dataset
df = pd.read_csv(datdir / 'fig_5-training_set.csv') df = df[~np.isnan(df['Rel_act'])] df['Classifier, layer'] = [', '.join(tuple(a)) for a in df[['Classifier', 'Layer']].values] df.head() nets = ('caffenet', 'inception-resnet-v2') cs = ('caffenet', 'placesCNN', 'inception-resnet-v2') layers = {c: ('conv2', 'conv4', 'fc6', 'fc8') for c in cs} layers['inception-resnet-v2'] = ('classifier',) gs = ('deepsim-fc6', 'deepsim-fc6-places365') cls = ('caffenet, conv2', 'caffenet, conv4', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier', 'placesCNN, conv2', 'placesCNN, conv4', 'placesCNN, fc6', 'placesCNN, fc8') cls_spaced = cls[:5] + ('none',) + cls[5:] x_disp = tuple(f'CaffeNet, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8')) + \ ('Inception-ResNet-v2,\nclassifier', 'none') + \ tuple(f'PlacesCNN, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8')) lbl_disp = ('DeePSiM-fc6', 'DeePSiM-fc6-Places365') palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4)) for main_c in ('Blues', 'Oranges')] palette = list(np.array(palette).transpose(1,0,2).reshape(-1, 4)) palette = palette + palette[-2:] + palette fig = figure(figsize=(5.15,1.8), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='Classifier, layer', y='Rel_act', hue='Generator', cut=0, split=True, inner='quartile', order=cls_spaced, hue_order=gs, x_disp=x_disp, legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(.97,1.05)), palette_per_violin=palette, legend_palette=palette[4:], control_y='Rel_exp_max', hues_share_control=True) rotate_xticklabels(ax, rotation=15, pad=10) ylabel('Relative activation') xlabel('Target layer') # change legend label text for txt, lbl in zip(lgnd.get_texts(), lbl_disp): txt.set_text(lbl) savefig(figdir / f'generators2.png', dpi=300, bbox_inches='tight') savefig(figdir / f'generators2.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Figure 4 - Compare Inits
layers = ('conv2', 'conv4', 'fc6', 'fc8') layers_disp = tuple(v.capitalize() for v in layers)
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Rand inits, fraction change
df = pd.read_csv(datdir/'fig_4-rand_init.csv').set_index(['Layer', 'Unit', 'Init_seed']) df = (df.drop(0, level='Init_seed') - df.xs(0, level='Init_seed')).mean(axis=0,level=('Layer','Unit')) df = df.rename({'Rel_act': 'Fraction change'}, axis=1) df = df.reset_index() df.head() palette = get_cmap('Blues')(np.linspace(0.2,0.9,6)[1:-1]) fig = figure(figsize=(1.75,1.5), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='Layer', y='Fraction change', cut=0, width=0.9, palette=palette, order=layers, x_disp=layers_disp, hline_at_1=False) xlabel('Target CaffeNet layer') ylim(-0.35, 0.35) yticks((-0.25,0,0.25)) ax.set_yticklabels([f'{t:.2f}' for t in (-0.25,0,0.25)]) ax.set_yticks(np.arange(-0.3,0.30,0.05), minor=True) savefig(figdir / f'inits-change.png', dpi=300, bbox_inches='tight') savefig(figdir / f'inits-change.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Rand inits, interpolation
df = pd.read_csv(datdir/'fig_4-rand_init_interp.csv').set_index(['Layer', 'Unit', 'Seed_i0', 'Seed_i1']) df = df.mean(axis=0,level=('Layer','Unit')) df2 = pd.read_csv(datdir/'fig_4-rand_init_interp-2.csv').set_index(['Layer', 'Unit']) # control conditions df2_normed = df2.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0) df_normed = df.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0) df_normed.head() fig, axs = subplots(1, 2, figsize=(3.5,1.5), dpi=150) subplots_adjust(wspace=0.5) interp_xs = np.array([float(i[i.rfind('_')+1:]) for i in df.columns]) for ax, df_ in zip(axs, (df, df_normed)): df_mean = df_.mean(axis=0, level='Layer') df_std = df_.std(axis=0, level='Layer') for l, ld, c in zip(layers, layers_disp, palette): m = df_mean.loc[l].values s = df_std.loc[l].values ax.plot(interp_xs, m, c=c, label=ld) ax.fill_between(interp_xs, m-s, m+s, fc=c, ec='none', alpha=0.1) # plot control xs2 = (interp_xs.min(), interp_xs.max()) axs[0].hlines(1, *xs2, linestyle='--', linewidth=1) for l, c in zip(layers, palette): # left subplot: relative activation df_ = df2.loc[l] mq = np.nanmedian(df_['Rel_ImNet_median_act'].values) axs[0].plot(xs2, (mq, mq), color=c, linewidth=1.15, zorder=-2) # right subplot: normalized to endpoints df_ = df2_normed.loc[l] for k, ls, lw in zip(('Rel_exp_max', 'Rel_ImNet_median_act'), ('--','-'), (1, 1.15)): mq = np.nanmedian(df_[k].values) axs[1].plot(xs2, (mq, mq), color=c, ls=ls, linewidth=lw, zorder=-2) axs[0].set_yticks((0, 1, 2)) axs[1].set_yticks((0, 0.5, 1)) axs[0].set_ylabel('Relative activation') axs[1].set_ylabel('Normalized activation') for ax in axs: ax.set_xlabel('Interpolation location') lgnd = axs[-1].legend(loc='upper left', bbox_to_anchor=(1.05, 1.05)) legend(handles=[Line2D([0], [0], color='k', lw=1, ls='--', label='Max'), Line2D([0], [0], color='k', lw=1.15, label='Median')], title='ImageNet ref.', loc='upper left', bbox_to_anchor=(1.05,0.3)) ax.add_artist(lgnd) savefig(figdir / f'inits-interp.png', dpi=300, bbox_inches='tight') savefig(figdir / f'inits-interp.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Per-neuron inits
df = pd.read_csv(datdir/'fig_4-per_neuron_init.csv') df.head() hue_order = ('rand', 'none', 'worst_opt', 'mid_opt', 'best_opt', 'worst_ivt', 'mid_ivt', 'best_ivt') palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4)) for main_c in ('Blues', 'Greens', 'Purples')] palette = np.concatenate([[ palette[0][i]] * 1 + [palette[1][i]] * 3 + [palette[2][i]] * 3 for i in range(4)]) palette = tuple(palette) + tuple(('none', c) for c in palette) fig = figure(figsize=(6.3,2), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='Layer', y=('Rel_act', 'Rel_act_init'), hue='Init_name', cut=0, order=layers, hue_order=hue_order, x_disp=x_disp, palette_per_violin=palette) ylabel('Relative activation') ylabel('Target CaffeNet layer') # create custom legends # for init methods legend_elements = [ matplotlib.patches.Patch(facecolor=palette[14+3*i], edgecolor='none', label=l) for i, l in enumerate(('Random', 'Opt', 'Ivt'))] lgnd1 = legend(handles=legend_elements, title='Init. method', loc='upper left', bbox_to_anchor=(1,1.05)) # for generation condition legend_elements = [ matplotlib.patches.Patch(facecolor='gray', edgecolor='none', label='Final'), matplotlib.patches.Patch(facecolor='none', edgecolor='gray', label='Initial')] ax.legend(handles=legend_elements, title='Generation', loc='upper left', bbox_to_anchor=(1,.45)) ax.add_artist(lgnd1) savefig(figdir / f'inits-per_neuron.png', dpi=300, bbox_inches='tight') savefig(figdir / f'inits-per_neuron.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Figure 6 - Compare Optimizers & Stoch Scales Compare optimizers
df = pd.read_csv(datdir/'fig_6-optimizers.csv') df['OCL'] = ['_'.join(v) for v in df[['Optimizer','Classifier','Layer']].values] df.head() opts = ('genetic', 'FDGD', 'NES') layers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'), 'inception-resnet-v2': ('classifier',)} cls = [(c, l) for c in layers for l in layers[c]] xorder = tuple(f'{opt}_{c}_{l}' for c in layers for l in layers[c] for opt in (opts + ('none',)))[:-1] x_disp = ('CaffeNet, conv2', 'CaffeNet, conv4', 'CaffeNet, fc6', 'CaffeNet, fc8', 'Inception-ResNet-v2,\nclassifier') opts_disp = ('Genetic', 'FDGD', 'NES') palette = [get_cmap(main_c)(np.linspace(0.3,0.8,4)) for main_c in ('Blues', 'Oranges', 'Greens')] palette = np.concatenate([ np.concatenate([[palette[j][i], palette[j][i]/2+0.5] for j in range(3)]) for i in (0,1,2,3,3)]) fig = figure(figsize=(6.75,2.75), dpi=150) n_groups, ax, lgnd, hdls = violinplot2( data=df, x='OCL', y='Rel_act', hue='Noisy', cut=0, inner='quartiles', split=True, width=1, order=xorder, palette_per_violin=palette) default_ax_lims(ax, n_groups) xticks(np.arange(1,20,4), labels=x_disp) xlabel('Target layer', labelpad=0) ylabel('Relative activation') # create custom legends # for optimizers legend_patches = [matplotlib.patches.Patch(facecolor=palette[i], edgecolor='none', label=opt) for i, opt in zip(range(12,18,2), opts_disp)] lgnd1 = legend(handles=legend_patches, title='Optimization alg.', loc='upper left', bbox_to_anchor=(0,1)) # for noise condition legend_patches = [matplotlib.patches.Patch(facecolor=(0.5,0.5,0.5), edgecolor='none', label='Noiseless'), matplotlib.patches.Patch(facecolor=(0.8,0.8,0.8), edgecolor='none', label='Noisy')] legend(handles=legend_patches, loc='upper right', bbox_to_anchor=(1,1)) ax.add_artist(lgnd1) # plot control group_width_ = 4 for i, cl in enumerate(cls): i = i * group_width_ + 1 df_ = df[(df['Classifier'] == cl[0]) & (df['Layer'] == cl[1])] lq, mq, uq = np.nanpercentile(df_['Rel_exp_max'].values, (25, 50, 75)) xs_qtl = i+np.array((-1,1))*group_width_*0.7/2 xs_med = i+np.array((-1,1))*group_width_*0.75/2 fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2) plot(xs_med, (mq, mq), color=(0.5,0.5,0.5), linewidth=1.15, zorder=-1) savefig(figdir / f'optimizers.png', dpi=300, bbox_inches='tight') savefig(figdir / f'optimizers.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Compare varying amounts of noise
df = pd.read_csv(datdir/'fig_6-stoch_scales.csv') df = df[~np.isnan(df['Rel_noise'])] df['Stoch_scale_plot'] = [str(int(v)) if ~np.isnan(v) else 'None' for v in df['Stoch_scale']] df.head() layers = ('conv2', 'conv4', 'fc6', 'fc8') stoch_scales = list(map(str, (5, 10, 20, 50, 75, 100, 250))) + ['None'] stoch_scales_disp = stoch_scales[:-1] + ['No\nnoise'] stat_keys = ('Self_correlation', 'Rel_noise', 'SNR') stat_keys_disp = ('Self correlation', 'Stdev. : mean ratio', 'Signal-to-noise ratio') palette = [get_cmap('Blues')(np.linspace(0.3,0.8,4))[2]] # to match previous color # calculate noise statstics and define their formatting format_frac = lambda v: ('%.2f' % v)[1:] if (0 < v < 1) else '0' if v == 0 else str(v) def format_sci(v): v = '%.0e' % v if v == 'inf': return v m, s = v.split('e') s = int(s) if s: if False: #s > 1: m = re.split('0+$', m)[0] m += 'e%d' % s else: m = str(int((float(m) * np.power(10, s)))) return m fmts = (format_frac, format_frac, format_sci) byl_byss_stats = {k: {} for k in stat_keys} for l in layers: df_ = df[df['Layer'] == l] stats = {k: [] for k in stat_keys} for ss in stoch_scales: df__ = df_[df_['Stoch_scale_plot'] == ss] for k in stat_keys: stats[k].append(np.median(df__[k])) for k in stats.keys(): byl_byss_stats[k][l] = stats[k] fig, axs = subplots(1, 4, figsize=(5.25, 2), dpi=150, sharex=True, sharey=True, squeeze=False) axs = axs.flatten() subplots_adjust(wspace=0.05) for l, ax in zip(layers, axs): df_ = df[df['Layer'] == l] n_groups, ax, lgnd, hdls = violinplot2( data=df_, x='Rel_act', y='Stoch_scale_plot', orient='h', cut=0, width=.85, scale='width', palette=palette, ax=ax) ax.set_title(f'CaffeNet, {l}', fontsize=8) default_ax_lims(ax, n_groups, orient='h') ax.set_xlabel(None) # append more y-axes to last axis pars = [twinx(ax) for _ in range(len(stat_keys))] ylim_ = ax.get_ylim() for i, (par, k, fmt, k_disp) in enumerate(zip(pars, stat_keys, fmts, stat_keys_disp)): par.set_frame_on(True) par.patch.set_visible(False) par.spines['right'].set_visible(True) par.yaxis.set_ticks_position('right') par.yaxis.set_label_position('right') par.yaxis.labelpad = 2 par.spines['right'].set_position(('axes', 1+.6*i)) par.set_ylabel(k_disp) par.set_yticks(range(len(stoch_scales))) par.set_yticklabels(map(fmt, byl_byss_stats[k][l])) par.set_ylim(ylim_) axs[0].set_ylabel('Expected max firing rate, spks') axs[0].set_yticklabels(stoch_scales_disp) for ax in axs[1:]: ax.set_ylabel(None) ax.yaxis.set_tick_params(left=False) # joint ax = fig.add_subplot(111, frameon=False) ax.tick_params(labelcolor='none', bottom=False, left=False, right=False) ax.set_frame_on(False) ax.set_xlabel('Relative activation') savefig(figdir / 'stoch_scales.png', dpi=300, bbox_inches='tight') savefig(figdir / 'stoch_scales.svg', dpi=300, bbox_inches='tight')
_____no_output_____
MIT
figure_data/Make Plots.ipynb
willwx/XDream
Libraries and auxiliary functions
#load the libraries from time import sleep from kafka import KafkaConsumer import datetime as dt import pygeohash as pgh #fuctions to check the location based on the geo hash (precision =5) #function to check location between 2 data def close_location (data1,data2): print("checking location...of sender",data1.get("id")," and sender" , data2.get("id")) #with the precision =5 , we find the location that close together with the radius around 2.4km if data1.get("geohash")== data2.get("geohash"): print("=>>>>>sender",str(data1.get("id")),"location near ", "sender",str(data2.get("id")),"location") else: print('>>>not close together<<<') #function to check location between the joined data and another data (e.g hotspot data) def close_location_2 (data1,data2): print("checking location...of joined data id:",data1.get("id")," and sender" , data2.get("id")) #with the precision =5 , we find the location that close together with the radius 2.4km if data1.get("geohash")== data2.get("geohash"): print("=>>>> location",str(data1.get("geohash")),"location near ", str(data2.get("geohash")),"location") else: print('>>>not close together<<<') # check location of 2 climate data stored in the list def close_location_in_list(a_list): print('check 2 climate location data') data_1 = a_list[0] data_2 = a_list[1] close_location (data_1,data_2) #auxilary function to handle the average and join of the json file #function to merge satellite data def merge_sat(data1,data2): result ={} result["_id"] = data1.get("_id") # take satellite _id ,we will store this joined data to the hotspot collection result["created_time"] = data1.get("created_time") #average the result of the location result['surface_temperature_celsius'] = (float(data1.get("surface_temperature_celsius"))+float(data2.get("surface_temperature_celsius")))/2 result["confidence"] = (float(data1.get("confidence"))+float(data2.get("confidence")))/2 #reassign the location like the initial data structure result['geohash'] = data2.get('geohash') result["location"] = data1.get("location") return result # function to join climate data and satellite data def join_data_cli_sat(climData,satData): result={} #get location and id of the join data result["_id"] = climData.get("_id") # take climate _id ,we will store this joined data to the climate collection result['geohash'] = climData.get('geohash') result["location"] = climData.get("location") result["created_time"] = climData.get("created_time") #get climate data result["air_temperature_celsius"] = climData.get("air_temperature_celsius") result["relative_humidity"] = climData.get("relative_humidity") result["max_wind_speed"] = climData.get("max_wind_speed") result["windspeed_knots"] = climData.get("windspeed_knots") result["precipitation"] = climData.get("precipitation") #get satellite data result["surface_temperature_celsius"] = satData.get("surface_temperature_celsius") result["confidence"] = satData.get("confidence") result["hotspots"] = satData.get("_id") #reference to the hotspot data like in the task A_B return result
_____no_output_____
MIT
Assignment_TaskC_Streaming_Application.ipynb
tonbao30/Parallel-dataprocessing-simulation
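A quick illustrative check of the geohash comparison the helper functions above rely on (the coordinates are made up, not taken from the streams): at precision 5 a geohash cell is roughly 4.9 km by 4.9 km, so nearby points usually share the same hash.

```python
import pygeohash as pgh

a = pgh.encode(-37.8100, 144.9600, precision=5)
b = pgh.encode(-37.8150, 144.9650, precision=5)
print(a, b, a == b)  # equal hashes mean both points fall in the same precision-5 cell
```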
Streaming Application
import os os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 pyspark-shell' import sys import time import json from pymongo import MongoClient from pyspark import SparkContext, SparkConf from pyspark.streaming import StreamingContext from pyspark.streaming.kafka import KafkaUtils def sendDataToDB(iter): client = MongoClient() db = client.fit5148_assignment_db # MongoDB design sat_col = db.hotspot #to store satellite data and joined satellite data # to store the join between climate and satellite clim_col = db.climate #to store the climate data #list of senders per iter sender = [] #variable to store the data from 3 unique senders per iter climList = [] satData_2 = {} satData_3 = {} ##################################### PARSING THE DATA FROM SENDERS PER ITER########################################### for record in iter: sender.append(record[0]) data_id = json.loads(record[1]) data = data_id.get('data') if record[0] == "sender_2" : #parse AQUA satelite data #main data #add "AQUA" string to the "_id" to handle the case when 2 satellite data come at the same time #to make sure the incomming data from AQUA at a specific time is unique satData_2["_id"] = "AQUA" +str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")) satData_2["id"] = data_id.get("sender_id") #unique sender_id #use datetime as ISO format for readable in mongoDB satData_2["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S") # parse other data satData_2["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))} satData_2["surface_temperature_celsius"] = float(data.get("surface_temp")) satData_2["confidence"] = float(data.get("confidence")) geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5) satData_2["geohash"] = geohash #unique_location if record[0] == "sender_3": #parse TERRA satelite data #main data #add "TERRA" string to the "_id" to handle the case when 2 satellite data come at the same time #to make sure the incomming data for TERRA at a specific time is unique satData_3["_id"] = "TERRA" +str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")) satData_3["id"] = data_id.get("sender_id") #unique sender_id #use datetime as ISO format for readable in mongoDB satData_3["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S") # parse other data satData_3["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))} satData_3["surface_temperature_celsius"] = float(data.get("surface_temp")) satData_3["confidence"] = float(data.get("confidence")) geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5) satData_3["geohash"] = geohash #unique_location if record[0] == "sender_1": #parse climate data climData = {} #main data #add "CLIM" string to the "_id" to handle to make sure the incomming data for #climate at a specific time is unique climData["_id"] = "CLIM" + str(dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S")) climData["id"] = data_id.get("sender_id") #unique sender_id #use datetime as ISO format for readable in mongoDB climData["created_time"] = dt.datetime.strptime(str(data_id.get("created_time")), "%Y-%m-%dT%H:%M:%S") climData["location"] = {"latitude" : float(data.get("lat")), "longitude" : float(data.get("lon"))} climData["air_temperature_celsius"] = float(data.get("air_temp")) climData["relative_humidity"] = 
float(data.get("relative_humid")) climData["max_wind_speed"] = float(data.get("max_wind_speed")) climData["windspeed_knots"] = float(data.get("windspeed")) climData["precipitation"] = data.get("prep") geohash = pgh.encode(float(data.get("lat")),float(data.get("lon")),precision=5) climData["geohash"] = geohash climList.append(climData) uniq_sender_id = set(sender) #check unique sender for each iter ################################ PERFOMING JOIN AND CHECK LOCATION THEN PUSH TO MONGODB ################################## ####################### Received only from unique one sender #for climate data, there will be the case with on 2 streams of climate data go throught the app if len(uniq_sender_id) == 1 and "sender_1" in uniq_sender_id:#store to climate data to mongoDB print("---------------------received CLIMATE data------------------------") try: #find close location in climate data and print out if len(climList) > 1: #check 2 climate location data close_location_in_list(climList) for data in climList: clim_col.insert(data) except Exception as ex: print("Exception Occured. Message: {0}".format(str(ex))) # if there is one satellite data (AQUA), there will be no case with 2 same satelite data if len(uniq_sender_id) == 1 and "sender_2" in uniq_sender_id:#store to climate data to mongoDB print("---------------------received AQUA data------------------------") try: sat_col.insert(satData_2) except Exception as ex: print("Exception Occured. Message: {0}".format(str(ex))) # if there is one satellite data (TERRA) , there will be no case with 2 same satelite data if len(uniq_sender_id) == 1 and "sender_3" in uniq_sender_id:#store to climate data to mongoDB print("---------------------received TERRA data------------------------") try: sat_col.insert(satData_3) except Exception as ex: print("Exception Occured. Message: {0}".format(str(ex))) ########################## Received from 2 unique senders elif len(sender) == 2 and len(uniq_sender_id) == 2: print("---------------------received 2 streams------------------------") #will have 1 case, because there will be at least 1 climate data #if the consummer received 2, that will be the climat data and one sat data #or 2 climate data because we assume that there is at least 1 climate data in the stream try: for climate in climList: if len(satData_3)!=0: #check location close_location(climate,satData_3) #check lat lon first!!! print('---checking TERRA and Climate location---') if satData_3["location"] == climate["location"]: print('joining....') join_cli_sat = join_data_cli_sat(climate,satData_3) clim_col.insert(join_cli_sat) sat_col.insert(satData_3) else: print('no join') sat_col.insert(satData_3) clim_col.insert(climate) elif len(satData_2)!=0: #check close location close_location(climate,satData_2) print('---checking AQUA and Climate location---') #check lat lon first!!! 
if satData_2["location"] == climate["location"]: print('joining....') join_cli_sat = join_data_cli_sat(climate,satData_2) clim_col.insert(join_cli_sat) sat_col.insert(satData_2) else: print('no join') sat_col.insert(satData_2) clim_col.insert(climate) else: #received only 2 climate data print('received 2 climate data') clim_col.insert(climate) # if we received 2 sattelite data only (rare case, we ran out of climate data) if len(climList) == 0: if len(satData_3)!=0 and len(satData_2)!=0: #check location close_location(satData_3,satData_2) print('---checking AQUA and TERRA location---') if satData_2["location"] == satData_3["location"]: print('joining....') sat_data = merge_sat(satData_2,satData_3) #insert the data into the mongo with handling the exceptions : duplicate sat_col.insert(sat_data) else: sat_col.update(satData_3, satData_3, upsert=True) sat_col.update(satData_2, satData_2, upsert=True) except Exception as ex: print("Exception Occured. Message: {0}".format(str(ex))) #exception will occur with empty satelite data #########################################################Received 3 stream ########################## Received from 2 unique sender #we assume that there is at least 1 climate data in the stream , so if we have 3 streams of data # there will be 2 climate data and 1 satelite data because the app process 10 secs batch # if received 3 streams, there will be 2 climate data and 1 satellite data if len(sender) == 3: print("---------------------received 3 streams------------------------") try: if len(climList) > 1: #check 2 climate location data close_location_in_list(climList) for climate2 in climList: if len(satData_3)!=0: #check location close_location(climate2,satData_3) print('---checking TERRA and Climate location---') if satData_3["location"] == climate2["location"]: print('joining....') join_data = join_data_cli_sat(climate2,satData_3) clim_col.insert(join_data) sat_col.update(satData_3, satData_3, upsert=True) else: print('no join') clim_col.insert(climate2) #insert the data into the mongo with handling the exceptions : duplicate sat_col.update(satData_3, satData_3, upsert=True) elif len(satData_2)!=0: #check location close_location(climate2,satData_2) print('---checking AQUA and Climate location---') if satData_2["location"] == climate2["location"]: print('joining....') join_data = join_data_cli_sat(climate2,satData_2) clim_col.insert(join_data) sat_col.update(satData_2, satData_2, upsert=True) else: print('no join') clim_col.insert(climate2) #insert the data into the mongo with handling the exceptions : duplicate sat_col.update(satData_2, satData_2, upsert=True) except Exception as ex: print("Exception Occured. 
Message: {0}".format(str(ex))) ########################################Received 4 streams of data################################# # There will be 2 climate data and 2 satellite data from AQUA and TERRA elif len(sender) ==4 : # 4 will have 2 climate data and 2 sat data print("---------------------received 4 streams------------------------") try: if len(climList) > 1: #check 2 climate location data close_location_in_list(climList) for climate2 in climList: print('---checking AQUA , TERRA and Climate location---') #location sat2=sat3=climate if (satData_2["location"] == satData_3["location"])\ and (satData_2["location"] == climate2["location"]): print('joining....') #join 2 satellite data sat_data = merge_sat(satData_2,satData_3) sat_col.update(sat_data, sat_data, upsert=True) #join with the climate file final_data = join_data_cli_sat(climate2,sat_data) clim_col.insert(final_data) #location sat2=sat3 elif (satData_2["location"] == satData_3["location"])\ and (satData_2["location"] != climate2["location"]): print('joining....') sat_data = merge_sat(satData_2,satData_3) #insert the data into the mongo with handling the exceptions : duplicate sat_col.update(sat_data, sat_data, upsert=True) clim_col.insert(climate2) #check location close_location_2(sat_data,climate2) #location sat2=climate elif (satData_2["location"] != satData_3["location"])\ and (satData_2["location"] == climate2["location"]): print('joining....') join_data = join_data_cli_sat(climate2,satData_2) clim_col.insert(join_data) #insert the data into the mongo with handling the exceptions : duplicate sat_col.update(satData_3, satData_3, upsert=True) sat_col.update(satData_2, satData_2, upsert=True) # #check location close_location_2(join_data,satData_3) #location sat3 =climate elif (satData_2["location"] != satData_3["location"])\ and (satData_3["location"] == climate2["location"]): print('joining....') join_data = join_data_cli_sat(climate2,satData_3) clim_col.insert(join_data) #insert the data into the mongo with handling the exceptions : duplicate sat_col.update(satData_3, satData_3, upsert=True) sat_col.update(satData_2, satData_2, upsert=True) # #check location close_location_2(join_data,satData_2) #if nothing to merge else: print('no join') #check location close_location(climate2,satData_2) close_location(climate2,satData_3) close_location(satData_2,satData_3) clim_col.insert(climate2) #insert the data into the mongo with handling the exceptions sat_col.update(satData_3, satData_3, upsert=True) sat_col.update(satData_2, satData_2, upsert=True) except Exception as ex: print("Exception Occured. 
Message: {0}".format(str(ex))) client.close() ################################################ INITIATE THE STREAM ################################################ n_secs = 10 # set batch to 10 seconds topic = 'TaskC' conf = SparkConf().setAppName("KafkaStreamProcessor").setMaster("local[2]") #set 2 processors sc = SparkContext.getOrCreate() if sc is None: sc = SparkContext(conf=conf) sc.setLogLevel("WARN") ssc = StreamingContext(sc, n_secs) kafkaStream = KafkaUtils.createDirectStream(ssc, [topic], { 'bootstrap.servers':'localhost:9092', 'group.id':'taskC-group', 'fetch.message.max.bytes':'15728640', 'auto.offset.reset':'largest'}) # Group ID is completely arbitrary lines= kafkaStream.foreachRDD(lambda rdd: rdd.foreachPartition(sendDataToDB)) # this line print to check the data IDs has gone through the app for a specific time a = kafkaStream.map(lambda x:x[0]) a.pprint() ssc.start() # ssc.awaitTermination() # ssc.start() time.sleep(3000) # Run stream for 20 mins just to get the data for visualisation # # ssc.awaitTermination() ssc.stop(stopSparkContext=True,stopGraceFully=True)
------------------------------------------- Time: 2019-05-24 17:45:20 ------------------------------------------- sender_2 sender_3 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:45:30 ------------------------------------------- sender_3 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:45:40 ------------------------------------------- sender_2 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:45:50 ------------------------------------------- sender_1 sender_2 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:00 ------------------------------------------- sender_3 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:10 ------------------------------------------- sender_1 sender_2 sender_3 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:20 ------------------------------------------- sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:30 ------------------------------------------- sender_2 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:40 ------------------------------------------- sender_1 sender_3 sender_1 ------------------------------------------- Time: 2019-05-24 17:46:50 ------------------------------------------- sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:47:00 ------------------------------------------- sender_2 sender_3 sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:47:10 ------------------------------------------- sender_2 sender_1 sender_3 sender_1 ------------------------------------------- Time: 2019-05-24 17:47:20 ------------------------------------------- sender_1 ------------------------------------------- Time: 2019-05-24 17:47:30 ------------------------------------------- sender_1 sender_3 sender_1 ------------------------------------------- Time: 2019-05-24 17:47:40 ------------------------------------------- sender_1 sender_2 sender_1 sender_3 ------------------------------------------- Time: 2019-05-24 17:47:50 ------------------------------------------- sender_1 sender_1 sender_3 ------------------------------------------- Time: 2019-05-24 17:48:00 ------------------------------------------- sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:48:10 ------------------------------------------- sender_1 sender_2 sender_1 ------------------------------------------- Time: 2019-05-24 17:48:20 ------------------------------------------- sender_1 sender_3 sender_1 sender_2 ------------------------------------------- Time: 2019-05-24 17:48:30 ------------------------------------------- sender_1 sender_1 ------------------------------------------- Time: 2019-05-24 17:48:40 ------------------------------------------- sender_1 sender_1 sender_3
MIT
Assignment_TaskC_Streaming_Application.ipynb
tonbao30/Parallel-dataprocessing-simulation
Text Summarization Sequence to Sequence Modelling Attention Mechanism Import Libraries
#import all the required libraries import numpy as np import pandas as pd import pickle from statistics import mode import nltk from nltk import word_tokenize from nltk.stem import LancasterStemmer nltk.download('wordnet') nltk.download('stopwords') nltk.download('punkt') from nltk.corpus import stopwords from tensorflow.keras.models import Model from tensorflow.keras import models from tensorflow.keras import backend as K from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.utils import plot_model from tensorflow.keras.layers import Input,LSTM,Embedding,Dense,Concatenate,Attention from sklearn.model_selection import train_test_split from bs4 import BeautifulSoup import warnings pd.set_option("display.max_colwidth", 200) warnings.filterwarnings("ignore") from tensorflow.keras.callbacks import EarlyStopping
[nltk_data] Downloading package wordnet to /usr/share/nltk_data... [nltk_data] Package wordnet is already up-to-date! [nltk_data] Downloading package stopwords to /usr/share/nltk_data... [nltk_data] Package stopwords is already up-to-date! [nltk_data] Downloading package punkt to /usr/share/nltk_data... [nltk_data] Package punkt is already up-to-date!
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Parse the Data We’ll take a sample of 10,000 reviews (nrows=10000) to reduce the training time of our model.
#read the dataset file for text Summarizer df=pd.read_csv("../input/amazon-fine-food-reviews/Reviews.csv",nrows=10000) # df = pd.read_csv("../input/amazon-fine-food-reviews/Reviews.csv") #drop the duplicate and na values from the records df.drop_duplicates(subset=['Text'],inplace=True) df.dropna(axis=0,inplace=True) #dropping na input_data = df.loc[:,'Text'] target_data = df.loc[:,'Summary'] target_data.replace('', np.nan, inplace=True) df.info() df['Summary'][:10] df['Text'][:10]
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Preprocessing Performing basic preprocessing steps is very important before we get to the model building part. Using messy and uncleaned text data is a potentially disastrous move. So in this step, we will drop all the unwanted symbols, characters, etc. from the text that do not affect the objective of our problem. Here is the dictionary that we will use for expanding the contractions:
contraction_mapping = {"ain't": "is not", "aren't": "are not","can't": "cannot", "'cause": "because", "could've": "could have", "couldn't": "could not", "didn't": "did not", "doesn't": "does not", "don't": "do not", "hadn't": "had not", "hasn't": "has not", "haven't": "have not", "he'd": "he would","he'll": "he will", "he's": "he is", "how'd": "how did", "how'd'y": "how do you", "how'll": "how will", "how's": "how is", "I'd": "I would", "I'd've": "I would have", "I'll": "I will", "I'll've": "I will have","I'm": "I am", "I've": "I have", "i'd": "i would", "i'd've": "i would have", "i'll": "i will", "i'll've": "i will have","i'm": "i am", "i've": "i have", "isn't": "is not", "it'd": "it would", "it'd've": "it would have", "it'll": "it will", "it'll've": "it will have","it's": "it is", "let's": "let us", "ma'am": "madam", "mayn't": "may not", "might've": "might have","mightn't": "might not","mightn't've": "might not have", "must've": "must have", "mustn't": "must not", "mustn't've": "must not have", "needn't": "need not", "needn't've": "need not have","o'clock": "of the clock", "oughtn't": "ought not", "oughtn't've": "ought not have", "shan't": "shall not", "sha'n't": "shall not", "shan't've": "shall not have", "she'd": "she would", "she'd've": "she would have", "she'll": "she will", "she'll've": "she will have", "she's": "she is", "should've": "should have", "shouldn't": "should not", "shouldn't've": "should not have", "so've": "so have","so's": "so as", "this's": "this is","that'd": "that would", "that'd've": "that would have", "that's": "that is", "there'd": "there would", "there'd've": "there would have", "there's": "there is", "here's": "here is","they'd": "they would", "they'd've": "they would have", "they'll": "they will", "they'll've": "they will have", "they're": "they are", "they've": "they have", "to've": "to have", "wasn't": "was not", "we'd": "we would", "we'd've": "we would have", "we'll": "we will", "we'll've": "we will have", "we're": "we are", "we've": "we have", "weren't": "were not", "what'll": "what will", "what'll've": "what will have", "what're": "what are", "what's": "what is", "what've": "what have", "when's": "when is", "when've": "when have", "where'd": "where did", "where's": "where is", "where've": "where have", "who'll": "who will", "who'll've": "who will have", "who's": "who is", "who've": "who have", "why's": "why is", "why've": "why have", "will've": "will have", "won't": "will not", "won't've": "will not have", "would've": "would have", "wouldn't": "would not", "wouldn't've": "would not have", "y'all": "you all", "y'all'd": "you all would","y'all'd've": "you all would have","y'all're": "you all are","y'all've": "you all have", "you'd": "you would", "you'd've": "you would have", "you'll": "you will", "you'll've": "you will have", "you're": "you are", "you've": "you have"}
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
We can expand contractions in two ways: use the dictionary defined above, or keep a contractions file (e.g. a pickle) as a dataset and load it, as in the commented-out line further below. A small illustration of the dictionary method is shown next.
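As a rough illustration of the dictionary method (the sentence is made up purely for demonstration), contraction expansion is just a word-by-word lookup:

```python
sentence = "they're great but I can't eat them because they aren't fresh"
expanded = ' '.join(contraction_mapping.get(w, w) for w in sentence.split())
print(expanded)
# they are great but I cannot eat them because they are not fresh
```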
input_texts=[]   # cleaned Text column
target_texts=[]  # cleaned Summary column
input_words=[]
target_words=[]

# contractions=pickle.load(open("../input/contraction/contractions.pkl","rb"))['contractions']
contractions = contraction_mapping

#initialize stop words and LancasterStemmer
stop_words=set(stopwords.words('english'))
stemm=LancasterStemmer()
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Data Cleaning
def clean(texts,src):
    texts = BeautifulSoup(texts, "lxml").text #remove the html tags
    words=word_tokenize(texts.lower()) #tokenize the text into words
    #keep only purely alphabetic words that are at least 3 characters long
    words= list(filter(lambda w:(w.isalpha() and len(w)>=3),words))
    #expand shortened words using the contraction mapping
    words= [contractions[w] if w in contractions else w for w in words ]
    #stem the input words to their root form and filter stop words
    if src=="inputs":
        words= [stemm.stem(w) for w in words if w not in stop_words]
    else:
        words= [w for w in words if w not in stop_words]
    return words

#pass the input records and target records
for in_txt,tr_txt in zip(input_data,target_data):
    in_words= clean(in_txt,"inputs")
    input_texts+= [' '.join(in_words)]
    input_words+= in_words
    #add 'sos' at the start and 'eos' at the end of the target text
    tr_words= clean("sos "+tr_txt+" eos","target")
    target_texts+= [' '.join(tr_words)]
    target_words+= tr_words

#store only the unique words from the input and target word lists
input_words = sorted(list(set(input_words)))
target_words = sorted(list(set(target_words)))
num_in_words = len(input_words) #total number of input words
num_tr_words = len(target_words) #total number of target words

#take the most frequent (mode) length of the input and target texts as the maximum lengths
max_in_len = mode([len(i) for i in input_texts])
max_tr_len = mode([len(i) for i in target_texts])

print("number of input words : ",num_in_words)
print("number of target words : ",num_tr_words)
print("maximum input length : ",max_in_len)
print("maximum target length : ",max_tr_len)
number of input words : 10344 number of target words : 4169 maximum input length : 73 maximum target length : 17
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
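As an optional sanity check, we can run the cleaning function on a single made-up, review-style string (the exact tokens returned depend on the Lancaster stemming rules and the NLTK stop-word list):

```python
sample = "I can't believe these cookies weren't fresh!<br />Totally disappointed."
print(clean(sample, "inputs"))
# a short list of lower-cased, stemmed tokens with HTML, stop words and
# short/non-alphabetic tokens removed
```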
Split the Data
#split the input and target texts into an 80:20 ratio, i.e. a testing size of 20%
x_train,x_test,y_train,y_test=train_test_split(input_texts,target_texts,test_size=0.2,random_state=0)

#train the tokenizers with all the words
in_tokenizer = Tokenizer()
in_tokenizer.fit_on_texts(x_train)
tr_tokenizer = Tokenizer()
tr_tokenizer.fit_on_texts(y_train)

#convert text into sequences of integers,
#where each integer is the index of that word
x_train= in_tokenizer.texts_to_sequences(x_train)
y_train= tr_tokenizer.texts_to_sequences(y_train)

#pad with 0's if the length is less than the maximum length
en_in_data= pad_sequences(x_train, maxlen=max_in_len, padding='post')
dec_data= pad_sequences(y_train, maxlen=max_tr_len, padding='post')

#decoder input data will not include the last word, i.e. 'eos'
dec_in_data = dec_data[:,:-1]
#decoder target data is one time step ahead, so it does not include the first word, i.e. 'sos'
dec_tr_data = dec_data.reshape(len(dec_data),max_tr_len,1)[:,1:]
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
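A quick, optional peek at what the tokenization and padding produced; the actual word indices depend on the fitted vocabulary, so the printed numbers will vary, but the shapes should match the comments:

```python
print(x_train[0][:10])    # first ten word indices of one (already sequenced) training review
print(en_in_data.shape)   # (n_train, max_in_len)
print(dec_in_data.shape)  # (n_train, max_tr_len - 1): 'eos' dropped from the decoder inputs
print(dec_tr_data.shape)  # (n_train, max_tr_len - 1, 1): 'sos' dropped from the decoder targets
```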
Model Building
K.clear_session()
latent_dim = 500

#encoder input layer, shaped to the maximum input length
en_inputs = Input(shape=(max_in_len,))
en_embedding = Embedding(num_in_words+1, latent_dim)(en_inputs)

#create 3 stacked LSTM layers with the hidden (latent) dimension for the encoder
#LSTM 1
en_lstm1= LSTM(latent_dim, return_state=True, return_sequences=True)
en_outputs1, state_h1, state_c1= en_lstm1(en_embedding)

#LSTM 2
en_lstm2= LSTM(latent_dim, return_state=True, return_sequences=True)
en_outputs2, state_h2, state_c2= en_lstm2(en_outputs1)

#LSTM 3
en_lstm3= LSTM(latent_dim,return_sequences=True,return_state=True)
en_outputs3 , state_h3 , state_c3= en_lstm3(en_outputs2)

#final encoder states, used to initialise the decoder
en_states= [state_h3, state_c3]
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
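For orientation, a rough trace of the encoder's tensor shapes (assuming the TensorFlow 2.x Keras API, with the batch dimension shown as None and latent_dim = 500):

```python
print(en_embedding.shape)  # (None, max_in_len, 500)
print(en_outputs3.shape)   # (None, max_in_len, 500): return_sequences=True keeps every time step for attention
print(state_h3.shape)      # (None, 500): final hidden state used to initialise the decoder
```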
Decoder
# Decoder
dec_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_tr_words+1, latent_dim)
dec_embedding = dec_emb_layer(dec_inputs)

#initialize the decoder's LSTM layer with the final states of the encoder
dec_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
dec_outputs, *_ = dec_lstm(dec_embedding,initial_state=en_states)
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
Attention Layer
#Attention layer
attention =Attention()
attn_out = attention([dec_outputs,en_outputs3])

#Concatenate the attention output with the decoder outputs
merge=Concatenate(axis=-1, name='concat_layer1')([dec_outputs,attn_out])

#Dense layer (output layer) over the target vocabulary
dec_dense = Dense(num_tr_words+1, activation='softmax')
dec_outputs = dec_dense(merge)
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
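By default, Keras's Attention layer computes dot-product (Luong-style) attention between the decoder outputs (query) and the encoder outputs (value). A minimal NumPy sketch of that computation for a single example, with toy dimensions rather than the model's, looks like this:

```python
import numpy as np

def dot_attention(dec_seq, enc_seq):
    # dec_seq: (dec_steps, dim), enc_seq: (enc_steps, dim)
    scores = dec_seq @ enc_seq.T                       # (dec_steps, enc_steps)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over encoder time steps
    return weights @ enc_seq                           # context vectors, (dec_steps, dim)

dec = np.random.rand(4, 8)    # 4 decoder steps, toy dimension 8
enc = np.random.rand(10, 8)   # 10 encoder steps
print(dot_attention(dec, enc).shape)  # (4, 8)
```

Each decoder step thus receives a context vector that is a weighted mix of the encoder outputs, and concatenating it with the decoder output (as in the cell above) lets the dense layer see both.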
Train the Model
#Model class and model summary for the text summarizer
model = Model([en_inputs, dec_inputs], dec_outputs)
model.summary()
plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)

model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

history = model.fit(
    [en_in_data, dec_in_data],
    dec_tr_data,
    batch_size=512,
    epochs=10,
    validation_split=0.1,
)

# save model
model.save('Text_Summarizer.h5')
print('Model Saved!')

# plot training vs validation loss
from matplotlib import pyplot
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()

max_text_len=30
max_summary_len=8
_____no_output_____
MIT
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
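To reuse the trained network later we can reload the saved file; a minimal sketch, assuming the TensorFlow 2.x Keras API (Attention, LSTM, etc. are built-in layers, so no custom_objects argument should be needed):

```python
from tensorflow.keras.models import load_model

reloaded = load_model('Text_Summarizer.h5')
reloaded.summary()
```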