markdown | code | path | repo_name | license |
---|---|---|---|---|
Create a dense vector for each word in the pair. The output of Embedding has shape (batch_size, sequence_length, output_dim) which in our case is (batch_size, 1, DENSEVEC_DIM). We'll use Flatten to get rid of that pesky middle dimension (1), so going into the dot product we'll have shape (batch_size, DENSEVEC_DIM). | word1 = Input(shape=(1,), dtype='int64', name='word1')
word2 = Input(shape=(1,), dtype='int64', name='word2')
shared_embedding = Embedding(
input_dim=VOCAB_SIZE+1,
output_dim=DENSEVEC_DIM,
input_length=1,
embeddings_constraint = unit_norm(),
name='shared_embedding')
embedded_w1 = shared_embedding(word1)
embedded_w2 = shared_embedding(word2)
w1 = Flatten()(embedded_w1)
w2 = Flatten()(embedded_w2)
dotted = Dot(axes=1, name='dot_product')([w1, w2])
prediction = Dense(1, activation='sigmoid', name='output_layer')(dotted)
sg_model = Model(inputs=[word1, word2], outputs=prediction)
sg_model.compile(optimizer='adam', loss='mean_squared_error') | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
At this point you can check out how the data flows through your compiled model. | sg_model.layers
def print_layer(model, num):
    print(model.layers[num])
    print(model.layers[num].input_shape)
    print(model.layers[num].output_shape)
print_layer(sg_model,3) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Let's try training it with our toy data set! | import numpy as np
pairs = np.array(sg[0])
targets = np.array(sg[1])
targets
pairs
w1_list = np.reshape(pairs[:, 0], (len(pairs), 1))
w1_list
w2_list = np.reshape(pairs[:, 1], (len(pairs), 1))
w2_list
w2_list.shape
w2_list.dtype
sg_model.fit(x=[w1_list, w2_list], y=targets, epochs=10)
sg_model.layers[2].weights | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Continuous Bag of Words (CBOW) model
CBOW means we take all the words in the window and use them to predict the target word. Note that with CBOW we are trying to predict an actual word (or a probability distribution over words), whereas in skip-gram we are trying to predict a similarity score.
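As a rough illustration (not part of the original notebook), a minimal CBOW architecture in the same Keras style could average the embeddings of the context words and predict a softmax distribution over the vocabulary. VOCAB_SIZE and DENSEVEC_DIM are reused from the skip-gram model above; WINDOW_SIZE is an assumed context length.

```python
from keras.layers import Input, Embedding, Lambda, Dense
from keras.models import Model
import keras.backend as K

WINDOW_SIZE = 4  # assumed number of context words

context = Input(shape=(WINDOW_SIZE,), dtype='int64', name='context_words')
embedded = Embedding(input_dim=VOCAB_SIZE + 1,
                     output_dim=DENSEVEC_DIM,
                     input_length=WINDOW_SIZE)(context)
# average the context word vectors into a single dense vector
averaged = Lambda(lambda x: K.mean(x, axis=1))(embedded)
# predict a probability distribution over the whole vocabulary
prediction = Dense(VOCAB_SIZE + 1, activation='softmax')(averaged)

cbow_model = Model(inputs=context, outputs=prediction)
cbow_model.compile(optimizer='adam', loss='categorical_crossentropy')
```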
FastText Model
FastText creates dense document vectors using the words in the document, enhanced with n-grams. These are embedded, averaged, and fed through a hidden dense layer with a sigmoid activation. The prediction task is a binary classification of the documents. As usual, after training we can extract the dense vectors from the model.
FastText Model Data Prep | MAX_FEATURES = 20000 # number of unique words in the dataset
MAXLEN = 400 # max word (feature) length of a review
EMBEDDING_DIMS = 50
NGRAM_RANGE = 2 | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Some data prep functions lifted from the example | def create_ngram_set(input_list, ngram_value=2):
"""
Extract a set of n-grams from a list of integers.
"""
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
create_ngram_set([1, 2, 3, 4, 5], ngram_value=2)
create_ngram_set([1, 2, 3, 4, 5], ngram_value=3)
def add_ngram(sequences, token_indice, ngram_range=2):
"""
Augment the input list of list (sequences) by appending n-grams values.
"""
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
sequences = [[1,2,3,4,5, 6], [6,7,8]]
token_indice = {(1,2): 20000, (4,5): 20001, (6,7,8): 20002}
add_ngram(sequences, token_indice, ngram_range=2)
add_ngram(sequences, token_indice, ngram_range=3) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
load canned training data | from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=MAX_FEATURES)
x_train[0:2]
y_train[0:2] | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Add n-gram features | ngram_set = set()
for input_list in x_train:
for i in range(2, NGRAM_RANGE + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
len(ngram_set) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Assign IDs to the new features | ngram_set.pop()  # take a peek at one n-gram (note: pop() also removes it from the set)
start_index = MAX_FEATURES + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice} | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Update MAX_FEATURES | import numpy as np
MAX_FEATURES = np.max(list(indice_token.keys())) + 1
MAX_FEATURES | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Add n-grams to the input data | x_train = add_ngram(x_train, token_indice, NGRAM_RANGE)
x_test = add_ngram(x_test, token_indice, NGRAM_RANGE) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Make all input sequences the same length by padding with zeros | from keras.preprocessing import sequence
sequence.pad_sequences([[1,2,3,4,5], [6,7,8]], maxlen=10)
x_train = sequence.pad_sequences(x_train, maxlen=MAXLEN)
x_test = sequence.pad_sequences(x_test, maxlen=MAXLEN)
x_train.shape
x_test.shape | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
FastText Model | Image('diagrams/fasttext.png')
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from keras.layers.pooling import GlobalAveragePooling1D
from keras.layers import Dense
ft_model = Sequential()
ft_model.add(Embedding(
input_dim = MAX_FEATURES,
output_dim = EMBEDDING_DIMS,
input_length= MAXLEN))
ft_model.add(GlobalAveragePooling1D())
ft_model.add(Dense(1, activation='sigmoid'))
ft_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
ft_model.layers
print_layer(ft_model, 0)
print_layer(ft_model, 1)
print_layer(ft_model, 2)
ft_model.fit(x_train, y_train, batch_size=100, epochs=3, validation_data=(x_test, y_test)) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
fastText classifier vs. convolutional neural network (CNN) vs. long short-term memory (LSTM) classifier: Fight!
A CNN takes the dot product of various "filters" (each a new vector) with each word window down the sentence. For each convolutional layer in your model, you can choose the size of the filter (for example, 3 word vectors long) and the number of filters in the layer (for example, ten 3-word filters).
Add a bias to each dot product of the filter and word window, and run it through an activation function. This produces a number.
Running a single filter down a sentence produces a series of numbers. Generally the maximum value is taken to represent the alignment of the sentence with that particular filter. All of this is just another way of extracting features from a sentence. In fastText, we extracted features in a human-readable way (n-grams) and tacked them onto the input data. With a CNN we take a different approach, letting the algorithm figure out what makes good features for the dataset.
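To make this concrete, here is a small numpy sketch (not from the original notebook) of sliding a single 3-word filter down a toy sentence of word vectors and max-pooling the responses:

```python
import numpy as np

np.random.seed(0)
sentence = np.random.random((7, 5))  # 7 words, each a 5-dim word vector
filt = np.random.random((3, 5))      # one filter spanning 3 words
bias = 0.1

responses = []
for i in range(len(sentence) - len(filt) + 1):
    window = sentence[i:i + 3]            # current 3-word window
    z = np.sum(window * filt) + bias      # dot product plus bias
    responses.append(max(0.0, z))         # ReLU activation
print(max(responses))  # the max-pooled feature for this filter
```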
insert filter operating on sentence image here | Image('diagrams/text-cnn-classifier.png') | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Diagram from Convolutional Neural Networks for Sentence Classification, Yoon Kim (2014)
A CNN sentence classifier | embedding_dim = 50 # we'll get a vector representation of words as a by-product
filter_sizes = (2, 3, 4) # we'll make one convolutional layer for each filter we specify here
num_filters = 10 # each layer will contain this many filters
dropout_prob = (0.2, 0.2)
hidden_dims = 50
# Preprocessing parameters
sequence_length = 400
max_words = 5000 | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Canned input data | from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_words) # limits vocab to num_words
?imdb.load_data
from keras.preprocessing import sequence
x_train = sequence.pad_sequences(x_train, maxlen=sequence_length, padding="post", truncating="post")
x_test = sequence.pad_sequences(x_test, maxlen=sequence_length, padding="post", truncating="post")
x_train[0]
vocabulary = imdb.get_word_index() # word to integer map
vocabulary['good']
len(vocabulary) | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Model build | from keras.models import Model
from keras.layers import Input
from keras.layers import Embedding
from keras.layers import Dropout
from keras.layers import Conv1D
from keras.layers import MaxPooling1D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers.merge import Concatenate
# Input, embedding, and dropout layers
input_shape = (sequence_length,)
model_input = Input(shape=input_shape)
z = Embedding(
input_dim=len(vocabulary) + 1,
output_dim=embedding_dim,
input_length=sequence_length,
name="embedding")(model_input)
z = Dropout(dropout_prob[0])(z)
# Convolutional block
# parallel set of n convolutions; output of all n are
# concatenated into one vector
conv_blocks = []
for sz in filter_sizes:
conv = Conv1D(filters=num_filters, kernel_size=sz, activation="relu" )(z)
conv = MaxPooling1D(pool_size=2)(conv)
conv = Flatten()(conv)
conv_blocks.append(conv)
z = Concatenate()(conv_blocks) if len(conv_blocks) > 1 else conv_blocks[0]
z = Dropout(dropout_prob[1])(z)
# Hidden dense layer and output layer
z = Dense(hidden_dims, activation="relu")(z)
model_output = Dense(1, activation="sigmoid")(z)
cnn_model = Model(model_input, model_output)
cnn_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
cnn_model.layers
print_layer(cnn_model, 12)
cnn_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
cnn_model.layers[1].weights
cnn_model.layers[1].get_weights()
cnn_model.layers[3].weights | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
An LSTM sentence classifier | Image('diagrams/LSTM.png')
from keras.models import Sequential
from keras.layers import Embedding
from keras.layers.core import SpatialDropout1D
from keras.layers.core import Dropout
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense
hidden_dims = 50
embedding_dim = 50
lstm_model = Sequential()
lstm_model.add(Embedding(len(vocabulary) + 1, embedding_dim, input_length=sequence_length, name="embedding"))
lstm_model.add(SpatialDropout1D(0.2)) # SpatialDropout1D expects a dropout rate, not a Dropout layer
lstm_model.add(LSTM(hidden_dims, dropout=0.2, recurrent_dropout=0.2)) # first arg, like Dense, is dim of output
lstm_model.add(Dense(1, activation='sigmoid'))
lstm_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
lstm_model.fit(x_train, y_train, batch_size=64, epochs=3, validation_data=(x_test, y_test))
lstm_model.layers
lstm_model.layers[2].input_shape
lstm_model.layers[2].output_shape | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
Appendix: Our own data download and preparation
We'll use the Large Movie Review Dataset v1.0 for our corpus. While Keras has its own data samples you can import for modeling (including this one), I think it's very important to get and process your own data. Otherwise, the results appear to materialize out of thin air and it's more difficult to get on with your own research. | %matplotlib inline
import pandas as pd
import glob
datapath = "/Users/pfigliozzi/aclImdb/train/unsup"
files = glob.glob(datapath+"/*.txt")[:1000] #first 1000 (there are 50k)
df = pd.concat([pd.read_table(filename, header=None, names=['raw']) for filename in files], ignore_index=True)
df.raw.map(lambda x: len(x)).plot.hist()
50000. * 2000. / 10**6  # rough corpus size estimate in MB: 50k reviews at ~2000 characters each | .ipynb_checkpoints/TextModels-checkpoint.ipynb | peterfig/keras-deep-learning-course | mit |
The Caryocar package also provides some helper functions and classes for performing data cleaning. | from caryocar.cleaning import NamesAtomizer, namesFromString
from caryocar.cleaning import normalize, read_NamesMap_fromJson
from caryocar.cleaning import getNamesIndexes | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Step 1. Reading the dataset
The first step is to read the species occurrence dataset.
To do this, we'll extend the capabilities of the Python language using a very useful library for data analysis: Pandas.
With this library, we can load, transform, and analyze our dataset in the programming environment. | import pandas as pd | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
With Pandas' read_csv function, we'll load the data from the CSV file into a DataFrame, which is essentially a table.
This function expects the name of the CSV file containing the data, as well as a list with the names of the columns we want to load.
I'll specify the path to the file in the dsetPath variable and the list of columns of interest in cols.
The dataframe will be stored in the occs_df variable.
To keep this article as simple as possible, I'll use only the essential fields:
* recordedBy: Stores the names of the collectors responsible for the record. If there is more than one collector, the names are separated by semicolons;
* species: Stores the scientific name, at species level, determined for the specimen in question. | dsetPath = '/home/pedro/datasets/ub_herbarium/occurrence.csv'
cols = ['recordedBy','species']
occs_df = pd.read_csv(dsetPath, sep='\t', usecols=cols) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Let's take a look at what the dataframe looks like. We'll ask for just the first 10 rows. | occs_df.head(10) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Step 2: Data cleaning
Before building the model, we need to clean the data to make sure it is in the proper format for building the models.
The first step is to filter out records with null elements (NaN) in any of the dataframe fields. A null element means missing information, which won't help much in building our models.
Let's look at the number of nulls in each field: | occs_df.isnull().sum() | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
The collector information is missing in only 9 of the records. We'll simply drop them. Another point is that, to simplify our modeling, I'll only use records identified to species level. This means we'll have to discard 32711 records in which the species identity information is missing. | occs_df.dropna(how='any', inplace=True) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Now we no longer have nulls in any of the columns, and we can proceed: | occs_df.isnull().sum() | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Atomization of collector names
The collectors field (recordedBy) is fundamental to our modeling, but unfortunately it tends to be a bit problematic.
The first problem is that the collector names are not atomic. This means that multiple names may be encoded in a single value (in this case, the list of names is encoded as a single string, with the names separated by semicolons).
According to the recommendations of Biodiversity Information Standards (TDWG), collector names should generally be recorded using the following rule: surname with the first letter capitalized, followed by a comma, a space, and the initials of the given names in capital letters, separated by periods (e.g. Proença, C.E.B.).
In addition, TDWG recommends that the separator used to delimit collector names should be the pipe character ( | ).
However, the character used in the UB dataset is the semicolon.
This won't be a big problem in our case, since in this dataset the semicolon is used consistently, in almost all records.
To proceed with the atomization of the names we'll use a helper class called NamesAtomizer. We'll create the atomizer object and assign it to the variable na, passing in the namesFromString function, which specifies the rules used to split the names. | na = NamesAtomizer(atomizeOp=namesFromString) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
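For intuition, the core of the atomization step can be as simple as splitting on the delimiter. This is only an illustrative sketch; the actual namesFromString function also handles formatting quirks:

```python
def split_collectors(recorded_by, sep=';'):
    """Illustrative only: split a recordedBy string into a list of names."""
    return [name.strip() for name in recorded_by.split(sep) if name.strip()]

print(split_collectors('Proença, C.E.B.; Silva, J.'))
# ['Proença, C.E.B.', 'Silva, J.']
```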
The names atomizer resolves the vast majority of cases. But there are a few records with errors in the delimitation of the names. In these cases the correction must be made by replacing each record with its correct form.
For the UB dataset, these replacements are specified in the file stored in the names_replaces_file variable below: | names_replaces_file = '/home/pedro/data/ub_collectors_replaces.json' | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Just out of curiosity, let's look at the contents of this file: | ! cat {names_replaces_file} | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Proceeding with the replacement: | na.read_replaces(names_replaces_file) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Now, with the help of the names atomizer, let's add a new column to the dataframe containing the atomized collector names. It will be called recordedBy_atomized: | occs_df['recordedBy_atomized'] = na.atomize(occs_df['recordedBy']) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Normalization and name mapping
A second problem is that collector names may have been written in several different ways, whether due to errors or to omission of parts of the name.
For example, the name 'Proença, C.E.B.' may have several variants, including 'Proenca, C.E.B,', 'Proença, C.E.', 'Proença, C.'.
We need a way to link all of these variants to a primary name.
The solution to this problem, so far, is to store a map linking each variant to a normal form of the name. The normalization process includes transforming the name into a simplified form. This means we'll use only lowercase characters, omit accents and punctuation, and remove non-alphanumeric characters.
In the example above, all the names would be mapped to 'proenca,ceb'.
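As an illustration, a minimal normalization function might look like the sketch below. This is hypothetical; the actual normalize function is imported from caryocar.cleaning.

```python
import re
import unicodedata

def normalize_name(name):
    """Hypothetical sketch: lowercase, strip accents, and
    drop everything except letters, digits, and commas."""
    name = name.lower()
    # decompose accented characters and drop the combining marks
    name = unicodedata.normalize('NFKD', name).encode('ascii', 'ignore').decode('ascii')
    return re.sub(r'[^a-z0-9,]', '', name)

print(normalize_name('Proença, C.E.B.'))  # -> 'proenca,ceb'
```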
For the UB dataset, I already have a names map ready, stored in the following file: | namesMap_file = '/home/pedro/data/ub_namesmap.json' | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
This file is large, but let's look at the first 20 lines to get an idea of it: | ! head {namesMap_file} -n 20 | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Note that some collector names that were not null but indicate missing information (for example '.', '?') are mapped to an empty string. Later we'll filter these names out.
Let's now read the names map from the file and store it in the nm variable. | nm = read_NamesMap_fromJson(namesMap_file, normalizationFunc=normalize) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
In case there are collector names that are not in the file, let's make sure they get inserted: | collectors_names = list(set( n for n,st,num in na.getCachedNames() ))
nm.addNames(collectors_names) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Thus, this map lets us look up, for each variant of a name, its normal form: | nm.getMap()['Proença, CEB']
nm.getMap()['Proença, C'] | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
The figure below illustrates the steps involved in preprocessing the collectors field, as described.
[Figure: preprocessing steps for the collectors field]
The names index
Finally, let's build a names index, just to keep a reference of which rows of the dataframe each collector appears in. For this we'll use the getNamesIndexes function. We need to provide the name of the dataframe, the name of the column storing the atomized names, and the names map. I emphasize, however, that this step is not necessary for building the models (although it is useful for some analyses). | ni = getNamesIndexes(occs_df,'recordedBy_atomized', namesMap=nm.getMap()) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Step 3: Building the models
We've reached the stage that really matters. We now have a dataframe with minimally clean, structured data, so we can build the models!
Species-Collector Network (SCN)
Species-collector networks model relations of interest, necessarily involving a collector and a species. The semantics of these relations can be described as collector -[records]-> species or, conversely, species -[is recorded by]-> collector. The figure below illustrates this structure (a).
Since the model involves two classes of entities (collectors and species), there are two additional perspectives that can be explored: we can investigate how strongly two collectors are associated with each other in terms of their shared interests (b), as well as how strongly two species are associated with each other in terms of the set of collectors who record them (c).
We refer to perspectives (b) and (c) as projections of network (a). These projections are obtained simply by linking entities of the same class based on the number of entities of the opposite class that they share in structure (a).
[Figure: structure of a species-collector network (a) and its projections onto collectors (b) and species (c)]
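For intuition about the projections, here is a minimal sketch using networkx (not part of the original code), projecting a toy bipartite species-collector graph onto each class:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy bipartite graph: edges link collectors to the species they recorded
B = nx.Graph()
B.add_nodes_from(['col_A', 'col_B'], kind='collector')
B.add_nodes_from(['sp_1', 'sp_2', 'sp_3'], kind='species')
B.add_edges_from([('col_A', 'sp_1'), ('col_A', 'sp_2'),
                  ('col_B', 'sp_2'), ('col_B', 'sp_3')])

# Projection (b): collectors linked by how many species they share
collectors_proj = bipartite.weighted_projected_graph(B, ['col_A', 'col_B'])
print(collectors_proj.edges(data=True))  # [('col_A', 'col_B', {'weight': 1})]

# Projection (c): species linked by how many collectors they share
species_proj = bipartite.weighted_projected_graph(B, ['sp_1', 'sp_2', 'sp_3'])
print(species_proj.edges(data=True))
```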
Now to the code. We'll build the species-collector network using the SCN class, available in the Caryocar package. For its construction, we must provide:
* A list of species, in this case the dataframe column occs_df['species'];
* A list containing lists of collectors, in this case the dataframe column occs_df['recordedBy_atomized'];
* A names map. | scn = SCN(species=occs_df['species'], collectors=occs_df['recordedBy_atomized'], namesMap=nm) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
After building the model, let's remove improper collector names, such as 'etal', 'ilegivel', and 'incognito'. | cols_to_filter = ['','ignorado','ilegivel','incognito','etal']
scn.remove_nodes_from(cols_to_filter) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Let's now look at a brief summary of this network. This piece of code may be a bit ugly, but what really matters is the information printed below it. | n_cols = len(scn.listCollectorsNodes())
cols_degrees = scn.degree(scn.listCollectorsNodes())
n_spp = len(scn.listSpeciesNodes())
spp_degrees = scn.degree(scn.listSpeciesNodes())
print(
f"""Species-Collector Network (SCN)
===============================
Total number of collectors: {n_cols}
Total number of species: {n_spp}
On average, a collector records {round( sum( k for n,k in cols_degrees)/n_cols)} distinct species
On average, a species is recorded by {round( sum( k for n,k in spp_degrees)/n_spp)} distinct collectors
Total number of edges: {len(scn.edges)}\n""")
print("Top-10 most productive collectors:")
for n,k in sorted(cols_degrees,key=lambda x:x[1],reverse=True)[:10]:
    print(f"  {n} ({k} distinct species)")
print("\nTop-10 most collected species:")
for n,k in sorted(spp_degrees,key=lambda x:x[1],reverse=True)[:10]:
    print(f"  {n} ({k} distinct collectors)") | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
An interesting aspect to note is the degree distribution (the number of connections of a vertex) in this network.
Although on average a collector records 21 different species, the most productive collectors have recorded more than 1000!
Similarly, although on average a species is recorded by 9 distinct collectors, the top 10 species were each recorded by more than 200 collectors.
Although it is beyond the scope of this article, it is easy to show that the degree distribution of this network is far from normal. In fact, it is well approximated by a power law.
This means that while the great majority of collectors record very few distinct species, a few of them (called hubs, or key collectors) record numbers far above the average.
Analogously, while the great majority of species were collected by only one or a few distinct collectors, a few were collected by a large number of distinct collectors.
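As a hedged illustration (not in the original notebook), the degree distribution can be inspected on log-log axes, where a power law shows up roughly as a straight line:

```python
import collections
import matplotlib.pyplot as plt

# degree of each collector node in the species-collector network
degrees = [k for n, k in scn.degree(scn.listCollectorsNodes())]
degree_counts = collections.Counter(degrees)
ks, counts = zip(*sorted(degree_counts.items()))

plt.loglog(ks, counts, 'o')
plt.xlabel('Degree (number of distinct species recorded)')
plt.ylabel('Number of collectors')
plt.show()
```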
Collector Collaboration Network (CWN)
Collector collaboration networks (CWNs), as the name suggests, model the collaboration relations established between collectors as they record species in the field. A link between a pair of collectors is created or strengthened each time they co-author a species record. Thus, the semantics of these relations is described as collector -[collects specimen with]-> collector. The figure below illustrates the structure of these networks. It is important to note that, unlike in SCNs, in CWNs the taxonomic identity of each record is not represented in the structure. Collectors who never collaborated appear as isolated vertices in the network.
[Figure: structure of a collector collaboration network (CWN)]
The Caryocar package also provides the CWN class, which facilitates building collector collaboration networks. For its construction, we must provide:
A list containing lists of collectors (cliques), in this case the dataframe column occs_df['recordedBy_atomized'];
A names map. | cwn = CWN(cliques=occs_df['recordedBy_atomized'],namesMap=nm) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Just as we did with the SCN, let's remove improper collector names. | cols_to_filter = ['','ignorado','ilegivel','incognito','etal']
cwn.remove_nodes_from(cols_to_filter) | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
Let's look at a summary of the network: | n_cols = len(cwn.nodes)
cols_degrees = cwn.degree()
print(
f"""Collector Collaboration Network (CWN)
======================================
Total number of collectors: {n_cols}
Total number of edges: {len(cwn.edges)}
On average, a collector collaborates with {round( sum(k for n,k in cols_degrees)/n_cols )} peers over the course of their career
In total, {len([ n for n,k in cols_degrees if k==0 ])} collectors never collaborated
In total, {len([ n for n,k in cols_degrees if k>3 ])} collectors collaborated with more than 3 colleagues\n""")
print("Top-10 most collaborative collectors:")
for n,k in sorted(cols_degrees,key=lambda x:x[1],reverse=True)[:10]:
    print(f"  {n} ({k} colleagues)")
print("\nTop-10 collectors with no collaborations and the most records:")
for n,k in sorted([ (n,d['count']) for n,d in cwn.nodes(data=True) if cwn.degree(n)==0 ],key=lambda x: x[1], reverse=True)[:10]:
    print(f"  {n} ({cwn.nodes[n]['count']} records, 0 collaborations)") | _notebooks/construindo-redes-sociais-com-dados-de-colecoes-biologicas.ipynb | pedrosiracusa/pedrosiracusa.github.io | mit |
In previous tutorials, we set interactive mode to True, and we obtained the result
of every operation. | ibis.options.interactive = True
countries['name', 'continent', 'population'].limit(3) | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
But now let's see what happens if we leave the interactive option to False (the default),
and we operate in lazy mode. | ibis.options.interactive = False
countries['name', 'continent', 'population'].limit(3) | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
What we find is the graph of the expressions that would return the desired result instead of the result itself.
Let's analyze the expressions in the graph:
We query the countries table (all rows and all columns)
We select the name, continent and population columns
We limit the results to only the first 3 rows
Now consider that the data is in a database, possibly in a different host than the one executing Ibis.
Also consider that the results returned to the user need to be moved to the memory of the host executing Ibis.
When using interactive (or eager) mode, if we perform one operation at a time, we would do:
We would move all the rows and columns from the backend (database, big data system, etc.) into memory
Once in memory, we would discard all the columns but name, continent and population
After that, we would discard all the rows, except the first 3
This is not very efficient. If you consider that the table can have millions of rows, backed by a
big data system like Spark or Impala, this may not even be possible (not enough memory to load all the data).
The solution is to use lazy mode. In lazy mode, instead of obtaining the results after each operation,
we build an expression (a graph) of all the operations that need to be done. After all the operations
are recorded, the graph is sent to the backend which will perform the operation in an efficient way - only
moving to memory the required data.
You can think of this as writing a shopping list and requesting someone to go to the supermarket and buy
everything you need once the list is complete. As opposed as getting someone to bring all the products of
the supermarket to your home and then return everything you don't want.
Let's continue with our example, save the expression in a variable countries_expression, and check its type. | countries_expression = countries['name', 'continent', 'population'].limit(3)
type(countries_expression) | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
The type is an Ibis TableExpr, since the result is a table (in a broad way, you can consider it a dataframe).
Now we have our query instructions (our expression, fetching only 3 columns and 3 rows) in the variable countries_expression.
At this point, nothing has been requested from the database. We have defined what we want to extract, but we didn't
request it from the database yet. We can continue building our expression if we haven't finished yet. Or once we
are done, we can simply request it from the database using the method .execute(). | countries_expression.execute() | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
We can build other types of expressions, for example, one that instead of returning a table,
returns a column. | population_in_millions = (countries['population'] / 1_000_000).name('population_in_millions')
population_in_millions | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
If we check its type, we can see how it is a FloatingColumn expression. | type(population_in_millions) | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
We can combine the previous expression to be a column of a table expression. | countries['name', 'continent', population_in_millions].limit(3) | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
Since we are in lazy mode (not interactive), those expressions don't request any data from the database
unless explicitly requested with .execute().
Logging queries
For SQL backends (and for others when it makes sense), the query sent to the database can be logged.
This can be done by setting the verbose option to True. | ibis.options.verbose = True
countries['name', 'continent', population_in_millions].limit(3).execute() | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
By default, the logging is done to the terminal, but we can process the query with a custom function.
This allows us to save executed queries to a file, save to a database, send them to a web service, etc.
For example, to save queries to a file, we can write a custom function that, given a query, saves it to a
log file. | import os
import datetime
def log_query_to_file(query):
"""
Log queries to `data/tutorial_queries.log`.
    Each line is a query. Line breaks in the query are represented with the string '\n'.
A timestamp of when the query is executed is added.
"""
fname = os.path.join('data', 'tutorial_queries.log')
query_in_a_single_line = query.replace('\n', r'\n')
with open(fname, 'a') as f:
f.write(f'{datetime.datetime.now()} - {query_in_a_single_line}\n') | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
Then we can set the verbose_log option to the custom function, execute one query,
wait one second, and execute another query. | import time
ibis.options.verbose_log = log_query_to_file
countries.execute()
time.sleep(1.)
countries['name', 'continent', population_in_millions].limit(3).execute() | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
This has created a log file in data/tutorial_queries.log where the executed queries have been logged. | !cat data/tutorial_queries.log | docs/source/tutorial/03-Expressions-Lazy-Mode-Logging.ipynb | cloudera/ibis | apache-2.0 |
ZeroPadding2D
[convolutional.ZeroPadding2D.0] padding (1,1) on 3x5x2 input, data_format='channels_last' | data_in_shape = (3, 5, 2)
L = ZeroPadding2D(padding=(1, 1), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(250)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.0'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
[convolutional.ZeroPadding2D.1] padding (1,1) on 3x5x2 input, data_format='channels_first' | data_in_shape = (3, 5, 2)
L = ZeroPadding2D(padding=(1, 1), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(251)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.1'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
[convolutional.ZeroPadding2D.2] padding (3,2) on 2x6x4 input, data_format='channels_last' | data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=(3, 2), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(252)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.2'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
[convolutional.ZeroPadding2D.3] padding (3,2) on 2x6x4 input, data_format='channels_first' | data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=(3, 2), data_format='channels_first')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(253)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.3'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
[convolutional.ZeroPadding2D.4] padding ((1,2),(3,4)) on 2x6x4 input, data_format='channels_last' | data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=((1,2),(3,4)), data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(254)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.4'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
[convolutional.ZeroPadding2D.5] padding 2 on 2x6x4 input, data_format='channels_last' | data_in_shape = (2, 6, 4)
L = ZeroPadding2D(padding=2, data_format='channels_last')
layer_0 = Input(shape=data_in_shape)
layer_1 = L(layer_0)
model = Model(inputs=layer_0, outputs=layer_1)
# set weights to random (use seed for reproducibility)
np.random.seed(255)
data_in = 2 * np.random.random(data_in_shape) - 1
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
print('')
print('in shape:', data_in_shape)
print('in:', data_in_formatted)
print('out shape:', data_out_shape)
print('out:', data_out_formatted)
DATA['convolutional.ZeroPadding2D.5'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
} | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
export for Keras.js tests | import os
filename = '../../../test/data/layers/convolutional/ZeroPadding2D.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA)) | notebooks/layers/convolutional/ZeroPadding2D.ipynb | transcranial/keras-js | mit |
In a Jupyter notebook (the tool in which this tutorial was created), every code block that ends with a variable or an expression prints the contents of that variable. This is used throughout this tutorial. An example follows: | a = 1 # here we create a variable
a # this prints the value without using the print command | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
List
The list is probably the best-known data container in Python. Items in a list may repeat and keep the order given when the list is created. List items can be changed, deleted, and added. Examples follow. | [1, 1, 2] # a list of integers; the item 1 repeats
["abc", 1, 0.5] # a list containing different data types
[] # an empty list
[[1,2], "abc", {1, "0", 3}] # a list containing a nested list, a string, and a set
[1, "a", 2] + [5, 3, 5] # concatenating two lists
[1, 2, 3]*5 # repeating a list | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
Indexing and slicing
In Python, square brackets [] are used for indexing. The symbol : stands for all items in a given range. Indexing starts at 0. Indexing and slicing of a list are shown in the following examples. | a = ["a", "b", "c", "d", "e", "f", "g", "h"] # example list
a[3] # returns the object at index 3 (the fourth object)
a[:2] # returns the first two objects
a[3:] # returns everything from object 3 onward
a[2:5] # everything between the objects at indexes 2 and 5
a[-3] # the third object from the end
a[0:-1:2] # every second object from start to end
a[::2] # a shorter equivalent of the previous example
b = [[1, 2, 3], [4, 5, 6]] # an example of nested lists
b[1] # returns the second list
b[0][0:2] # returns the first two items from the first list | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
Overwriting, appending, inserting, and deleting list items
Shown in the following examples. | a = ["a", "b", "c", "d"]
a[2] = "x" # overwrite the object at index 2
print(a)
a.append("h") # append the object h to the end
print(a)
a.insert(2, "y") # insert the object y at position 2
print(a)
del a[2] # remove the object at position 2
print(a) | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
The if, else, elif conditionals
Conditionals are used to implement logic. Logic operates on the bool type, which takes only the values True or False.
Expressions and their evaluation
The following shows how the truth of several expressions is evaluated. | a = 1
a == 1
not a == 1
a > 1
a >= 1
1 in [1, 2]
not (1 in [1, 2]) == (not 1 > 0) | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
Conditions and their evaluation
The if statement tests whether an expression evaluates to true; if so, the code inside the condition runs. An example follows. | fruit = "apple"
color = "No color"
if fruit == "apple":
color = "green"
color | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
The else clause lets us provide alternative code for when the if condition is not met. An example follows. | fruit = "orange"
if fruit == "apple":
color = "red"
else:
color = "orange"
color | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
The elif clause allows specifying additional conditions for when the if condition is not met. Multiple elif clauses can be placed after a single if. An example follows. | fruit = "apple"
if fruit == "apple":
color = "red"
elif fruit == "orange":
color = "orange"
elif fruit == "pear":
color = "green"
else:
color = "yellow"
color | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
Loops
Iteration is one of the most common operations in programming. The following examples relate to the recurrence
$\forall i \in \{2,\ldots,9\}.\ a_i = a_{i-1} + a_{i-2}$.
The for loop
The for loop is designed to iterate over a predefined iterable object. An example follows. | a = [] # list for the results
a.append(1) # first initial condition
a.append(1) # second initial condition
for i in [2, 3, 4, 5, 6, 7, 8]: # the range to iterate over
    a.append(a[i-1] + a[i-2]) # append the computed items to the list
print(a) | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
An improved version of the previous example follows. | a = [0]*9 # list for the results
a[0:2] = [1, 1] # initial conditions
for i in range(2,9): # the range function
    a[i] = a[i-1] + a[i-2] # perform the computation
print(a) | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
If you need to exit a loop before it finishes, you can use the break statement.
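A minimal sketch (not from the original tutorial): the same Fibonacci computation, stopped with break as soon as a value exceeds 10.

```python
a = [1, 1]
for i in range(2, 9):
    a.append(a[i-1] + a[i-2])
    if a[i] > 10:  # stop the loop early once a value exceeds 10
        break
print(a)  # [1, 1, 2, 3, 5, 8, 13]
```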
The while loop
This loop iterates as long as its condition holds. An example follows. | a = [0]*9
a[0:2] = [1, 1]
i = 2 # set up a helper variable
while i < 9: # iterate while the helper variable satisfies the condition
    a[i] = a[i-1] + a[i-2]
    i += 1 # add 1 to the helper variable
print(a) | podklady/notebooks/skriptovani_v_pythonu.ipynb | matousc89/PPSI | mit |
Get the Data | df = pd.read_csv("../kyphosis.csv")
df.head() | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Exploratory Data Analysis
Lab Task #1: Check a pairplot for this small dataset. | # TODO 1
# TODO -- Your code here. | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
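One possible solution sketch for the pairplot (an assumption on our part; seaborn is not imported elsewhere in this excerpt):

```python
import seaborn as sns

sns.pairplot(df, hue="Kyphosis")
```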
Train Test Split
Let's split up the data into a training set and a test set! | from sklearn.model_selection import train_test_split
X = df.drop("Kyphosis", axis=1)
y = df["Kyphosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30) | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Decision Trees
Lab Task #2: Train a single decision tree. | from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
# TODO 2
# TODO -- Your code here. | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Prediction and Evaluation
Let's evaluate our decision tree. | predictions = dtree.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix
# TODO 3a
# TODO -- Your code here.
# TODO 3b
print(confusion_matrix(y_test, predictions)) | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Tree Visualization
Scikit-learn actually has some built-in visualization capabilities for decision trees. You won't use this often, and it requires you to install the pydot library, but here is an example of what it looks like and the code to execute it: | import pydot
from IPython.display import Image
from six import StringIO
from sklearn.tree import export_graphviz
features = list(df.columns[1:])
features
dot_data = StringIO()
export_graphviz(
dtree, out_file=dot_data, feature_names=features, filled=True, rounded=True
)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
Image(graph[0].create_png()) | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
Random Forests
Lab Task #4: Compare the decision tree model to a random forest. | from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=100)
rfc.fit(X_train, y_train)
rfc_pred = rfc.predict(X_test)
# TODO 4a
# TODO -- Your code here.
# TODO 4b
# TODO -- Your code here. | notebooks/launching_into_ml/labs/supplemental/decision_trees_and_random_Forests_in_Python.ipynb | GoogleCloudPlatform/asl-ml-immersion | apache-2.0 |
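A possible way to fill in the comparison TODOs (a sketch, reusing the metrics imported above; not the official lab solution):

```python
print(confusion_matrix(y_test, rfc_pred))
print(classification_report(y_test, rfc_pred))
```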
<H2> Create a normally distributed random variable</H2> | # create some data
mymean = 28.74
mysigma = 8.33 # standard deviation!
rv_norm = norm(loc = mymean, scale = mysigma)
data = rv_norm.rvs(size = 150)
plt.hist(data, bins=20, facecolor='red', alpha=.3);
plt.ylabel('Number of events');
plt.xlim(0,100); | Optimization/Maximum_likelihood_estimation.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
<H2> Define a model function</H2> | def mynorm(x, params):
mu, sigma = params
# scipy implementation
mynorm = norm(loc = mu, scale = sigma)
return mynorm.pdf(x)
mynorm(0, [0,1]) # 0.39 | Optimization/Maximum_likelihood_estimation.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
<H2> Loglikelihood function to be minimized </H2> | def loglikelihood(params, data):
mu = params['mean'].value
sigma = params['std'].value
l1 = np.log( mynorm(data, [mu, sigma]) ).sum()
return(-l1) # return negative loglikelihood to minimize
myfoo = Parameters()
myfoo.add('mean', value = 20)
myfoo.add('std', value = 5.0)
loglikelihood(myfoo, data)
myparams = Parameters()
myparams.add('mean', value = 20.3)
myparams.add('std', value = 5.0)
out = minimize(fcn = loglikelihood, params=myparams, method='nelder', args=(data,))
print(out.userfcn(myparams, data)) # ~523.631337424
from lmfit import report_errors
report_errors(myparams) | Optimization/Maximum_likelihood_estimation.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
The estimated mean and standard deviation should be nearly identical to the mean
and the standard deviation of the sample | np.mean(data), np.std(data) | Optimization/Maximum_likelihood_estimation.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
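This match is expected: for a normal model the maximum-likelihood estimates have a closed form, and np.std uses the same 1/n denominator by default:

$$\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \hat{\mu}\right)^2}$$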
<H2> Plot histogram and model together </H2> | # Compute binwidth
counts, binedge = np.histogram(data, bins=20);
bincenter = [0.5 * (binedge[i] + binedge[i+1]) for i in range(len(binedge)-1)]  # range (Python 3) instead of xrange
binwidth = (max(bincenter) - min(bincenter)) / len(bincenter)
# Adjust PDF function to data
ivar = np.linspace(0, 100, 100)
params = [ myparams['mean'].value, myparams['std'].value ]
mynormpdf = mynorm(ivar, params)*binwidth*len(data)
# Plot everything together
plt.hist(data, bins=20, facecolor='white', histtype='stepfilled');
plt.plot(ivar, mynormpdf);
plt.ylabel('Number of events'); | Optimization/Maximum_likelihood_estimation.ipynb | JoseGuzman/myIPythonNotebooks | gpl-2.0 |
<a name="part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan"></a>
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
<a name="introduction"></a>
Introduction
Recall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The "generator" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as: 1, the image you gave me was real, or 0, the image you gave me was generated by the generator, it's a FAKE! So the discriminator is like an encoder which takes an image and then performs lie detection. Are you feeding me lies? Or is the image real?
Consider the AE and VAE we trained in Session 3. The loss function operated partly on the input space. It said, per pixel, what is the difference between my reconstruction and the input image? The l2-loss per pixel. Recall at that time we suggested that this wasn't the best idea because per-pixel differences aren't representative of our own perception of the image. One way to consider this is if we had the same image, and translated it by a few pixels. We would not be able to tell the difference, but the per-pixel difference between the two images could be enormously high.
The GAN does not use per-pixel difference. Instead, it trains a distance function: the discriminator. The discriminator takes in two images, the real image and the generated one, and learns what a similar image should look like! That is really the amazing part of this network and has opened up some very exciting potential future directions for unsupervised learning. Another network that also learns a distance function is known as the siamese network. We didn't get into this network in this course, but it is commonly used in facial verification, or asserting whether two faces are the same or not.
The GAN network is notoriously a huge pain to train! For that reason, we won't actually be training it. Instead, we'll discuss an extension to this basic network called the VAEGAN which uses the VAE we created in Session 3 along with the GAN. We'll then train that network in Part 2. For now, let's stick with creating the GAN.
Let's first create the two networks: the discriminator and the generator. We'll first begin by building a general purpose encoder which we'll use for our discriminator. Recall that we've already done this in Session 3. What we want is for the input placeholder to be encoded using a list of dimensions for each of our encoder's layers. In the case of a convolutional network, our list of dimensions should correspond to the number of output filters. We also need to specify the kernel heights and widths for each layer's convolutional network.
We'll first need a placeholder. This will be the "real" image input to the discriminator and the discrimintator will encode this image into a single value, 0 or 1, saying, yes this is real, or no, this is not real.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | # We'll keep a variable for the size of our image.
n_pixels = 32
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# And then create the input image placeholder, reusing the shape defined above
X = tf.placeholder(dtype=tf.float32, name='X', shape=input_shape) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-encoder"></a>
Building the Encoder
Let's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | def encoder(x, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
# This requirs the number of output filter,
# and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(h, channels[layer_i], k_h = filter_sizes[layer_i], k_w=filter_sizes[layer_i],reuse=reuse)
# Now apply the activation function
h = activation(h)
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-discriminator-for-the-training-samples"></a>
Building the Discriminator for the Training Samples
Finally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's, linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | def discriminator(X,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
# Encode X:
H, Hs = encoder(X, channels, filter_sizes, activation, reuse)
# Now make one last layer with just 1 output. We'll
# have to reshape to 2-d so that we can create a fully
# connected layer:
shape = H.get_shape().as_list()
H = tf.reshape(H, [-1, shape[1] * shape[2] * shape[3]])
# Now we can connect our 2D layer to a single neuron output w/
# a sigmoid activation:
D, W = utils.linear(H, 1, activation=tf.nn.sigmoid, reuse=reuse, name='FCNN')
return D | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's create the discriminator for the real training data coming from X: | D_real = discriminator(X) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
And we can see what the network looks like now: | graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-decoder"></a>
Building the Decoder
Now we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above.
Let's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image. What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3.
But how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent, to height x width x channels through a series of convolutional transpose layers. One way to approach this is to think of the reverse process. Starting from the final decoding of height x width x channels, each layer towards the input halves the height and width, since we use a stride of 2. So the second to last decoder layer would be height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want.
Let's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator). | # We'll need some variables first. This will be how many
# channels our generator's feature vector has. Experiment w/
# this if you are training your own network.
n_code = 16
# And in total how many features it has, including the spatial dimensions.
n_latent = (n_pixels // 16) * (n_pixels // 16) * n_code
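# e.g. with n_pixels = 32 and n_code = 16: (32 // 16) * (32 // 16) * 16 = 64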
# Let's build the 2-D placeholder, which is the 1-d feature vector for every
# element in our batch. We'll then reshape this to 4-D for the decoder.
Z = tf.placeholder(name='Z', shape=[None, n_latent], dtype=tf.float32)
# Now we can reshape it to input to the decoder. Here we have to
# be mindful of the height and width as described before. We need
# to make the height and width a factor of the final height and width
# that we want. Since we are using strided convolutions of 2, then
# we can say with 4 layers, that the first decoder layer should be:
# n_pixels / 2 / 2 / 2 / 2, or n_pixels / 16:
Z_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code]) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we'll build the decoder in much the same way as we built our encoder, and exactly as we've done in Session 3! We'll interpret the dimensions as the height and width of the tensor in each new layer, channels as how many output filters we want for each layer, and filter_sizes as the size of the kernels used for convolution. We'll default to using a stride of two, which will upsample each new layer (the transpose of the strided convolutions that downsampled our encoder). We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network. | def decoder(z, dimensions, channels, filter_sizes,
activation=tf.nn.relu, reuse=None):
h = z
hs = []
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = activation(h)
hs.append(h)
return h, hs | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-generator"></a>
Building the Generator
Now we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. we used a [None, 32, 32, 3] input to the discriminator, so our generator needs to also output [None, 32, 32, 3] tensors. In other words, we have to ensure the last element in our dimensions list is n_pixels (32 here), and the last element in our channels list is n_channels (3).
def generator(Z,
dimensions=[n_pixels//8, n_pixels//4, n_pixels//2, n_pixels],
channels=[50, 50, 50, n_channels],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu):
with tf.variable_scope('generator'):
        # Reshape the 2-d input into the 4-d tensor the decoder expects,
        # so the function uses its own Z argument rather than a global
        Z_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code])
        G, Hs = decoder(Z_tensor, dimensions, channels, filter_sizes, activation)
return G | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image. | G = generator(Z)
graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-discriminator-for-the-generated-samples"></a>
Building the Discriminator for the Generated Samples
Lastly, we need another discriminator which takes as input our generated images. Recall the discriminator that we have made only takes as input our placeholder X which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We just create a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we have created for our first discriminator. But we are going to use the exact same variables as our first discriminator, so that we optimize the same values. | D_fake = discriminator(G, reuse=True) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator. | nb_utils.show_graph(graph.as_graph_def()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="gan-loss-functions"></a>
GAN Loss Functions
We now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pit them against each other and see which one wins! Exciting times!
Recall from Session 3's Supervised Network, we created a binary classification task: music or speech. We again have a binary classification task: real or fake. So our loss metric will again use the binary cross entropy to measure the loss of our three different modules: the generator, the discriminator for our real images, and the discriminator for our generated images.
To find out the loss function for our generator network, answer the question: what makes the generator successful? Successfully fooling the discriminator. When does that happen? When the discriminator for the fake samples produces all ones. So our binary cross entropy will measure the cross entropy between our predicted distribution and the true distribution, which has all ones. | with tf.variable_scope('loss/generator'):
loss_G = tf.reduce_mean(utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake))) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
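For reference, a binary cross entropy between a prediction z in (0, 1) and a target x can be written as below. This is a minimal sketch of what we're assuming utils.binary_cross_entropy computes (check libs/utils for the actual implementation): | def binary_cross_entropy(z, x, name=None):
    """Binary cross entropy between predictions z in (0, 1) and targets x."""
    with tf.variable_scope(name or 'bce'):
        # A small epsilon keeps the log from blowing up at 0 or 1
        eps = 1e-12
        return -(x * tf.log(z + eps) + (1. - x) * tf.log(1. - z + eps)) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |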
What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | with tf.variable_scope('loss/discriminator/real'):
    loss_D_real = utils.binary_cross_entropy(D_real, tf.ones_like(D_real))
with tf.variable_scope('loss/discriminator/fake'):
    loss_D_fake = utils.binary_cross_entropy(D_fake, tf.zeros_like(D_fake))
with tf.variable_scope('loss/discriminator'):
loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2)
nb_utils.show_graph(graph.as_graph_def()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
With our loss functions, we can create an optimizer for the discriminator and generator:
<a name="building-the-optimizers-w-regularization"></a>
Building the Optimizers w/ Regularization
We're almost ready to create our optimizers. We just need to do one extra thing. Recall that our loss for our generator has a flow from the generator through the discriminator. If we are training both the generator and the discriminator, we have two measures which both try to optimize the discriminator, but in opposite ways: the generator's loss would try to optimize the discriminator to be bad at its job, and the discriminator's loss would try to optimize it to be good at its job. This would be counter-productive, trying to optimize opposing losses. What we want is for the generator to get better, and the discriminator to get better. Not for the discriminator to get better, then get worse, then get better, etc... The way we do this is when we optimize our generator, we let the gradient flow through the discriminator, but we do not update the variables in the discriminator. Let's try and grab just the discriminator variables and just the generator variables below:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | # Grab just the variables corresponding to the discriminator
# and just the generator:
vars_d = [v for v in tf.trainable_variables()
if v.name.startswith('discriminator')]
print('Training discriminator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('discriminator')]
vars_g = [v for v in tf.trainable_variables()
          if v.name.startswith('generator')]
print('Training generator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('generator')] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
We can also apply regularization to our network. This will penalize weights in the network for growing too large. | d_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_d)
g_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_g) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so: | learning_rate = 0.0001
lr_g = tf.placeholder(tf.float32, shape=[], name='learning_rate_g')
lr_d = tf.placeholder(tf.float32, shape=[], name='learning_rate_d') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders.
With your optimizers, you can now train the network by "running" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator, and do the same for the generator's optimizer:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(loss_G + g_reg, var_list=vars_g)
opt_d = tf.train.AdamOptimizer(learning_rate=lr_d).minimize(loss_D + d_reg, var_list=vars_d) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
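With everything in place, a training loop might look something like the sketch below. We won't actually run this here; batch_generator is a hypothetical iterator over your real images, and the number of epochs, batch size, and noise distribution are all assumptions to adapt to your own data: | import numpy as np

n_epochs = 10
batch_size = 64
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(n_epochs):
        # batch_generator is assumed to yield arrays of shape [batch_size, 32, 32, 3]
        for batch_X in batch_generator(batch_size):
            # Sample random feature vectors for the generator
            batch_Z = np.random.uniform(-1.0, 1.0, [batch_size, n_latent])
            # Update the discriminator: it needs both real and generated images
            sess.run(opt_d, feed_dict={X: batch_X, Z: batch_Z, lr_d: learning_rate})
            # Update the generator: it only needs the feature vectors
            sess.run(opt_g, feed_dict={Z: batch_Z, lr_g: learning_rate}) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |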