markdown | code | path | repo_name | license |
---|---|---|---|---|
Load the data
To get started, let's look at the top of the CSV file to see how it is formatted. | !head {train_file_path} | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
You can [load this using pandas](pandas.ipynb) and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with [TensorFlow and tf.data](../../guide/data.ipynb), use the tf.data.experimental.make_csv_dataset function:
The only column you need to identify explicitly is the one with the value the model is meant to predict. | LABEL_COLUMN = 'survived'
LABELS = [0, 1] | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
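As a quick aside, the pandas route mentioned above would look roughly like the sketch below. This is only an illustration: it assumes tf and train_file_path are already defined earlier in the notebook, and the column choices are arbitrary.
import pandas as pd
import numpy as np
titanic_df = pd.read_csv(train_file_path)
labels = titanic_df.pop('survived').values  # NumPy array of 0/1 labels
features = titanic_df[['age', 'fare']].fillna(0).to_numpy(dtype=np.float32)
# The NumPy arrays can be handed straight to tf.data (or to model.fit):
pandas_dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(32)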
Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset) | def get_dataset(file_path, **kwargs):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=5, # Artificially small to make example display easier
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True,
**kwargs)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print("{:20s}: {}".format(key,value.numpy())) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Each item in the dataset is a batch, represented as a tuple of (*many examples*, *many labels*). The data from the examples is organized into column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
It might help to see this for yourself. | show_batch(raw_train_data) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first row, pass them as a list of strings to the column_names argument of make_csv_dataset. | CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
show_batch(temp_dataset) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use and pass it to the (optional) select_columns argument of the constructor. | SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'class', 'deck', 'alone']
temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)
show_batch(temp_dataset) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Data preprocessing
A CSV file can contain a variety of data types. Typically, you want to convert from those mixed types to a fixed-length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions: tf.feature_column; see [this tutorial](../keras/feature_columns) for details.
You can preprocess your data using any tool you like (such as [nltk](https://www.nltk.org/) or [sklearn](https://scikit-learn.org/stable/)) and just pass the processed output to TensorFlow.
The main advantage of doing the preprocessing inside your model is that when you export the model, it includes the preprocessing. That way, you can pass the raw data directly to your model.
Continuous data
If your data is already in an appropriate numeric format, you can pack it into a vector before passing it to the model: | SELECT_COLUMNS = ['survived', 'age', 'n_siblings_spouses', 'parch', 'fare']
DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset(train_file_path,
select_columns=SELECT_COLUMNS,
column_defaults = DEFAULTS)
show_batch(temp_dataset)
example_batch, labels_batch = next(iter(temp_dataset)) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Here is a simple function that will pack all the columns together: | def pack(features, label):
return tf.stack(list(features.values()), axis=-1), label | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Apply this to each element of the dataset: | packed_dataset = temp_dataset.map(pack)
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy()) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
If you have mixed data types, you may want to separate out these simple numeric fields. The tf.feature_column API can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset: | show_batch(raw_train_data)
example_batch, labels_batch = next(iter(temp_dataset)) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
So define a more general preprocessor that selects a list of numeric features and packs them into a single column: | class PackNumericFeatures(object):
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_features = [features.pop(name) for name in self.names]
numeric_features = [tf.cast(feat, tf.float32) for feat in numeric_features]
numeric_features = tf.stack(numeric_features, axis=-1)
features['numeric'] = numeric_features
return features, labels
NUMERIC_FEATURES = ['age','n_siblings_spouses','parch', 'fare']
packed_train_data = raw_train_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(
PackNumericFeatures(NUMERIC_FEATURES))
show_batch(packed_train_data)
example_batch, labels_batch = next(iter(packed_train_data)) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Data normalization
Continuous data should always be normalized. | import pandas as pd
import numpy as np  # np is used below but is not imported elsewhere in this excerpt
desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()
desc
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])
def normalize_numeric_data(data, mean, std):
# Center the data
return (data-mean)/std
| site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Now create a numeric column. The tf.feature_column.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer_fn using [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial). | import functools  # functools is used below but is not imported elsewhere in this excerpt
# See what you just created.
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
numeric_column = tf.feature_column.numeric_column('numeric', normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)])
numeric_columns = [numeric_column]
numeric_column | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
When you train the model, include this feature column to select and center this block of numeric data: | example_batch['numeric']
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
numeric_layer(example_batch).numpy() | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
The mean-based normalization used here requires knowing the mean of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, their content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column. | CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southampton', 'Queenstown'],
'alone' : ['y', 'n']
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# See what you just created.
categorical_columns
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(example_batch).numpy()[0]) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
This will become part of the data-processing input later, when you build the model.
Combined preprocessing layer
Add the two feature-column collections together and pass them to tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types: | preprocessing_layer = tf.keras.layers.DenseFeatures(categorical_columns+numeric_columns)
print(preprocessing_layer(example_batch).numpy()[0]) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Build the model
Create a tf.keras.Sequential, starting with the preprocessing_layer. | model = tf.keras.Sequential([
preprocessing_layer,
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1),
])
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy']) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Train, evaluate, and predict
Now the model can be instantiated and trained. | train_data = packed_train_data.shuffle(500)
test_data = packed_test_data
model.fit(train_data, epochs=20) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Once the model is trained, you can check its accuracy on the test_data set. | test_loss, test_accuracy = model.evaluate(test_data)
print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy)) | site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Use tf.keras.Model.predict to infer labels on a batch or on a dataset of batches. | predictions = model.predict(test_data)
# Show some results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
print("Predicted survival: {:.2%}".format(prediction[0]),
" | Actual outcome: ",
("SURVIVED" if bool(survived) else "DIED"))
| site/pt-br/tutorials/load_data/csv.ipynb | tensorflow/docs-l10n | apache-2.0 |
Import TensorFlow and enable eager execution | import tensorflow as tf
tf.enable_eager_execution()
import glob
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import time
from IPython import display | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Load the dataset
We are going to use the MNIST dataset to train the generator and the discriminator. The generator will generate handwritten digits resembling the MNIST data. | (train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
BUFFER_SIZE = 60000
BATCH_SIZE = 256 | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
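Note that the training loop further down iterates over a train_dataset object that this excerpt never constructs. A minimal sketch of the assumed construction (shuffling and batching the normalized images) is:
# Assumed construction of the dataset consumed by train(dataset, epochs) below
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)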
Create the models
We will use tf.keras Sequential API to define the generator and discriminator models.
The Generator Model
The generator is responsible for creating convincing images that are good enough to fool the discriminator. The network architecture for the generator consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image two times in order to reach the desired image size of 28x28x1. We increase the width and height, and reduce the depth as we move through the layers in the network. We use Leaky ReLU activation for each layer except for the last one where we use a tanh activation. | def make_generator_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.LeakyReLU())
model.add(tf.keras.layers.Reshape((7, 7, 256)))
assert model.output_shape == (None, 7, 7, 256) # Note: None is the batch size
model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 7, 7, 128)
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.LeakyReLU())
model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 14, 14, 64)
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.LeakyReLU())
model.add(tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
The Discriminator model
The discriminator is responsible for distinguishing fake images from real images. It's similar to a regular CNN-based image classifier. | def make_discriminator_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
model.add(tf.keras.layers.LeakyReLU())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(tf.keras.layers.LeakyReLU())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1))
return model
generator = make_generator_model()
discriminator = make_discriminator_model() | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Define the loss functions and the optimizer
Let's define the loss functions and the optimizers for the generator and the discriminator.
Generator loss
The generator loss is a sigmoid cross entropy loss of the generated images and an array of ones, since the generator is trying to generate fake images that resemble the real images. | def generator_loss(generated_output):
return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Discriminator loss
The discriminator loss function takes two inputs: real images, and generated images. Here is how to calculate the discriminator loss:
1. Calculate real_loss which is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images).
2. Calculate generated_loss which is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images).
3. Calculate the total_loss as the sum of real_loss and generated_loss. | def discriminator_loss(real_output, generated_output):
# [1,1,...,1] with real output since it is true and we want our generated examples to look like it
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output)
# [0,0,...,0] with generated images since they are fake
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output)
total_loss = real_loss + generated_loss
return total_loss | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
The discriminator and the generator optimizers are different since we will train two networks separately. | generator_optimizer = tf.train.AdamOptimizer(1e-4)
discriminator_optimizer = tf.train.AdamOptimizer(1e-4) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Set up GANs for Training
Now it's time to put together the generator and discriminator to set up the Generative Adversarial Networks, as you see in the diagram at the beginning of the tutorial.
Define training parameters | EPOCHS = 50
noise_dim = 100
num_examples_to_generate = 16
# We'll re-use this random vector used to seed the generator so
# it will be easier to see the improvement over time.
random_vector_for_generation = tf.random_normal([num_examples_to_generate,
noise_dim]) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
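One more gap in this excerpt: the train() function below saves through checkpoint and checkpoint_prefix, which are never defined here. A plausible setup, assuming the usual tf.train.Checkpoint pattern, would be:
# Assumed checkpoint setup used by the training loop below
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
                                 discriminator_optimizer=discriminator_optimizer,
                                 generator=generator,
                                 discriminator=discriminator)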
Define training method
We start by iterating over the dataset. The generator is given a random vector as an input which is processed to output an image looking like a handwritten digit. The discriminator is then shown the real MNIST images as well as the generated images.
Next, we calculate the generator and the discriminator loss. Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables. | def train_step(images):
# generating noise from a normal distribution
noise = tf.random_normal([BATCH_SIZE, noise_dim])
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
generated_images = generator(noise, training=True)
real_output = discriminator(images, training=True)
generated_output = discriminator(generated_images, training=True)
gen_loss = generator_loss(generated_output)
disc_loss = discriminator_loss(real_output, generated_output)
gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables)
generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables))
discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables)) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
This model takes about ~30 seconds per epoch to train on a single Tesla K80 on Colab, as of October 2018.
Eager execution can be slower than executing the equivalent graph as it can't benefit from whole-program optimizations on the graph, and also incurs overheads of interpreting Python code. By using tf.contrib.eager.defun to create graph functions, we get a ~20 secs/epoch performance boost (from ~50 secs/epoch down to ~30 secs/epoch). This way we get the best of both eager execution (easier for debugging) and graph mode (better performance). | train_step = tf.contrib.eager.defun(train_step)
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for images in dataset:
train_step(images)
display.clear_output(wait=True)
generate_and_save_images(generator,
epoch + 1,
random_vector_for_generation)
# saving (checkpoint) the model every 15 epochs
if (epoch + 1) % 15 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec'.format(epoch + 1,
time.time()-start))
# generating after the final epoch
display.clear_output(wait=True)
generate_and_save_images(generator,
epochs,
random_vector_for_generation) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Generate and save images | def generate_and_save_images(model, epoch, test_input):
# make sure the training parameter is set to False because we
# don't want to train the batchnorm layer when doing inference.
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show() | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Train the GANs
We will call the train() method defined above to train the generator and discriminator simultaneously. Note, training GANs can be tricky. It's important that the generator and discriminator do not overpower each other (e.g., that they train at a similar rate).
At the beginning of the training, the generated images look like random noise. As training progresses, you can see the generated digits look increasingly real. After 50 epochs, they look very much like the MNIST digits. | %%time
train(train_dataset, EPOCHS) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Generated images
After training, it's time to generate some images!
The last step is to plot the generated images and voila! | # Display a single image using the epoch number
def display_image(epoch_no):
return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS) | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Generate a GIF of all the saved images
We will use imageio to create an animated gif using all the images saved during training. | with imageio.get_writer('dcgan.gif', mode='I') as writer:
filenames = glob.glob('image*.png')
filenames = sorted(filenames)
last = -1
for i,filename in enumerate(filenames):
frame = 2*(i**0.5)
if round(frame) > round(last):
last = frame
else:
continue
image = imageio.imread(filename)
writer.append_data(image)
image = imageio.imread(filename)
writer.append_data(image)
# this is a hack to display the gif inside the notebook
os.system('cp dcgan.gif dcgan.gif.png') | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
Display the animated gif with all the images generated during the training of GANs. | display.Image(filename="dcgan.gif.png")
Download the animated gif
Uncomment the code below to download an animated gif from Colab. | #from google.colab import files
#files.download('dcgan.gif') | tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb | asimshankar/tensorflow | apache-2.0 |
The LArray library offers several methods and functions to combine arrays:
insert: inserts an array in another array along an axis
append: adds an array at the end of an axis.
prepend: adds an array at the beginning of an axis.
extend: extends an array along an axis.
stack: combines several arrays along a new axis.
Insert | other_countries = zeros((Axis('country=Luxembourg,Netherlands'), gender, time), dtype=int)
# insert new countries before 'France'
population_new_countries = population.insert(other_countries, before='France')
population_new_countries
# insert new countries after 'France'
population_new_countries = population.insert(other_countries, after='France')
population_new_countries | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
See insert for more details and examples.
Append
Append one element to an axis of an array: | # append data for 'Luxembourg'
population_new = population.append('country', population_benelux['Luxembourg'], 'Luxembourg')
population_new | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
The value being appended can have missing (or even extra) axes as long as common axes are compatible: | population_lux = Array([-1, 1], gender)
population_lux
population_new = population.append('country', population_lux, 'Luxembourg')
population_new | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
See append for more details and examples.
Prepend
Prepend one element to an axis of an array: | # append data for 'Luxembourg'
population_new = population.prepend('country', population_benelux['Luxembourg'], 'Luxembourg')
population_new | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
See prepend for more details and examples.
Extend
Extend an array along an axis with another array with that axis (but other labels) | population_extended = population.extend('country', population_benelux[['Luxembourg', 'Netherlands']])
population_extended | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
See extend for more details and examples.
Stack
Stack several arrays together to create an entirely new dimension | # imagine you have loaded data for each country in different arrays
# (e.g. loaded from different Excel sheets)
population_be = population['Belgium']
population_fr = population['France']
population_de = population['Germany']
print(population_be)
print(population_fr)
print(population_de)
# create a new array with an extra axis 'country' by stacking the three arrays population_be/fr/de
population_stacked = stack({'Belgium': population_be, 'France': population_fr, 'Germany': population_de}, 'country')
population_stacked | doc/source/tutorial/tutorial_combine_arrays.ipynb | gdementen/larray | gpl-3.0 |
Attribute data from a csv file and a W from a gal file | mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
mexico.fieldNames
w = ps.open(ps.examples.get_path('mexico.gal')).read()
w.n
cp.addRook2Layer(ps.examples.get_path('mexico.gal'), mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.fieldNames
mexico.getVars('pcgdp1940')
# mexico example all together
csvfile = ps.examples.get_path('mexico.csv')
galfile = ps.examples.get_path('mexico.gal')
mexico = cp.importCsvData(csvfile)
cp.addRook2Layer(galfile, mexico)
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
mexico.region2areas.index(2)
mexico.Wrook[0]
mexico.getVars('State')
regions = np.array(mexico.region2areas)
regions
Counter(regions) | pysal/contrib/clusterpy/clusterpy.ipynb | hasecbinusr/pysal | bsd-3-clause |
Attribute data from a csv file and an external W object | mexico = cp.importCsvData(ps.examples.get_path('mexico.csv'))
w = ps.open(ps.examples.get_path('mexico.gal')).read()
cp.addW2Layer(w, mexico)
mexico.Wrook
mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0) | pysal/contrib/clusterpy/clusterpy.ipynb | hasecbinusr/pysal | bsd-3-clause |
Shapefile and mapping results with PySAL Viz | usf = ps.examples.get_path('us48.shp')
us = cp.loadArcData(usf.split(".")[0])
us.Wqueen
us.fieldNames
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
pci
usy = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), usy)
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, usy, names)
names
usy.fieldNames
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
#mexico.cluster('arisel', ['pcgdp1940'], 5, wType='rook', inits=10, dissolve=0)
us = cp.Layer()
cp.addQueen2Layer(ps.examples.get_path('states48.gal'), us)
uscsv = ps.examples.get_path("usjoin.csv")
f = ps.open(uscsv)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)]).T
names = ["Y_%d"%v for v in range(1929,2010)]
cp.addArray2Layer(pci, us, names)
usy.cluster('arisel', ['Y_1980'], 8, wType='queen', inits=10, dissolve=0)
us_alpha = cp.importCsvData(ps.examples.get_path('usjoin.csv'))
alpha_fips = us_alpha.getVars('STATE_FIPS')
alpha_fips
dbf = ps.open(ps.examples.get_path('us48.dbf'))
dbf.header
state_fips = dbf.by_col('STATE_FIPS')
names = dbf.by_col('STATE_NAME')
names
state_fips = map(int, state_fips)
state_fips
# the csv file has the states ordered alphabetically, but this isn't the case for the order in the shapefile so we have to reorder before any choropleths are drawn
alpha_fips = [i[0] for i in alpha_fips.values()]
reorder = [ alpha_fips.index(s) for s in state_fips]
regions = usy.region2areas
regions
from pysal.contrib.viz import mapping as maps
shp = ps.examples.get_path('us48.shp')
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values')
names = ["Y_%d"%i for i in range(1929, 2010)]
#usy.cluster('arisel', ['Y_1929'], 8, wType='queen', inits=10, dissolve=0)
usy.cluster('arisel', names, 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='All Years')
ps.version
usy.cluster('arisel', names[:40], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1929-68')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='1969-2009')
usy.cluster('arisel', names[40:], 8, wType='queen', inits=10, dissolve=0)
usy.dataOperation("CONSTANT = 1")
usy.Wrook = usy.Wqueen
usy.cluster('maxpTabu', ['Y_1929', 'Y_1929'], threshold=1000, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
Counter(regions)
usy.getVars('Y_1929')
usy.Wrook
usy.cluster('maxpTabu', ['Y_1929', 'CONSTANT'], threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929')
regions
Counter(regions)
vars = names
vars.append('CONSTANT')
vars
usy.cluster('maxpTabu', vars, threshold=8, dissolve=0)
regions = usy.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions[reorder], 'unique_values', title='maxp 1929-2009')
Counter(regions)
south = cp.loadArcData(ps.examples.get_path("south.shp"))
south.fieldNames
# uncomment if you have some time ;->
#south.cluster('arisel', ['HR70'], 20, wType='queen', inits=10, dissolve=0)
#regions = south.region2areas
shp = ps.examples.get_path('south.shp')
#maps.plot_choropleth(shp, np.array(regions), 'unique_values')
south.dataOperation("CONSTANT = 1")
south.cluster('maxpTabu', ['HR70', 'CONSTANT'], threshold=70, dissolve=0)
regions = south.region2areas
regions = np.array(regions)
maps.plot_choropleth(shp, regions, 'unique_values', title='maxp HR70 threshold=70')
Counter(regions) | pysal/contrib/clusterpy/clusterpy.ipynb | hasecbinusr/pysal | bsd-3-clause |
Now you are ready to run the main part of the code. Click on the cell below and then type Shift+Enter. If you do not get an error, you should receive a message indicating that a new file has been written to your output folder. | # Import the os package to manage file paths
import os
# Create input and out file paths
input_file = os.path.join(input_path, filename)
output_file = os.path.join(output_path, filename)
# Open the input file and read it
f = open(input_file)
text = f.read()
f.close()
print("The input file says: " + text)
# Convert the text to lower case
text = text.lower()
# Open the output file for writing and save the new text to it
output_file = os.path.join(output_path, "file2.txt")
f = open(output_file, "w")
f.write(text)
f.close()
print("I've just written a new file called 'file2.txt' to your output folder. Check it out!") | we1s-test/ipython-notebook-test.ipynb | scottkleinman/WE1S | mit |
1. Data exploration and cleaning
The dataset is composed of several files. First, we are going to explore each of them and clean some variables. For a complete explanation of each file, please see the file DATA.md.
1.1 file 'train_users_2.csv'
This file is the most important file in our dataset as it contains the users, information about them and the country of destination.
When a user has booked a trip through Airbnb, the destination country is specified. Otherwise, 'NDF' is indicated. | filename = "train_users_2.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
There are missing values in the following columns:
date_first_booking : users that never booked an airbnb apartment
gender : users that didn't wish to specify their gender
age : users that didn't wish to specify their age
first_affiliate_tracked : problem of missing data
We will go over each of these variables and decide what to do with the missing values | df.isnull().any() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Ages
There are 2 problems regarding ages in the dataset.
First, many users did not specify an age.
Also, some users specified their year of birth instead of age.
To keep the data relevant, we will keep users aged between 15 and 100 years old who actually specified their age.
For the others, we will naively assign a value of -1. | df = preprocessing_helper.cleanAge(df,'k') | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
The following graph presents the distribution of ages in the dataset. Also, the irrelevant ages are represented here, with their value of -1. | preprocessing_helper.plotAge(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Gender
The following graph highlights the gender of Airbnb users. Note that around 45% did not specify their gender. | df = preprocessing_helper.cleanGender(df)
preprocessing_helper.plotGender(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
first_affiliate_tracked feature
Set the first marketing channel the user interacted with before signing up to 'Untracked' if it is not specified. | df = preprocessing_helper.cleanFirst_affiliate_tracked(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Date_first_booking
This distribution is very similar to that of the dates when accounts were created. Despite the strong growth of Airbnb bookings over the years, the differences between months become more pronounced as each year's curve grows.
By studying each year independently, we can see that four peaks arise each month, corresponding to a certain day of the week. The following plots show the distribution of bookings over the months, and later over the days of the week.
The most prolific single day for Airbnb counted 248 bookings. | df = preprocessing_helper.cleanDate_First_booking(df)
preprocessing_helper.plotDate_First_booking_years(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
This histogram shows that bookings are fairly well spread over the year. Far fewer bookings are made during November and December, while May and June are the months when users book the most. For these two months, Airbnb counts more than 20000 bookings, which corresponds to almost a quarter of the bookings in our dataset. | preprocessing_helper.plotDate_First_booking_months(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
As with the day when most accounts are created, Tuesday and Wednesday seem to be the days when people book the most apartments on Airbnb. | preprocessing_helper.plotDate_First_booking_weekdays(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Save cleaned and explored file | filename = "cleaned_train_user.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(df, fileAdress) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
1.2 file 'test_user.csv'
This file has a similar structure to train_users_2.csv, so here we will just apply the same cleaning process. | # extract file
filename = "test_users.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
# process file
df = preprocessing_helper.cleanAge(df,'k')
df = preprocessing_helper.cleanGender(df)
df = preprocessing_helper.cleanFirst_affiliate_tracked(df)
# save file
filename = "cleaned_test_user.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(df, fileAdress) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
1.3 file 'countries.csv'
This file presents a summary of the countries present in the dataset.
These are the country codes:
- 'AU' = Australia
- 'ES' = Spain
- 'PT' = Portugal
- 'US' = USA
- 'FR' = France
- 'CA' = Canada
- 'GB' = Great Britain
- 'IT' = Italy
- 'NL' = Netherlands
- 'DE' = Germany
- 'NDF'= No destination found
All the variables are calculated with respect to the US and English. The Levenshtein distance indicates how far the language spoken in the destination country is from English. All the other variables are general geographic attributes. This file will not be used in our model as it does not give direct information about the users. | filename = "countries.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df
df.describe() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
1.4 file 'age_gender_bkts.csv'
This file presents demographic statistics about each country present in our dataset. This file will not be used in our model. | filename = "age_gender_bkts.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Population total per country
The following table shows the population of each country in 2015. These numbers match figures that can be found on the web. | df_country = df.groupby(['country_destination'],as_index=False).sum()
df_country | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
1.5 file 'sessions.csv'
This file keeps a track of each action made by each user (represented by their id). For each action (lookup, search etc...), the device type is saved so as the time spend for this action. | filename = "sessions.csv"
folder = 'data'
fileAdress = os.path.join(folder, filename)
df = pd.read_csv(fileAdress)
df.head() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
NaN users
As we can see, some rows are missing a user_id. Without a user_id, it is impossible to link them to the train_users_2.csv file, so we will delete them as we cannot do anything with them. | df.isnull().any()
df = preprocessing_helper.cleanSubset(df, 'user_id') | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Invalid session time
If a session time is NaN, there was probably an error during the session. We are not going to remove the corresponding rows, because they still contain data that is interesting for the action variables.
Instead, we are naively going to assign them a value of -1. | df['secs_elapsed'].fillna(-1, inplace = True) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Actions
Some actions produce -unknown- for action_type and/or action_detail. Sometimes they produce NaN. We replace the NaN values with -unknown- for action_type, action_detail and action. | df = preprocessing_helper.cleanAction(df) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
As shown in the following, there are no more NaN values. | df.isnull().any() | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Total number of actions per user
From the sessions, we can compute the total number of actions per user. Intuitively, a user with only a few actions in total might be a user who does not book in the end. This value will be used as a new feature for machine learning.
Note: The total number of actions is represented on a logarithmic basis. | # Get total number of action per user_id
data_session_number_action = preprocessing_helper.createActionFeature(df)
# Save to .csv file
filename = "total_action_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_session_number_action, fileAdress)
# Plot distribution total number of action per user_id
preprocessing_helper.plotActionFeature(data_session_number_action) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
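The cell above relies on preprocessing_helper.createActionFeature, whose implementation is not shown in this excerpt. An assumed pandas equivalent of the "total actions per user" feature could be as simple as:
# Rough, assumed equivalent of createActionFeature: count session rows per user
total_actions = (df.groupby('user_id')
                   .size()
                   .reset_index(name='n_actions'))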
Device types
There are 14 possible devices. Most of the users, however, are concentrated on three main devices. | preprocessing_helper.plotHist(df['device_type']) | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Time spent on average per user
The figure below shows the average time spent per user. This plot is related to the total-number-of-actions plot above, with two even clearer Gaussians.
We display only time > 20s.
This value will also be used as a feature for the machine learning. | # Get Time spent on average per user_id
data_time_mean = preprocessing_helper.createAverageTimeFeature(df)
# Save to .csv file
data_time_mean = data_time_mean.rename(columns={'user_id': 'id'})
filename = "time_mean_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_time_mean, fileAdress)
# Plot distribution average time of session per user_id
preprocessing_helper.plotTimeFeature(data_time_mean['secs_elapsed'],'mean') | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Time spent in total per user
The figure below shows the total amount of time spent per user.
We display only time > 20s
This is the third and last feature extracted from the sessions file for machine learning. Intuitively, a long time spent suggests a booking and possibly more distant destinations. | # Get Time spent in total per user_id
data_time_total = preprocessing_helper.createTotalTimeFeature(df)
# Save to .csv file
data_time_total = data_time_total.rename(columns={'user_id': 'id'})
filename = "time_total_user_id.csv"
folder = 'cleaned_data'
fileAdress = os.path.join(folder, filename)
preprocessing_helper.saveFile(data_time_total, fileAdress)
# Plot distribution total time of session per user_id
preprocessing_helper.plotTimeFeature(data_time_total['secs_elapsed'],'total') | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Distribution of time spent
This last graph shows the distribution of time spent in seconds per session.
We display only time > 20s | preprocessing_helper.plotTimeFeature(df['secs_elapsed'],'dist') | project/reports/airbnb_booking/Main Preprocessing.ipynb | mdeff/ntds_2016 | mit |
Split the following string
s = "Hi there Sam!"
into a list: | s = "Hi there Sam!"
s.split(' ') | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Given the following two variables:
planet = "Earth"
diameter = 12742
use the format() function to print the following string:
The diameter of Earth is 12742 kilometers. | planet = "Earth"
diameter = 12742
"The diameter of {0} is {1} kilometers.".format(planet, diameter) | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Given the following nested list, use indexing to grab the word 'hello': | lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]
lst[3][1][2][0] | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Given the following nested dictionary, grab the word "hello" from it: | d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}
d['k1'][3]['tricky'][3]['target'][3] | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Write a function that extracts the domain part of an email address like the following:
[email protected]
So, for this example, passing in "[email protected]" would return: domain.com | '[email protected]'.split('@')[-1]
def domain(email):
return email.split('@')[-1] | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Create a function that counts how many times 'dog' appears in the input string (please ignore corner cases). | ss = 'This dog runs faster than the other dog dude!'
def countdog(s):
return s.lower().split(' ').count('dog')
countdog(ss)
def countDog(st):
count = 0
for word in st.lower().split():
if word == 'dog':
count += 1
return count | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Create a function that checks whether 'dog' is contained in the input string (again, ignore corner cases). | s = 'I have a dog'
def judge_dog_in_str(s):
return 'dog' in s.lower().split(' ')
judge_dog_in_str(s) | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
If you drive too fast, a traffic officer will pull you over. Write a function that returns one of three possible results: "No Ticket", "Small Ticket", or "Big Ticket".
If the speed is 60 or less, the result is "No Ticket". If the speed is between 61 and 80, the result is "Small Ticket". If the speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (passed in as a boolean value): on your birthday you are allowed to go 5 km/h faster. (Again, please ignore corner cases.) | def caught_speeding(speed, is_birthday):
if is_birthday:
speeding = speed - 5
else:
speeding = speed
if speeding > 80:
return 'Big Ticket'
elif speeding > 60:
return 'Small Ticket'
else:
return 'No Ticket'
caught_speeding(81,True)
caught_speeding(81,False) | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Compute the Fibonacci sequence, implementing it with a generator. | def fib_dyn(n):
a,b = 1,1
for i in range(n-1):
a,b = b,a+b
return a
fib_dyn(10)
def fib_recur(n):
if n == 0:
return 0
if n == 1:
return 1
else:
return fib_recur(n-1) + fib_recur(n-2)
fib_recur(10)
def fib(max):
n, a, b = 0, 0, 1
while n < max:
yield b
# print(b)
a, b = b, a + b
n = n + 1
print(list(fib(10))[-1]) | training/submit/PythonExercises1stAnd2nd_sulution.ipynb | hanleilei/note | cc0-1.0 |
Basic training loops
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/guide/basic_training_loops" class=""><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/basic_training_loops.ipynb" class=""> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/basic_training_loops.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
In the previous guides, you learned about tensors, variables, gradient tape, and modules. In this guide, you will fit these all together to train models.
TensorFlow also includes the tf.Keras API, a high-level neural network API that provides useful abstractions to reduce boilerplate. However, in this guide you will use basic classes.
Setup | import tensorflow as tf
Solving machine learning problems
Solving a machine learning problem usually consists of the following steps:
Obtain training data.
Define the model.
Define a loss function.
Run through the training data, calculating loss from the ideal value.
Calculate gradients for that loss and use an optimizer to adjust the variables to fit the data.
Evaluate your results.
For illustration purposes, this guide develops a simple linear model, $f(x) = x * W + b$, which has two variables: $W$ (weights) and $b$ (bias).
This is the most basic of machine learning problems: given $x$ and $y$, try to find the slope and offset of a line via simple linear regression.
Data
Supervised learning uses inputs (usually denoted as x) and outputs (denoted y, often called labels). The goal is to learn from paired inputs and outputs so that you can predict the value of an output from an input.
In TensorFlow, each input of your data is almost always represented by a tensor, and is often a vector. In supervised learning, the output (or the value you would like to predict) is also a tensor.
Here is some data synthesized by adding Gaussian (normal) noise to points along a line. | # The actual line
TRUE_W = 3.0
TRUE_B = 2.0
NUM_EXAMPLES = 1000
# A vector of random x values
x = tf.random.normal(shape=[NUM_EXAMPLES])
# Generate some noise
noise = tf.random.normal(shape=[NUM_EXAMPLES])
# Calculate y
y = x * TRUE_W + TRUE_B + noise
# Plot all the data
import matplotlib.pyplot as plt
plt.scatter(x, y, c="b")
plt.show() | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
Tensors are usually gathered together in batches, or groups of inputs and outputs stacked together. Batching can confer some training benefits and works well with accelerators and vectorized computation. Given how small this dataset is, you can treat the entire dataset as a single batch.
Define the model
Use tf.Variable to represent all the weights in the model. A tf.Variable stores a value and provides it in tensor form as needed. See the variable guide for more details.
Use tf.Module to encapsulate the variables and the computation. You could use any Python object, but this way it can be easily saved.
Here, you define both w and b as variables. | class MyModel(tf.Module):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Initialize the weights to `5.0` and the bias to `0.0`
# In practice, these should be randomly initialized
self.w = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.w * x + self.b
model = MyModel()
# List the variables tf.modules's built-in variable aggregation.
print("Variables:", model.variables)
# Verify the model works
assert model(3.0).numpy() == 15.0 | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
The initial variables are set here in a fixed way, but Keras comes with a number of initializers you could use, with or without the rest of Keras.
Define a loss function
A loss function measures how well the output of a model for a given input matches the target output. The goal is to minimize this difference during training. Define the standard L2 loss, also known as the "mean squared" error: | # This computes a single loss value for an entire batch
def loss(target_y, predicted_y):
return tf.reduce_mean(tf.square(target_y - predicted_y)) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
Before training the model, you can visualize the loss value by plotting the model's predictions in red and the training data in blue. | plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()
print("Current loss: %1.6f" % loss(y, model(x)).numpy()) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
Define a training loop
The training loop consists of repeatedly performing three tasks in order:
Sending a batch of inputs through the model to generate outputs
Calculating the loss by comparing the outputs to the target outputs (or labels)
Using gradient tape to find the gradients
Optimizing the variables with those gradients
For this example, you can train the model using gradient descent.
There are many variants of the gradient descent scheme that are captured in tf.keras.optimizers. But in the spirit of building from first principles, here you will implement the basic math yourself, with the help of tf.GradientTape for automatic differentiation and tf.assign_sub for decrementing a value (which combines tf.assign and tf.sub). | # Given a callable model, inputs, outputs, and a learning rate...
def train(model, x, y, learning_rate):
with tf.GradientTape() as t:
# Trainable variables are automatically tracked by GradientTape
current_loss = loss(y, model(x))
# Use GradientTape to calculate the gradients with respect to W and b
dw, db = t.gradient(current_loss, [model.w, model.b])
# Subtract the gradient scaled by the learning rate
model.w.assign_sub(learning_rate * dw)
model.b.assign_sub(learning_rate * db) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
To take a look at the training, you can send the same batch of x and y through the training loop and see how W and b evolve. | model = MyModel()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
# Define a training loop
def training_loop(model, x, y):
for epoch in epochs:
# Update the model with the single giant batch
train(model, x, y, learning_rate=0.1)
# Track this before I update
Ws.append(model.w.numpy())
bs.append(model.b.numpy())
current_loss = loss(y, model(x))
print("Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f" %
(epoch, Ws[-1], bs[-1], current_loss))
print("Starting: W=%1.2f b=%1.2f, loss=%2.5f" %
(model.w, model.b, loss(y, model(x))))
# Do the training
training_loop(model, x, y)
# Plot it
plt.plot(epochs, Ws, "r",
epochs, bs, "b")
plt.plot([TRUE_W] * len(epochs), "r--",
[TRUE_B] * len(epochs), "b--")
plt.legend(["W", "b", "True W", "True b"])
plt.show()
# Visualize how the trained model performs
plt.scatter(x, y, c="b")
plt.scatter(x, model(x), c="r")
plt.show()
print("Current loss: %1.6f" % loss(model(x), y).numpy()) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
The same solution, but with Keras
It's useful to contrast the code above with the equivalent in Keras.
Defining the model looks exactly the same if you subclass tf.keras.Model. Remember that Keras models ultimately inherit from Module. | class MyModelKeras(tf.keras.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Initialize the weights to `5.0` and the bias to `0.0`
# In practice, these should be randomly initialized
self.w = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def call(self, x):
return self.w * x + self.b
keras_model = MyModelKeras()
# Reuse the training loop with a Keras model
training_loop(keras_model, x, y)
# You can also save a checkpoint using Keras's built-in support
keras_model.save_weights("my_checkpoint") | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
Rather than writing a new training loop each time you create a model, you can use Keras's built-in features as a shortcut. This can be useful when you do not want to write or debug Python training loops.
If you do, you will need to use model.compile() to set the parameters and model.fit() to train. It can be less code to use the Keras implementations of L2 loss and gradient descent, again as a shortcut. Keras losses and optimizers can also be used outside of these convenience functions, and the previous example could have used them. | keras_model = MyModelKeras()
# compile sets the training parameters
keras_model.compile(
# By default, fit() uses tf.function(). You can
# turn that off for debugging, but it is on now.
run_eagerly=False,
# Using a built-in optimizer, configuring as an object
optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
# Keras comes with built-in MSE error
# However, you could use the loss function
# defined above
loss=tf.keras.losses.mean_squared_error,
) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
Keras fit expects batched data or a complete dataset as a NumPy array. NumPy arrays are chopped into batches, with a default batch size of 32.
In this case, to match the behavior of the hand-written loop, you should pass x in as a single batch of size 1000. | print(x.shape[0])
keras_model.fit(x, y, epochs=10, batch_size=1000) | site/ko/guide/basic_training_loops.ipynb | tensorflow/docs-l10n | apache-2.0 |
For this example I am using the MNIST dataset of handwritten digit images. | scaler = StandardScaler()
mnist = fetch_mldata('MNIST original')
# converting data to be of type float .astype(float) to supress
# data conversion warrning during scaling
X= pd.DataFrame(scaler.fit_transform(mnist['data'].astype(float)))
y= pd.DataFrame(mnist['target'].astype(int))
# This function plots the given sample set of images as a grid with labels
# if labels are available.
def plot_sample(S,labels=None):
m, n = S.shape;
example_width = int(np.round(np.sqrt(n)));
example_height = int((n / example_width));
# Compute number of items to display
display_rows = int(np.floor(np.sqrt(m)));
display_cols = int(np.ceil(m / display_rows));
fig = plt.figure()
for i in range(0,m):
arr = S[i,:]
arr = arr.reshape((example_width,example_height))
ax = fig.add_subplot(display_rows,display_cols , i+1)
ax.imshow(arr, aspect='auto', cmap=plt.get_cmap('gray'))
if labels is not None:
ax.text(0,0, '{}'.format(labels[i]), bbox={'facecolor':'white', 'alpha':0.8,'pad':2})
ax.axis('off')
plt.show() | neural_network/recognize_hand_written_digits.ipynb | xsolo/machine-learning | mit |
Let's plot a random sample of 100 images | samples = X.sample(100)
plot_sample(samples.as_matrix()) | neural_network/recognize_hand_written_digits.ipynb | xsolo/machine-learning | mit |
Now, let's use a neural network with 1 hidden layer. The input layer has X_train.shape[1] features, which is 784 for these MNIST images (excluding the extra bias unit). | from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
# since the data we have is one big array, we want to split it into training
# and testing sets, the split is 70% goes to training and 30% of data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
neural_network =(80,)
# for this excersize we are using MLPClassifier with lbfgs optimizer (the family of quasi-Newton methods). In my simple
# experiments it produces good quality outcome
clf = MLPClassifier(solver='lbfgs', alpha=1, hidden_layer_sizes=neural_network)
clf.fit(X_train, y_train[0].ravel())
# So after the classifier is trained, lets see what it predicts on the test data
prediction = clf.predict(X_test)
quality = np.where(prediction == y_test[0].ravel(),1,0)
print ("Percentage of correct results is {:.04f}".format(accuracy_score(y_test,prediction)))
# I am going to use the same test set of data and will select random 48 examples from it.
# The top left corner is the prediction from the Neural Network
# please note that 0 is represented as 10 in this data set
samples = X_test.sample(100)
plot_sample(samples.as_matrix(),clf.predict(samples)) | neural_network/recognize_hand_written_digits.ipynb | xsolo/machine-learning | mit |
First let us import the module and create a GraphMap that persists in memory. | from graphmap.graphmap_main import GraphMap
from graphmap.memory_persistence import MemoryPersistence
G = GraphMap(MemoryPersistence()) | notebook/Getting_Started.ipynb | abhishekraok/GraphMap | apache-2.0 |
Let us create two nodes with images of Seattle skyline and Mt. Tacoma from wikimedia. | from graphmap.graph_helpers import NodeLink
seattle_skyline_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/2/2f/Space_Needle002.jpg/640px-Space_Needle002.jpg'
mt_tacoma_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/a2/Mount_Rainier_from_the_Silver_Queen_Peak.jpg/1024px-Mount_Rainier_from_the_Silver_Queen_Peak.jpg'
seattle_node_link = NodeLink('seattle')
mt_tacoma_node_link = NodeLink('tacoma')
G.create_node(root_node_link=seattle_node_link, image_value_link=seattle_skyline_image_url)
G.create_node(root_node_link=mt_tacoma_node_link, image_value_link=mt_tacoma_image_url) | notebook/Getting_Started.ipynb | abhishekraok/GraphMap | apache-2.0 |
Now that we have created the 'seattle' node let's see how it looks | seattle_pil_image_result = G.get_image_at_quad_key(root_node_link=seattle_node_link, resolution=256, quad_key='')
mt_tacoma_pil_image_result = G.get_image_at_quad_key(root_node_link=mt_tacoma_node_link, resolution=256, quad_key='')
import matplotlib.pyplot as plt
plt.imshow(seattle_pil_image_result.value)
plt.figure()
plt.imshow(mt_tacoma_pil_image_result.value) | notebook/Getting_Started.ipynb | abhishekraok/GraphMap | apache-2.0 |
Let us insert the 'tacoma' node into the 'seattle' node at the top right. The quad key we will use is 13: 1 corresponds to the top-right quadrant, and inside that we insert at the bottom right, hence 3. | insert_quad_key = '13'
created_node_link_result = G.connect_child(root_node_link=seattle_node_link,
quad_key=insert_quad_key,
child_node_link=mt_tacoma_node_link,)
print(created_node_link_result) | notebook/Getting_Started.ipynb | abhishekraok/GraphMap | apache-2.0 |
Let us see how the new_seattle_node looks after the insertion. | created_node_link = created_node_link_result.value
new_seattle_image_result = G.get_image_at_quad_key(created_node_link, resolution=256, quad_key='')
new_seattle_image_result
plt.imshow(new_seattle_image_result.value) | notebook/Getting_Started.ipynb | abhishekraok/GraphMap | apache-2.0 |
Model specification
Our initial beliefs about the parameters are quite informative (sd=1) and a bit off the true values. | basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=1)
beta0 = Normal('beta0', mu=12, sd=1)
beta1 = Normal('beta1', mu=18, sd=1)
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 10000 posterior samples
trace = sample(10000)
traceplot(trace); | notebooks/updating_priors.ipynb | dolittle007/dolittle007.github.io | gpl-3.0 |
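The cells below call a from_posterior helper that this excerpt does not show. It typically kernel-density-estimates the posterior samples and wraps them in an Interpolated distribution; the sketch below follows that pattern (and assumes np and stats are already imported, as they are used elsewhere in this notebook), but treat details such as the support padding and grid size as assumptions.
from pymc3 import Interpolated
def from_posterior(param, samples):
    # Empirical prior from posterior samples via a Gaussian KDE,
    # with the support extended a bit beyond the observed sample range
    smin, smax = np.min(samples), np.max(samples)
    width = smax - smin
    x = np.linspace(smin, smax, 100)
    y = stats.gaussian_kde(samples)(x)
    x = np.concatenate([[x[0] - 3 * width], x, [x[-1] + 3 * width]])
    y = np.concatenate([[0], y, [0]])
    return Interpolated(param, x, y)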
Now we just need to generate more data and build our Bayesian model so that the prior distributions for the current iteration are the posterior distributions from the previous iteration. It is still possible to continue using the NUTS sampling method, because the Interpolated class implements the gradient calculations that Hamiltonian Monte Carlo samplers require. | traces = [trace]
for _ in range(10):
# generate more data
X1 = np.random.randn(size)
X2 = np.random.randn(size) * 0.2
Y = alpha_true + beta0_true * X1 + beta1_true * X2 + np.random.randn(size)
model = Model()
with model:
# Priors are posteriors from previous iteration
alpha = from_posterior('alpha', trace['alpha'])
beta0 = from_posterior('beta0', trace['beta0'])
beta1 = from_posterior('beta1', trace['beta1'])
# Expected value of outcome
mu = alpha + beta0 * X1 + beta1 * X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=1, observed=Y)
# draw 10000 posterior samples
trace = sample(10000)
traces.append(trace)
print('Posterior distributions after ' + str(len(traces)) + ' iterations.')
cmap = mpl.cm.autumn
for param in ['alpha', 'beta0', 'beta1']:
plt.figure(figsize=(8, 2))
for update_i, trace in enumerate(traces):
samples = trace[param]
smin, smax = np.min(samples), np.max(samples)
x = np.linspace(smin, smax, 100)
y = stats.gaussian_kde(samples)(x)
plt.plot(x, y, color=cmap(1 - update_i / len(traces)))
plt.axvline({'alpha': alpha_true, 'beta0': beta0_true, 'beta1': beta1_true}[param], c='k')
plt.ylabel('Frequency')
plt.title(param)
plt.show() | notebooks/updating_priors.ipynb | dolittle007/dolittle007.github.io | gpl-3.0 |
Classification Project
In this project you will apply what you have learned about classification and TensorFlow to complete a project from Kaggle. The challenge is to achieve a high accuracy score while trying to predict which passengers survived the Titanic shipwreck. After building your model, you will upload your predictions to Kaggle and submit the score that you get.
The Titanic Dataset
Kaggle has a dataset containing the passenger list on the Titanic. The data contains passenger features such as age, gender, ticket class, as well as whether or not they survived.
Your job is to create a binary classifier using TensorFlow to determine if a passenger survived or not. The Survived column lets you know if the person survived. Then, upload your predictions to Kaggle and submit your accuracy score at the end of this Colab, along with a brief conclusion.
To get the dataset, you'll need to accept the competition's rules by clicking the "I understand and accept" button on the competition rules page. Then upload your kaggle.json file and run the code below. | ! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && cp kaggle.json ~/.kaggle/ && echo 'Done'
! kaggle competitions download -c titanic
! ls | content/04_classification/04_classification_project/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
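One possible way to get started, sketched below as an assumed baseline rather than a required solution: load train.csv with pandas (unzip the download first if the Kaggle CLI produced a zip file), one-hot encode a few columns, and fit a small Keras binary classifier on the Survived label. The column choices and model size are arbitrary.
# Assumed baseline sketch: not the required solution
import pandas as pd
import tensorflow as tf

train_df = pd.read_csv('train.csv')
features = pd.get_dummies(train_df[['Pclass', 'Sex', 'Age', 'Fare']].fillna(0)).astype('float32')
labels = train_df['Survived'].astype('float32')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(features.shape[1],)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(features.values, labels.values, epochs=20, batch_size=32)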