markdown | code | path | repo_name | license
---|---|---|---|---|
Run the smoothing algorithm on imported data
The smoothing algorithm is applied to the datastream by calling the run_algorithm method, passing the algorithm as a parameter along with the columns (some_vals) that should be sent. Finally, the windowDuration parameter specifies the size of the time windows into which the data is segmented before applying the algorithm. Notice that when the next cell is run, the operation completes nearly instantaneously. This is due to the lazy evaluation aspects of the Spark framework. When you run the following cell to show the data, the algorithm will be applied to the whole dataset before the results are displayed on the screen. | smooth_stream = iot_stream.compute(smooth_algo, windowDuration=10)
smooth_stream.show(truncate=False) | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
Visualize data
These are two plots that show the original and smoothed data to visually check how the algorithm transformed the data. | from cerebralcortex.plotting.basic.plots import plot_timeseries
plot_timeseries(iot_stream, user_id=USER_ID)
plot_timeseries(smooth_stream, user_id=USER_ID) | jupyter_demo/import_and_analyse_data.ipynb | MD2Korg/CerebralCortex | bsd-2-clause |
We load the data in a Pandas dataframe as always and specify our column names. | # read .csv from provided dataset
csv_filename="zoo.data"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_csv(csv_filename,
names=["Animal", "Hair" , "Feathers" , "Eggs" , "Milk" , "Airborne",
"Aquatic" , "Predator" , "Toothed" , "Backbone", "Breathes" , "Venomous",
"Fins", "Legs", "Tail", "Domestic", "Catsize", "Type" ]) | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
We'll have a look at our dataset: | df.head()
df.tail()
df['Animal'].unique() | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
This data contains textual values: they are strings, not the integers or floats we would like for our classifier, so we'll use LabelEncoder to transform the data.
Next we convert the Legs column into binarized form using the get_dummies method. | #Convert animal labels to numbers
le_animals = preprocessing.LabelEncoder()
df['animals'] = le_animals.fit_transform(df.Animal)
#Get binarized Legs columns
df['Legs'] = pd.get_dummies(df.Legs)
#types = pd.get_dummies(df.Type) | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
Our data now looks like: | df.head()
df['Type'].unique() | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
Our class values range from 1 to 7, each denoting a specific animal type.
We specify our features and target variable. | features=(list(df.columns[1:]))
features
features.remove('Type')
X = df[features]
y = df['Type']
X.head() | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
As usual, we split our dataset into 60% training and 40% testing. | # split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
X_train.shape, y_train.shape | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
Finding Feature importances with forests of trees
This example shows the use of forests of trees to evaluate the importance of features on a classification task. The red bars are the feature importances of the forest, along with their inter-tree variability. | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show() | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
<hr>
Naive Bayes
The Naive Bayes algorithm is simple yet powerful. The "naive" part of the name comes from the fact that Naive Bayes takes a few shortcuts, which we will look at soon, to compute the probabilities for classification. It is flexible enough to be used easily on different types of datasets and does not have constraints such as requiring only numerical features, which makes it particularly appropriate for tasks like text classification. Naive Bayes works on a probabilistic model that is built upon a naive interpretation of Bayesian statistics. Despite the naive aspect, the method performs very well in a large number of contexts.
Bayes' theorem
Approaches to statistics are roughly divided into two schools: the frequentist approach and the Bayesian approach.
Most of us became familiar with the frequentist approach when we first encountered statistics. In the frequentist approach, the data is assumed to come from some distribution, and our aim is to determine the parameters of that distribution. However, those parameters are assumed to be fixed (perhaps incorrectly). We use our model to describe the data, and then even test to ensure the data fits our model.
Bayesian statistics instead models how people actually reason when given some data. We have a set of data and we use that data to update our model of how likely something is to occur. In Bayesian statistics, we use the data to describe the model, rather than choosing a model first and confirming it later with data, as in the frequentist approach.
Bayes' theorem computes the value of P(A|B), which denotes the probability of event A occurring given that B has already occurred. In most cases, B is an observed event, e.g. it rained yesterday, and A is a prediction, e.g. it will rain today. For data mining, B is usually what we refer to as our observed data for training our model, and A is the class to which a new data point belongs. We will see how Bayes' theorem can be used for data mining in the next section.
The equation for Bayes' theorem is given as follows:
<img src = "images/bayes.png">
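For reference, the equation shown in the image is the standard form of the theorem:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$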
To understand Naive Bayes better, we will use our zoo animal classification example, where we classify animals according to their type. For simplicity, suppose we want to find the probability that an animal with the pair of features X1 and X2 is of Type 1, i.e. a vertebrate.
A, in this context, is the event that this animal is a vertebrate. We can compute P(A), called the prior belief, directly from a training dataset by computing the percentage of animals in our dataset that are vertebrates. If our dataset contains 30 vertebrates for every 100 animals, P(A) is 30/100 or 0.3.
B, in this context, is the event that this animal has the features 'X1' and 'X2'. Likewise, we can compute P(B) by computing the percentage of animals in our dataset containing these specific features. If 10 animals in every 100 of our training dataset contain the features 'X1' and 'X2', P(B) is 10/100 or 0.1. Note that we don't care whether the animal is a vertebrate or not when computing this value.
P(B|A) is the probability that an animal contains the features 'X1' and 'X2' if it is a vertebrate.
It is also easy to compute from our training dataset. We look through our training
set for vertebrate animals and compute the percentage of them that contain the features X1 and X2. Of our 30 vertebrates, if 6 contain the features 'X1' and 'X2', then P(B|A) is calculated as 6/30 or 0.2.
From here, we use Bayes' theorem to compute P(A|B), which is the probability that an animal containing the particular features 'X1' and 'X2' is a vertebrate. Using the previous equation, we see the result is 0.6. This indicates that if an animal has the features 'X1' and 'X2', there is a 60 percent chance that it is a vertebrate.
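Plugging the numbers from this example into Bayes' theorem confirms the stated result:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} = \frac{0.2 \times 0.3}{0.1} = 0.6$$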
Note how we calculated the probability in the preceding example. We used evidence directly from our training dataset, not from some presumed distribution. In contrast, a frequentist approach to this problem would rely on us creating a distribution of the probability of the different features in animals to compute similar equations.
<hr>
Naive Bayes algorithm
As we found out, the Bayes' theorem equation can be used to compute the
probability that a given sample belongs to a given class. Hence we can use the equation as a classification algorithm.
Using C as a given class and D as a sample data-point in our dataset, we create the elements
necessary for Bayes' theorem and Naive Bayes. Naive Bayes is a
classification algorithm that utilizes Bayes' theorem to compute the probability
that a new data sample belongs to a particular class.
P(C) is the probability of a class, which is computed from the training dataset itself
(as we did with the vertebrate example). We simply compute the percentage of samples
in our training dataset that belong to the given class.
P(D) is the probability of a given data point. This can be difficult to compute, as the sample can contain many different features, but since it is constant across all classes, we don't need to compute it at all. We will see later how to work around this issue.
P(D|C) is the probability of the data sample D belonging to the class C. This could also be difficult to compute due to the many different features of D. However, this is where the naive part of the Naive Bayes algorithm comes into the picture. Naive Bayes naively assumes that the features are independent of each other. Rather than computing the full probability of P(D|C), we compute the probability of each feature D1, D2, D3, ... and so on. Then,
we multiply them together:
P(D|C) = P(D1|C) x P(D2|C) x ... x P(Dn|C), as if they were all independent of each other.
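Written as a display equation, the naive independence assumption is:

$$P(D \mid C) = \prod_{i=1}^{n} P(D_i \mid C)$$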
Each of these values is relatively easy to compute; for binary features, we simply compute the percentage of samples of the given class in our training dataset for which the feature equals 1.
In contrast, if we wanted to perform a non-naive Bayes version of this part, we would need to compute the correlations between the different features for each class. Such computation is infeasible without vast amounts of data or sufficiently rich models.
From here, the algorithm is simple. We compute P(C|D) for each possible
class, ignoring the P(D) term. Then we choose the class with the highest probability.
As the P(D) term is constant across each of the classes, ignoring it has no impact on
the final prediction.
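As a minimal sketch of this decision rule in plain Python (independent of the scikit-learn classifier used later in this notebook; the function and argument names are ours):

def naive_bayes_predict(sample, priors, feature_probs):
    # priors: {class: P(C)}; feature_probs: {class: [P(feature_i = 1 | C), ...]}
    # sample: list of binary feature values
    best_class, best_score = None, 0.0
    for c, prior in priors.items():
        score = prior
        for value, p_one in zip(sample, feature_probs[c]):
            # the listed probabilities are for a feature value of 1,
            # so the probability of a 0 is the complement
            score *= p_one if value == 1 else (1.0 - p_one)
        if best_class is None or score > best_score:
            best_class, best_score = c, score
    return best_class, best_score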
How it works
As an example, suppose we have the following (binary) feature values from a sample in our dataset: [1, 0, 0, 1].
Our training dataset contains two classes, with 25 percent (1 out of 4) of samples belonging to class 1 and 75 percent (3 out of 4) belonging to class 0. The probabilities that each feature equals 1, for each class, are as follows:
For Class 0: [0.3, 0.4, 0.4, 0.7]
For Class 1: [0.7, 0.3, 0.4, 0.9]
These values are interpreted as follows: for samples of class 0, feature 1 equals 1 in 30 percent of cases.
We can now compute the probability that this sample should belong to the class 0.
P(C=0) = 0.75 which is the probability that the class is 0.
P(D) isn't needed for the Naive Bayes algorithm. Let's take a look at the calculation:
P(D|C=0) = P(D1|C=0) x P(D2|C=0) x P(D3|C=0) x P(D4|C=0)
= 0.3 x 0.6 x 0.6 x 0.7
= 0.0756
The second and third values are 0.6 because the corresponding features in the sample were 0. The listed probabilities are for a value of 1 for each feature, so the probability of a 0 is the complement: P(0) = 1 – P(1).
Now, we can compute the probability of the data sample belonging to this class.
An important point to note is that we haven't computed P(D), so this isn't a real
probability. However, it is good enough to compare against the same value for
the probability of the class 1. Let's take a look at the calculation:
P(C=0|D) = P(C=0) P(D|C=0)
= 0.75 * 0.0756
= 0.0567
Now, we compute the same values for the class 1:
P(C=1) = 0.25
P(D) isn't needed for naive Bayes. Let's take a look at the calculation:
P(D|C=1) = P(D1|C=1) x P(D2|C=1) x P(D3|C=1) x P(D4|C=1)
= 0.7 x 0.7 x 0.6 x 0.9
= 0.2646
P(C=1|D) = P(C=1)P(D|C=1)
= 0.25 * 0.2646
= 0.06615
The data point should be classified as belonging to the class 1. You may have
guessed this while going through the equations anyway; however, you may have
been a bit surprised that the final decision was so close. After all, the probabilities
in computing P(D|C) were much, much higher for the class 1. This is because we
introduced a prior belief that most samples generally belong to the class 0.
If the classes had been equal sizes, the resulting probabilities would be much
different. Try it yourself by changing both P(C=0) and P(C=1) to 0.5 for equal class
sizes and computing the result again.
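A small, self-contained check of the arithmetic above; the numbers are the ones used in this example, and changing both priors to 0.5 reproduces the equal-class-size experiment suggested in the text:

sample = [1, 0, 0, 1]
p_one = {0: [0.3, 0.4, 0.4, 0.7], 1: [0.7, 0.3, 0.4, 0.9]}
priors = {0: 0.75, 1: 0.25}
for c in (0, 1):
    likelihood = 1.0
    for value, p in zip(sample, p_one[c]):
        # probability of a 1 as listed, probability of a 0 as its complement
        likelihood *= p if value == 1 else (1.0 - p)
    print(c, priors[c] * likelihood)  # approx. 0.0567 for class 0, 0.06615 for class 1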
Now that we understand the theory, implementing a Naive Bayes classifier is just like implementing the other classifiers and is very simple.
We follow the same steps:
Create a NaiveBayesClassifier object. (Here we use the BernoulliNB classifier)
We fit our model on our training set
We print the accuracy calculated on the test set. | t4=time()
print ("NaiveBayes")
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print ("Acurracy: ", clf_nb.score(X_test,y_test))
t5=time()
print ("time elapsed: ", t5-t4) | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
Thus the accuracy is found to be 87%, which is quite good for such limited data. However, a single train/test split might not be a reliable measure of our model's performance, so we use cross-validation:
Cross-validation for Naive Bayes | tt4=time()
print ("cross result========")
scores = cross_validation.cross_val_score(nb, X,y, cv=3)
print (scores)
print (scores.mean())
tt5=time()
print ("time elapsed: ", tt5-tt4) | Classification/Zoo Animal Classification using Naive Bayes.ipynb | Aniruddha-Tapas/Applied-Machine-Learning | mit |
Next we import and configure Pandas, a Python library to work with data. | import pandas as pd
from pandas.io.json import json_normalize
pd.set_option('max_colwidth', 1000)
pd.set_option("display.max_rows",100)
pd.set_option("display.max_columns",100) | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
Part 1: Plotting blood pressure over time
As a first REST API call it would be nice to see what studies are available in this tranSMART server.
You will see a list of all studies, their name (studyId) and what dimensions are available for this study. Remember that tranSMART previously only supported the dimensions patients, concepts and studies. Now you should see studies with many more dimensions! | studies = api.get_studies()
json_normalize(studies['studies']) | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
We choose the TRAINING study and ask for all patients in this study. You will get a list with their patient details and patient identifier. | study_id = 'TRAINING'
patients = api.get_patients(study = study_id)
json_normalize(patients['patients']) | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
Next we ask for the full list of observations for this study. This list will include one row per observation, with information from all their dimensions. The columns will have headers like <dimension name>.<field name> and numericValue or stringValue for the actual observation value. | obs = api.get_observations(study = study_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe
#DO STUFF WITH THE TRAINING STUDY HERE | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
Part 2: Combining Glowing Bear and the Python client
For the second part we will work with the Glowing Bear user interface that was developed at The Hyve, funded by IMI Translocation and BBMRI.
An API is great for extracting exactly the data you need and analyzing it, but it is harder to get a nice overview of all the data that is available and to define the exact set to extract. That is what Glowing Bear was built for.
Please go to http://glowingbear2-head.thehyve.net and create a Patient Set on the Data Selection tab (under Select patients). Once you have saved your patient set, copy the patient set identifier and paste that below. | patient_set_id = 28733 | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
Now let's return all patients for the patient set we made! | patients = api.get_patients(patientSet = patient_set_id)
json_normalize(patients['patients']) | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
And do the same for all observations for this patient set. | obs = api.get_observations(study = study_id, patientSet = patient_set_id)
obsDataframe = json_normalize(api.format_observations(obs))
obsDataframe | EXERCISE TranSMART REST API V2 (2017).ipynb | thehyve/transmart-api-training | gpl-3.0 |
Download data | %%bash
wget http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz
wget http://data.statmt.org/wmt17/translation-task/dev.tgz
!ls *.tgz | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Set up problem
The Problem in tensor2tensor is where you specify parameters like the size of your vocabulary and where to get the training data from. | %%bash
rm -rf t2t
mkdir -p t2t/ende
!pwd
%%writefile t2t/ende/problem.py
import tensorflow as tf
from tensor2tensor.data_generators import generator_utils
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.data_generators import translate
from tensor2tensor.utils import registry
#TOPDIR="gs://{}/translate_ende/".format("BUCKET_NAME")
TOPDIR="file:///content/t2t" # Make sure this matches the !pwd above
_ENDE_TRAIN_DATASETS = [
[
"{}/training-parallel-nc-v12.tgz".format(TOPDIR),
("training/news-commentary-v12.de-en.en",
"training/news-commentary-v12.de-en.de")
],
]
_ENDE_TEST_DATASETS = [
[
"{}/dev.tgz".format(TOPDIR),
("dev/newstest2013.en", "dev/newstest2013.de")
],
]
@registry.register_problem
class MyTranslateProblem(translate.TranslateProblem):
@property
def targeted_vocab_size(self):
return 2**13 # 8192
@property
def vocab_name(self):
return "vocab.english_to_german"
def generator(self, data_dir, tmp_dir, train):
symbolizer_vocab = generator_utils.get_or_generate_vocab(
data_dir, tmp_dir, self.vocab_file, self.targeted_vocab_size, sources=_ENDE_TRAIN_DATASETS)
datasets = _ENDE_TRAIN_DATASETS if train else _ENDE_TEST_DATASETS
tag = "train" if train else "dev"
data_path = translate.compile_data(tmp_dir, datasets, "wmt_ende_tok_%s" % tag)
return translate.token_generator(data_path + ".lang1", data_path + ".lang2",
symbolizer_vocab, text_encoder.EOS_ID)
@property
def input_space_id(self):
return problem.SpaceID.EN_TOK
@property
def target_space_id(self):
return problem.SpaceID.DE_TOK
%%writefile t2t/ende/__init__.py
from . import problem
%%writefile t2t/setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = [
'tensor2tensor'
]
setup(
name='ende',
version='0.1',
author = 'Google',
author_email = '[email protected]',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='My Translate Problem',
requires=[]
) | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Generate training data
Our problem (translation) requires the creation of text sequences from the training dataset. This is done using t2t-datagen and the Problem defined in the previous section. | %%bash
DATA_DIR=./t2t_data
TMP_DIR=$DATA_DIR/tmp
rm -rf $DATA_DIR $TMP_DIR
mkdir -p $DATA_DIR $TMP_DIR
# Generate data
t2t-datagen \
--t2t_usr_dir=./t2t/ende \
--problem=$PROBLEM \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Provide Cloud ML Engine access to data
Copy the data to Google Cloud Storage, and then provide access to the data | %%bash
DATA_DIR=./t2t_data
gsutil -m rm -r gs://${BUCKET}/translate_ende/
gsutil -m cp ${DATA_DIR}/${PROBLEM}* ${DATA_DIR}/vocab* gs://${BUCKET}/translate_ende/data
%%bash
PROJECT_ID=$PROJECT
AUTH_TOKEN=$(gcloud auth print-access-token)
SVC_ACCOUNT=$(curl -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer $AUTH_TOKEN" \
https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \
| python -c "import json; import sys; response = json.load(sys.stdin); \
print response['serviceAccount']")
echo "Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET"
gsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET
gsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored
gsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Train model as a Python package
To submit the training job to Cloud Machine Learning Engine, we need a Python module with a main(). We'll use the t2t-trainer that is distributed with tensor2tensor as the main module. | %%bash
wget https://raw.githubusercontent.com/tensorflow/tensor2tensor/master/tensor2tensor/bin/t2t-trainer
mv t2t-trainer t2t/ende/t2t-trainer.py
!touch t2t/__init__.py
!find t2t | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Let's test that the Python package works. Since we are running this locally, I'll try it out on a subset of the original data | %%bash
BASE=gs://${BUCKET}/translate_ende/data
OUTDIR=gs://${BUCKET}/translate_ende/subset
gsutil -m rm -r $OUTDIR
gsutil -m cp \
${BASE}/${PROBLEM}-train-0008* \
${BASE}/${PROBLEM}-dev-00000* \
${BASE}/vocab* \
$OUTDIR
%%bash
OUTDIR=./trained_model
rm -rf $OUTDIR
export PYTHONPATH=${PYTHONPATH}:${PWD}/t2t
python -m ende.t2t-trainer \
--data_dir=gs://${BUCKET}/translate_ende/subset \
--problems=$PROBLEM \
--model=transformer \
--hparams_set=transformer_base_single_gpu \
--output_dir=$OUTDIR --job-dir=$OUTDIR | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Train on Cloud ML Engine
Once we have a working Python package, training on a Cloud ML Engine GPU is straightforward | %%bash
OUTDIR=gs://${BUCKET}/translate_ende/model
JOBNAME=t2t_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--module-name=ende.t2t-trainer \
--package-path=${PWD}/t2t/ende \
--job-dir=$OUTDIR \
--runtime-version=1.4 \
-- \
--data_dir=gs://${BUCKET}/translate_ende/data \
--problems=my_translate_problem \
--model=transformer \
--hparams_set=transformer_base_single_gpu \
--output_dir=$OUTDIR | blogs/t2t/translate_ende.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Instantiating a spaghetti.Network object
Instantiate the network from a .shp file | ntw = spaghetti.Network(in_data=libpysal.examples.get_path("streets.shp")) | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
1. Allocating observations (snapping points) to a network:
A network is composed of a single topological representation of network elements (arcs and vertices) to which point patterns may be snapped. | pp_name = "crimes"
pp_shp = libpysal.examples.get_path("%s.shp" % pp_name)
ntw.snapobservations(pp_shp, pp_name, attribute=True)
ntw.pointpatterns | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Attributes for every point pattern
dist_snapped dict keyed by point id with the value as snapped distance from observation to network arc | ntw.pointpatterns[pp_name].dist_snapped[0] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
dist_to_vertex dict keyed by pointid with the value being a dict in the form
{node: distance to vertex, node: distance to vertex} | ntw.pointpatterns[pp_name].dist_to_vertex[0] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
npoints point observations in set | ntw.pointpatterns[pp_name].npoints | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
obs_to_arc dict keyed by arc with the value being a dict in the form
{pointID:(x-coord, y-coord), pointID:(x-coord, y-coord), ... } | ntw.pointpatterns[pp_name].obs_to_arc[(161, 162)] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
obs_to_vertex list of incident network vertices to snapped observation points | ntw.pointpatterns[pp_name].obs_to_vertex[0] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
points geojson like representation of the point pattern. Includes properties if read with attributes=True | ntw.pointpatterns[pp_name].points[0] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
snapped_coordinates dict keyed by pointid with the value being (x-coord, y-coord) | ntw.pointpatterns[pp_name].snapped_coordinates[0] | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
2. Counts per link
Counts per link (arc or edge) are important, but should not be precomputed since there are spatial and graph representations. | def fetch_cpl(net, pp, mean=True):
"""Create a counts per link object and find mean."""
cpl = net.count_per_link(net.pointpatterns[pp].obs_to_arc, graph=False)
if mean:
mean_cpl = sum(list(cpl.values())) / float(len(cpl.keys()))
return cpl, mean_cpl
return cpl
ntw_counts, ntw_ctmean = fetch_cpl(ntw, pp_name)
list(ntw_counts.items())[:4]
ntw_ctmean | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
3. Simulate a point pattern on the network
The number of points must supplied.
The only distribution currently supported is uniform.
Generally, this will not be called by the user since the simulation will be used for Monte Carlo permutation. | npts = ntw.pointpatterns[pp_name].npoints
npts
sim_uniform = ntw.simulate_observations(npts)
sim_uniform
print(dir(sim_uniform)) | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Extract the simulated points along the network a geopandas.GeoDataFrame | def as_gdf(pp):
pp = {idx: Point(coords) for idx, coords in pp.items()}
gdf = geopandas.GeoDataFrame.from_dict(
pp, orient="index", columns=["geometry"]
)
gdf.index.name = "id"
return gdf
sim_uniform_gdf = as_gdf(sim_uniform.points)
sim_uniform_gdf.head() | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Create geopandas.GeoDataFrame objects of the vertices and arcs | vertices_df, arcs_df = spaghetti.element_as_gdf(
ntw, vertices=ntw.vertex_coords, arcs=ntw.arcs
) | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Create geopandas.GeoDataFrame objects of the actual and snapped crime locations | crimes = spaghetti.element_as_gdf(ntw, pp_name=pp_name)
crimes_snapped = spaghetti.element_as_gdf(ntw, pp_name=pp_name, snapped=True) | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Helper plotting function | def plotter():
"""Generate a spatial plot."""
def _patch(_kws, labinfo):
"""Generate a legend patch."""
label = "%s — %s" % tuple(labinfo)
_kws.update({"lw":0, "label":label, "alpha":.5})
return matplotlib.lines.Line2D([], [], **_kws)
def _legend(handles, anchor=(1., .75)):
"""Generate a legend."""
lkws = {"fancybox":True,"framealpha":0.85, "fontsize":"xx-large"}
lkws.update({"bbox_to_anchor": anchor, "labelspacing": 2.})
lkws.update({"borderpad": 2., "handletextpad":1.5})
lkws.update({"title": "Crime locations & counts", "title_fontsize":25})
matplotlib.pyplot.legend(handles=handles, **lkws)
def carto_elements(b):
"""Add/adjust cartographic elements."""
kw = {"units":"ft", "dimension":"imperial-length", "fixed_value":1000}
b.add_artist(ScaleBar(1, **kw))
b.set(xticklabels=[], xticks=[], yticklabels=[], yticks=[]);
pkws = {"alpha":0.25}
base = arcs_df.plot(color="k", figsize=(9, 9), zorder=0, **pkws)
patches = []
gdfs = [crimes, crimes_snapped, sim_uniform_gdf]
colors, zo = ["k", "g", "b"], [1 ,2 ,3]
markers, markersizes = ["o", "X", "X"], [150, 150, 150]
labels = [["Empirical"], ["Network-snapped"], ["Simulated"]]
iterinfo = list(zip(gdfs, colors, zo, markers, markersizes, labels))
for gdf, c, z, m, ms, lab in iterinfo:
gdf.plot(ax=base, c=c, marker=m, markersize=ms, zorder=z, **pkws)
patch_args = {"marker":m, "markersize":ms/10,"c":c}, lab+[gdf.shape[0]]
patches.append(_patch(*patch_args))
_legend(patches)
carto_elements(base) | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Crimes: empirical, network-snapped, and simulated locations | plotter() | notebooks/pointpattern-attributes.ipynb | pysal/spaghetti | bsd-3-clause |
Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing the path.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Loader | def load_model(model_name):
base_url = 'http://download.tensorflow.org/models/object_detection/'
model_file = model_name + '.tar.gz'
model_dir = tf.keras.utils.get_file(
fname=model_name,
origin=base_url + model_file,
untar=True)
model_dir = pathlib.Path(model_dir)/"saved_model"
model = tf.saved_model.load(str(model_dir))
return model | research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | tombstone/models | apache-2.0 |
Check the model's input signature, it expects a batch of 3-color images of type uint8: | print(detection_model.signatures['serving_default'].inputs) | research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | tombstone/models | apache-2.0 |
And returns several outputs: | detection_model.signatures['serving_default'].output_dtypes
detection_model.signatures['serving_default'].output_shapes | research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | tombstone/models | apache-2.0 |
Add a wrapper function to call the model, and cleanup the outputs: | def run_inference_for_single_image(model, image):
image = np.asarray(image)
# The input needs to be a tensor, convert it using `tf.convert_to_tensor`.
input_tensor = tf.convert_to_tensor(image)
# The model expects a batch of images, so add an axis with `tf.newaxis`.
input_tensor = input_tensor[tf.newaxis,...]
# Run inference
model_fn = model.signatures['serving_default']
output_dict = model_fn(input_tensor)
# All outputs are batch tensors.
# Convert to numpy arrays, and take index [0] to remove the batch dimension.
# We're only interested in the first num_detections.
num_detections = int(output_dict.pop('num_detections'))
output_dict = {key:value[0, :num_detections].numpy()
for key,value in output_dict.items()}
output_dict['num_detections'] = num_detections
# detection_classes should be ints.
output_dict['detection_classes'] = output_dict['detection_classes'].astype(np.int64)
# Handle models with masks:
if 'detection_masks' in output_dict:
# Reframe the bbox mask to the image size.
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
output_dict['detection_masks'], output_dict['detection_boxes'],
image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(detection_masks_reframed > 0.5,
tf.uint8)
output_dict['detection_masks_reframed'] = detection_masks_reframed.numpy()
return output_dict | research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | tombstone/models | apache-2.0 |
Instance Segmentation | model_name = "mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28"
masking_model = load_model(model_name) | research/object_detection/colab_tutorials/object_detection_tutorial.ipynb | tombstone/models | apache-2.0 |
Initialize the NN context; it will get a SparkContext with an optimized configuration for BigDL performance. | sc = init_nncontext("NCF Example") | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Data Preparation
Download and read movielens 1M data | movielens_data = movielens.get_id_ratings("/tmp/movielens/") | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Understand the data. Each record is in the format (userid, movieid, rating_score). UserIDs range between 1 and 6040. MovieIDs range between 1 and 3952. Ratings are made on a 5-star scale (whole-star ratings only). The numbers of users and movies are recorded for later use. | min_user_id = np.min(movielens_data[:,0])
max_user_id = np.max(movielens_data[:,0])
min_movie_id = np.min(movielens_data[:,1])
max_movie_id = np.max(movielens_data[:,1])
rating_labels= np.unique(movielens_data[:,2])
print(movielens_data.shape)
print(min_user_id, max_user_id, min_movie_id, max_movie_id, rating_labels) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Transform the original data into an RDD of Samples.
We use the BigDL optimizer directly to train the model; it requires data to be provided as an RDD of Samples. A Sample is a BigDL data structure that can be constructed from two numpy arrays, the feature and the label respectively. The API is Sample.from_ndarray(feature, label).
Here, labels are transformed to be zero-based since the original labels start from 1. | def build_sample(user_id, item_id, rating):
sample = Sample.from_ndarray(np.array([user_id, item_id]), np.array([rating]))
return UserItemFeature(user_id, item_id, sample)
pairFeatureRdds = sc.parallelize(movielens_data)\
.map(lambda x: build_sample(x[0], x[1], x[2]-1))
pairFeatureRdds.take(3) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Randomly split the data into train (80%) and validation (20%) | trainPairFeatureRdds, valPairFeatureRdds = pairFeatureRdds.randomSplit([0.8, 0.2], seed= 1)
valPairFeatureRdds.cache()
train_rdd= trainPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
val_rdd= valPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
val_rdd.persist()
print(train_rdd.count())
train_rdd.take(3) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Build Model
In Analytics Zoo, it is simple to build an NCF model by calling the NeuralCF API. You need to specify the user count, item count and number of classes according to your data, then add hidden layers as needed; you can also choose whether to include matrix factorization in the network. The model can be fed into a BigDL Optimizer or an analytics-zoo NNClassifier. Please refer to the documentation for more details. In this example, we demonstrate how to use the BigDL optimizer. | ncf = NeuralCF(user_count=max_user_id,
item_count=max_movie_id,
class_num=5,
hidden_layers=[20, 10],
include_mf = False) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Compile model
Compile the model given a specific optimizer and loss, as well as metrics for evaluation. The optimizer tries to minimize the loss of the neural net with respect to its weights/biases over the training set. To create an Optimizer in BigDL, you need to specify at least these arguments: model (a neural network model), criterion (the loss function), training_rdd (the training dataset) and batch size. Please refer to (ProgrammingGuide) and (Optimizer) for more details on creating efficient optimizers. | ncf.compile(optimizer= "adam",
loss= "sparse_categorical_crossentropy",
metrics=['accuracy']) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Collect logs
You can leverage tensorboard to see the summaries. | tmp_log_dir = create_tmp_path()
ncf.set_tensorboard(tmp_log_dir, "training_ncf") | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Train the model | ncf.fit(train_rdd,
nb_epoch= 10,
batch_size= 8000,
validation_data=val_rdd) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Prediction
Zoo models make inferences on the given data using the model.predict(val_rdd) API. An RDD of results is returned. predict_class returns the predicted label. | results = ncf.predict(val_rdd)
results.take(5)
results_class = ncf.predict_class(val_rdd)
results_class.take(5) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
In Analytics Zoo, the Recommender provides 3 unique APIs to predict user-item pairs and make recommendations for users or items given candidates.
Predict for user item pairs | userItemPairPrediction = ncf.predict_user_item_pair(valPairFeatureRdds)
for result in userItemPairPrediction.take(5): print(result) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Recommend 3 items for each user given candidates in the feature RDDs | userRecs = ncf.recommend_for_user(valPairFeatureRdds, 3)
for result in userRecs.take(5): print(result) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Recommend 3 users for each item given candidates in the feature RDDs | itemRecs = ncf.recommend_for_item(valPairFeatureRdds, 3)
for result in itemRecs.take(5): print(result) | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
Evaluation
Plot the train and validation loss curves | #retrieve train and validation summary object and read the loss data into ndarray's.
train_loss = np.array(ncf.get_train_summary("Loss"))
val_loss = np.array(ncf.get_validation_summary("Loss"))
#plot the train and validation curves
# each event data is a tuple in form of (iteration_count, value, timestamp)
plt.figure(figsize = (12,6))
plt.plot(train_loss[:,0],train_loss[:,1],label='train loss')
plt.plot(val_loss[:,0],val_loss[:,1],label='val loss',color='green')
plt.scatter(val_loss[:,0],val_loss[:,1],color='green')
plt.legend();
plt.xlim(0,train_loss.shape[0]+10)
plt.grid(True)
plt.title("loss") | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
plot accuracy | plt.figure(figsize = (12,6))
top1 = np.array(ncf.get_validation_summary("Top1Accuracy"))
plt.plot(top1[:,0],top1[:,1],label='top1')
plt.title("top1 accuracy")
plt.grid(True)
plt.legend(); | apps/recommendation-ncf/ncf-explicit-feedback.ipynb | intel-analytics/analytics-zoo | apache-2.0 |
We begin by defining a model, identical to the Fitzhugh-Nagumo toy model implemented in pints. The corresponding toy model in pints has its evaluateS1() method defined, so we can compare its results against those obtained using automatic differentiation. | class AutoGradFitzhughNagumoModel(pints.ForwardModel):
def simulate(self, parameters, times):
y0 = np.array([-1, 1], dtype=float)
def rhs(y, t, p):
V, R = y
a, b, c = p
dV_dt = (V - V**3 / 3 + R) * c
dR_dt = (V - a + b * R) / -c
return np.array([dV_dt, dR_dt])
return odeint(rhs, y0, times, tuple((parameters,)))
def n_parameters(self):
return 3
def n_outputs(self):
return 2
| examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
Now we wrap an existing pints likelihood class, and use the autograd.grad function to calculate the gradient of the given log-likelihood | class AutoGradLogLikelihood(pints.ProblemLogLikelihood):
def __init__(self, likelihood):
self.likelihood = likelihood
f = lambda x: self.likelihood(x)
self.likelihood_grad = grad(f)
def __call__(self, x):
return self.likelihood(x)
def evaluateS1(self, x):
values = self.likelihood(x)
gradient = self.likelihood_grad(x)
return values, gradient
def n_parameters(self):
return self.likelihood.n_parameters()
autograd_model = AutoGradFitzhughNagumoModel()
pints_model = pints.toy.FitzhughNagumoModel()
| examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
Now create some toy data and ensure that the new model gives the same output as the toy model in pints | # Create some toy data
real_parameters = np.array(pints_model.suggested_parameters(), dtype='float64')
times = pints_model.suggested_times()
pints_values = pints_model.simulate(real_parameters, times)
autograd_values = autograd_model.simulate(real_parameters, times)
plt.figure()
plt.plot(times, autograd_values)
plt.plot(times, pints_values)
plt.show() | examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
Add some noise to the values, and then create log-likelihoods using both the new model, and the pints model | noise = 0.1
values = pints_values + np.random.normal(0, noise, pints_values.shape)
# Create an object with links to the model and time series
autograd_problem = pints.MultiOutputProblem(autograd_model, times, values)
pints_problem = pints.MultiOutputProblem(pints_model, times, values)
# Create a log-likelihood function
autograd_log_likelihood = pints.GaussianKnownSigmaLogLikelihood(autograd_problem, noise)
autograd_likelihood = AutoGradLogLikelihood(autograd_log_likelihood)
pints_log_likelihood = pints.GaussianKnownSigmaLogLikelihood(pints_problem, noise) | examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
We can calculate the gradients of both likelihood functions at the given parameters to make sure that they are the same | autograd_likelihood.evaluateS1(real_parameters)
pints_log_likelihood.evaluateS1(real_parameters) | examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
Now we'll time both functions. You can see that the function using autograd is significantly slower than the built-in evaluateS1 function for the PINTS model, which calculates the sensitivities analytically. | statement = 'autograd_likelihood.evaluateS1(real_parameters)'
setup = 'from __main__ import autograd_likelihood, real_parameters'
time_taken = min(repeat(stmt=statement, setup=setup, number=1, repeat=5))
'Elapsed time: {:.0f} ms'.format(1000. * time_taken)
statement = 'pints_log_likelihood.evaluateS1(real_parameters)'
setup = 'from __main__ import pints_log_likelihood, real_parameters'
time_taken = min(repeat(stmt=statement, setup=setup, number=1, repeat=5))
'Elapsed time: {:.0f} ms'.format(1000. * time_taken) | examples/interfaces/automatic-differentiation-using-autograd.ipynb | martinjrobins/hobo | bsd-3-clause |
Pandas is a Python package that provides fast, flexible, and expressive data structures designed for working with labeled data. These data structures can be thought of as NumPy arrays whose rows and columns are labeled or, similarly, as a spreadsheet inside Python.
Just as NumPy is a very good tool for working with numbers, vectors, linear algebra, etc., Pandas is well suited for working with:
* Tabular, heterogeneous data (floats, strings, integers, etc.)
* Time series
* The same data we can manipulate with NumPy arrays!
Why is it important to have a tool like Pandas?
<img src="imagenes/analisis.png" width=400>
The time spent (by a human) applying statistical and/or machine-learning methods is in many cases far less than the time required to obtain and process the data. Pandas tries to make data processing easier and less time-consuming, so that we can spend more time thinking about the problems we want to solve. Processing data is a task that usually involves the following steps:
Reading the data: the data can come in a variety of formats: CSV, HTML, xls, pdf, plain text, images, sheets of paper, etc.
Processing the data: data is rarely ready to use; values may be missing, recorded values may be questionable, there may be inconsistencies, etc. It may also be necessary to derive new quantities from the available data; for example, we might need population density but only have population and area.
Storing the data: whether to pass it to another piece of software, e.g. to visualize it or run a statistical analysis, or for later use by ourselves or by third parties.
Pandas fundamentally introduces 3 new data structures:
* Series
* DataFrame
* Index
Let's start with the first of these.
Series
A Pandas Series is a one-dimensional set of data (similar to an array) accompanied by an index that "labels" each element of the vector. It can be created from an array, a tuple, or a list. | conteo = pd.Series([632, 1638, 569, 115])
conteo | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
A Series always has two columns. The first column contains the indices and the second the data. In the previous example, we passed a list of data and omitted the index, so Pandas automatically created an index using a sequence of integers starting at 0 (as is usual in Python).
The idea that a Series is an array with an explicit index is not just a metaphor. In fact, from a Series it is possible to obtain the NumPy array "contained" in it. | conteo.values | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
And it is also possible to obtain the index. | conteo.index | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Indexing
It is important to note that NumPy arrays also have indices; they are just implicit and are always integers starting from 0. In contrast, the Index in Pandas is explicit and not limited to integers. We can assign labels that make sense for our data. If our data represent counts of bacteria by species, we could have something like: | bacteria = pd.Series([632, 1638, 569, 115],
index=['Firmicutes', 'Proteobacteria',
'Actinobacteria', 'Bacteroidetes'])
bacteria | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Now the Index contains strings instead of integers. These label-value pairs may remind us of a dictionary. If the analogy is valid, we should be able to use the labels to refer directly to the values contained in the Series. | bacteria['Actinobacteria'] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
We can even create a Series from a dictionary | bacteria_dict = {'Firmicutes': 632,
'Proteobacteria': 1638,
'Actinobacteria': 569,
'Bacteroidetes': 115}
pd.Series(bacteria_dict) | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Or we can do it somewhat more briefly using attributes. An attribute is the name given to a piece of data or a property in object-oriented programming. In the following example, bacteria is the object and Actinobacteria the attribute. | bacteria.Actinobacteria | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
The fact that we have explicit indices, such as the bacteria names, does not remove the possibility of accessing the data through implicit indices, as is common with lists and arrays. | bacteria[2] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
If you had a Series whose indices were integers, what would happen when indexing it?
una_series[1]
Would we get the second element of the series, or the element whose explicit index is 1? And what if the number 1 were not contained in the series' index?
Later we will see what solution Pandas offers to avoid confusion. In the meantime, perhaps we can think of a solution and discuss it.
<br>
<br>
<br>
<br>
Just as with NumPy arrays, we can use booleans to index a Series. This way we can answer, fairly intuitively, which bacteria had counts above 1000 (is it really intuitive?). | bacteria[bacteria > 1000] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Or we might need to find the subset of bacteria whose names end in "bacteria": | bacteria[[nombre.endswith('bacteria') for nombre in bacteria.index]] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Slicing
Slicing is possible even when the indices are strings. | bacteria[:'Actinobacteria']
bacteria[:3] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
When slicing with an implicit index, the last index is NOT included. This is what we expect from lists, tuples, arrays, etc. In contrast, when slicing with an explicit index, the last index IS included!
It is also possible to index using a list. | bacteria[['Actinobacteria', 'Proteobacteria']] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Indexers: loc and iloc
The presence of implicit and explicit indices can be a source of great confusion when using Pandas. Let's see what happens when we have a Series whose explicit indices are integers. | datos = pd.Series(['x', 'y', 'z'], index=range(10, 13))
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Pandas will use the explicit index when indexing | datos[10] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
But the implicit one when taking slices! | datos[0:2] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Pandas provides two methods for indexing. The first of them is loc, which performs indexing/slicing operations ALWAYS using the explicit index. | datos.loc[10]
datos.loc[10:11] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
The other method is iloc, which uses the implicit index. | datos.iloc[0]
datos.iloc[0:2] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Following the Zen of Python, which says "explicit is better than implicit", the general recommendation is to use loc and iloc. This makes the intention of the code explicit, which contributes to smoother reading and reduces the chance of errors.
Universal functions
One of the most valuable features of NumPy is the ability to vectorize code, avoiding explicit loops, when performing operations such as sums, multiplications, logarithms, etc. Pandas inherits this capability from NumPy and adapts it in two ways:
For unary operations, applying universal functions preserves the index; that is, the operations are applied only to the values and not to the labels. | np.log(bacteria) | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
For binary operations, the operations are performed on the aligned indices
This makes it easier to combine data from different sources, something that may not be so simple with plain NumPy.
To illustrate this behaviour, we will create a new Series from a dictionary, but this time specifying the indices | bacteria2 = pd.Series(bacteria_dict,
index=['Cyanobacteria',
'Firmicutes',
'Actinobacteria'])
bacteria2 | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
Note two details of this example. The order in which the elements appear is the same as the order given by the index argument; compare this with the earlier case where we created a Series from a dictionary without specifying the index.
The other detail is that we passed a label for a value that does not exist in the dictionary (and we omitted two values that do exist). As a result, Pandas did not raise an error; instead, it interpreted this as missing data. Missing data is indicated with a special kind of float, NaN (Not A Number).
Indices are a convenience for manipulating data by referring to names that may be more familiar or convenient to us (compared with remembering the position of the data). In addition, indices are used to align data when operating with more than one Series; for example, we might want to obtain the total bacteria counts across two datasets. | bacteria + bacteria2 | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
The result is a Series whose index is the union of the original indices. Pandas adds only the values for which the indices of both Series match, and it propagates the missing values (NaN).
What happens if we try to add two NumPy arrays of different lengths?
<br>
<br>
<br>
<br>
It is possible to control what happens with missing data using the .add() method (instead of the + operator). With this method we can specify another value to use in place of NaN. | bacteria.add(bacteria2, fill_value=0) | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
A variant is to perform the operation first and then replace the NaN values by any other value. | (bacteria + bacteria2).fillna(0) | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
DataFrame
When analysing data it is common to have to work with multivariate data. For those cases it is useful to have something like a Series where each index corresponds to more than one column of values. That object is called a DataFrame.
A DataFrame is a tabular data structure that can be thought of as a collection of Series sharing the same index. It is also possible to think of a DataFrame as a generalization of a NumPy array, or a generalization of a dictionary. | datos = pd.DataFrame({'conteo':[632, 1638, 569, 115, 433, 1130, 754, 555],
'phylum':['Firmicutes', 'Proteobacteria', 'Actinobacteria', 'Bacteroidetes'] * 2,
'paciente':np.repeat([1, 2], 4)})
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
The first thing we notice is that Jupyter dresses up the DataFrame, displaying it as a table with some aesthetic improvements.
We can also see that, unlike a NumPy array, a DataFrame can hold data of different types (integers and strings in this case). In addition, the columns are sorted alphabetically; we can change the order by indexing the DataFrame in the preferred order. | datos[['paciente', 'phylum', 'conteo']] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
DataFrames have two Index objects:
One corresponding to the rows, just as we saw with Series
One corresponding to the columns | datos.columns | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
It is possible to access the values of the columns in a way similar to a Series or a dictionary. | datos['conteo'] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
We can also do it by attribute. | datos.conteo | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
This syntax does not work in every case. Some cases where it will fail are when the column name contains spaces or when it clashes with an existing DataFrame method; for example, it would not be unusual to name a column all, cov, index, or mean.
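A quick, hypothetical illustration of that caveat (df_demo is not part of this notebook's data, and it assumes pandas is imported as pd):

df_demo = pd.DataFrame({'mean': [1, 2, 3], 'col name': [4, 5, 6]})
df_demo['mean']      # bracket notation returns the column
df_demo.mean         # attribute notation returns the DataFrame.mean method instead
df_demo['col name']  # bracket notation is the only option when the name contains a space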
A possible source of confusion is that the syntax we have just seen returns rows for a Series but columns for a DataFrame. If we want to access the rows of a DataFrame, we can do so using the loc attribute: | datos.loc[3] | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
What happens if we try to access a row using the syntax datos[3]?
<br>
<br>
<br>
<br>
The Series obtained when indexing a DataFrame is a view of the DataFrame and NOT a copy, so care must be taken when manipulating it; this is why Pandas gives us a warning. | cont = datos['conteo']
cont
cont[5] = 0
cont
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
If we want to modify a Series that comes from a DataFrame, it may be a good idea to make a copy first. | cont = datos['conteo'].copy()
cont[5] = 1000
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
It is possible to add columns to a DataFrame through an assignment. | datos['año'] = 2013
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
We can add a Series as a new column of a DataFrame; the result will depend on the indices of both objects. | tratamiento = pd.Series([0] * 4 + [1] * 4)
tratamiento
datos['tratamiento'] = tratamiento
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |
What happens if we try to add a new column from a list whose length does not match that of the DataFrame? And what if instead of a list it is a Series?
<br>
<br>
<br>
<br> | datos['mes'] = ['enero'] * len(datos)
datos | 03_Manipulación_de_datos_y_Pandas.ipynb | PrACiDa/intro_ciencia_de_datos | gpl-3.0 |