markdown | code | path | repo_name | license |
---|---|---|---|---|
Note: If you have many numeric features (hundreds or more), it is more efficient to concatenate them first and use a single normalization layer.
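A minimal sketch of that idea (assuming the tf.keras Input objects and the preprocessing.Normalization layer used elsewhere in this tutorial; the column list, the way features are stacked from train_ds, and the assumption that each feature tensor has shape (batch,) are all illustrative):
```python
# Hypothetical subset of numeric columns; in practice there could be hundreds.
numeric_headers = ['PhotoAmt', 'Fee']

# One Input per numeric column, concatenated into a single tensor.
numeric_inputs = [tf.keras.Input(shape=(1,), name=h) for h in numeric_headers]
all_numeric = tf.keras.layers.concatenate(numeric_inputs)

# A single Normalization layer adapted on all numeric columns at once.
normalizer = preprocessing.Normalization()
stacked_ds = train_ds.map(lambda x, y: tf.stack([x[h] for h in numeric_headers], axis=-1))
normalizer.adapt(stacked_ds)

encoded_numerics = normalizer(all_numeric)
```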
Categorical columns
In this dataset, Type is represented as a string (e.g. 'Dog' or 'Cat'). You cannot feed strings directly to a model. The preprocessing layer takes care of representing strings as one-hot vectors.
The get_category_encoding_layer function returns a layer which maps values from a vocabulary to integer indices and one-hot encodes the features. | def get_category_encoding_layer(name, dataset, dtype, max_tokens=None):
# Create a StringLookup layer which will turn strings into integer indices
if dtype == 'string':
index = preprocessing.StringLookup(max_tokens=max_tokens)
else:
index = preprocessing.IntegerLookup(max_tokens=max_tokens)
# Prepare a Dataset that only yields our feature
feature_ds = dataset.map(lambda x, y: x[name])
# Learn the set of possible values and assign them a fixed integer index.
index.adapt(feature_ds)
# Create a CategoryEncoding layer for our integer indices.
encoder = preprocessing.CategoryEncoding(num_tokens=index.vocabulary_size())
# Apply one-hot encoding to our indices. The lambda function captures the
# layers so we can use them, or include them in the functional model later.
return lambda feature: encoder(index(feature))
type_col = train_features['Type']
layer = get_category_encoding_layer('Type', train_ds, 'string')
layer(type_col) | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Often, you don't want to feed a number directly into the model, but instead use a one-hot encoding of those inputs. Consider raw data that represents a pet's age. | type_col = train_features['Age']
category_encoding_layer = get_category_encoding_layer('Age', train_ds,
'int64', 5)
category_encoding_layer(type_col) | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Choose which columns to use
You have seen how to use several types of preprocessing layers. Now you will use them to train a model. You will use the Keras functional API to build the model. The Keras functional API is a more flexible way to create models than the tf.keras.Sequential API.
The goal of this tutorial is to show you the complete code (e.g. the mechanics) needed to work with preprocessing layers. A few columns have been chosen arbitrarily to train our model.
Key point: If your goal is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include and how they should be represented.
Earlier, you used a small batch size to demonstrate the input pipeline. Now let's create a new input pipeline with a larger batch size. | batch_size = 256
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
all_inputs = []
encoded_features = []
# Numeric features.
for header in ['PhotoAmt', 'Fee']:
numeric_col = tf.keras.Input(shape=(1,), name=header)
normalization_layer = get_normalization_layer(header, train_ds)
encoded_numeric_col = normalization_layer(numeric_col)
all_inputs.append(numeric_col)
encoded_features.append(encoded_numeric_col)
# Categorical features encoded as integers.
age_col = tf.keras.Input(shape=(1,), name='Age', dtype='int64')
encoding_layer = get_category_encoding_layer('Age', train_ds, dtype='int64',
max_tokens=5)
encoded_age_col = encoding_layer(age_col)
all_inputs.append(age_col)
encoded_features.append(encoded_age_col)
# Categorical features encoded as string.
categorical_cols = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',
'FurLength', 'Vaccinated', 'Sterilized', 'Health', 'Breed1']
for header in categorical_cols:
categorical_col = tf.keras.Input(shape=(1,), name=header, dtype='string')
encoding_layer = get_category_encoding_layer(header, train_ds, dtype='string',
max_tokens=5)
encoded_categorical_col = encoding_layer(categorical_col)
all_inputs.append(categorical_col)
encoded_features.append(encoded_categorical_col)
| site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Create, compile, and train the model
Next, you can create the end-to-end model. | all_features = tf.keras.layers.concatenate(encoded_features)
x = tf.keras.layers.Dense(32, activation="relu")(all_features)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(all_inputs, output)
model.compile(optimizer='adam',
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=["accuracy"]) | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Let's visualize the connectivity graph: | # rankdir='LR' is used to make the graph horizontal.
tf.keras.utils.plot_model(model, show_shapes=True, rankdir="LR")
| site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Train the model. | model.fit(train_ds, epochs=10, validation_data=val_ds)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy) | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Inference on new data
Key point: The model you have developed can now classify a row from a CSV file directly, because the preprocessing code is included inside the model itself.
You can now save and reload the Keras model. Follow this tutorial to learn more about TensorFlow models. | model.save('my_pet_classifier')
reloaded_model = tf.keras.models.load_model('my_pet_classifier') | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
To get a prediction for a new sample, you can simply call model.predict(). There are just two things you need to do:
Wrap scalars into a list so as to have a batch dimension (models only process batches of data, not single samples)
Call convert_to_tensor on each feature | sample = {
'Type': 'Cat',
'Age': 3,
'Breed1': 'Tabby',
'Gender': 'Male',
'Color1': 'Black',
'Color2': 'White',
'MaturitySize': 'Small',
'FurLength': 'Short',
'Vaccinated': 'No',
'Sterilized': 'No',
'Health': 'Healthy',
'Fee': 100,
'PhotoAmt': 2,
}
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample.items()}
predictions = reloaded_model.predict(input_dict)
prob = tf.nn.sigmoid(predictions[0])
print(
"This particular pet had a %.1f percent probability "
"of getting adopted." % (100 * prob)
) | site/zh-cn/tutorials/structured_data/preprocessing_layers.ipynb | tensorflow/docs-l10n | apache-2.0 |
Let's say we only have 4 words in our vocabulary: "the", "fight", "wind", and "like".
Maybe each word is associated with numbers.
| Word | Number |
| ------ |:------:|
| 'the' | 17 |
| 'fight' | 22 |
| 'wind' | 35 |
| 'like' | 51 | | embeddings_0d = tf.constant([17,22,35,51])
| ch11_seq2seq/Concept02_embedding_lookup.ipynb | BinRoot/TensorFlow-Book | mit |
Or maybe, they're associated with one-hot vectors.
| Word | Vector |
| ------ |:------:|
| 'the ' | [1, 0, 0, 0] |
| 'fight' | [0, 1, 0, 0] |
| 'wind' | [0, 0, 1, 0] |
| 'like' | [0, 0, 0, 1] | | embeddings_4d = tf.constant([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]]) | ch11_seq2seq/Concept02_embedding_lookup.ipynb | BinRoot/TensorFlow-Book | mit |
This may sound over the top, but you can have any tensor you want, not just numbers or vectors.
| Word | Tensor |
| ------ |:------:|
| 'the ' | [[1, 0] , [0, 0]] |
| 'fight' | [[0, 1] , [0, 0]] |
| 'wind' | [[0, 0] , [1, 0]] |
| 'like' | [[0, 0] , [0, 1]] | | embeddings_2x2d = tf.constant([[[1, 0], [0, 0]],
[[0, 1], [0, 0]],
[[0, 0], [1, 0]],
[[0, 0], [0, 1]]]) | ch11_seq2seq/Concept02_embedding_lookup.ipynb | BinRoot/TensorFlow-Book | mit |
Let's say we want to find the embeddings for the sentence, "fight the wind". | ids = tf.constant([1, 0, 2]) | ch11_seq2seq/Concept02_embedding_lookup.ipynb | BinRoot/TensorFlow-Book | mit |
We can use the embedding_lookup function provided by TensorFlow: | lookup_0d = sess.run(tf.nn.embedding_lookup(embeddings_0d, ids))
print(lookup_0d)
lookup_4d = sess.run(tf.nn.embedding_lookup(embeddings_4d, ids))
print(lookup_4d)
lookup_2x2d = sess.run(tf.nn.embedding_lookup(embeddings_2x2d, ids))
print(lookup_2x2d) | ch11_seq2seq/Concept02_embedding_lookup.ipynb | BinRoot/TensorFlow-Book | mit |
Load data | % ll dadiExercises/
% cat dadiExercises/ERY.FOLDED.sfs.dadi_format | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
I have turned the 1D folded SFS's from realSFS into $\delta$a$\delta$i format by hand according to the description in section 3.1 of the manual.
Note that the last line, indicating the mask, has length 37, but the folded spectrum has length 19. Dadi wants to mask counts from invariable sites. For an unfolded spectrum, i. e. polarised with respect to an inferred ancestral allele at each site, the first and the last count classes would correspond to invariable sites. In a folded spectrum, i. e. with counts of the minor allele at each site, the last count class corresponds to SNP's with minor sample allele frequency of $n/2$ (with even sample size). | fs_ery = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
%pdoc dadi.Spectrum.from_file
fs_ery
ns = fs_ery.sample_sizes
ns
fs_ery.pop_ids = ['ery'] # must be an array, otherwise leads to error later on
# the number of segregating sites in the spectrum
fs_ery.sum() | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
According to the number of segregating sites, this spectrum should have good power to distinguish between alternative demographic models (see Adams2004). However, the noise in the data is extreme, as can be seen below, which might compromise this power and maybe even lead to false inferences.
Plot the data | %pdoc dadi.Plotting.plot_1d_fs
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
dadi.Plotting.plot_1d_fs(fs_ery, show=False) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Built-in 1D models | # show modules within dadi
dir(dadi)
dir(dadi.Demographics1D)
# show the source of the 'Demographics1D' method
%psource dadi.Demographics1D | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
standard neutral model | # create link to method
func = dadi.Demographics1D.snm
# make the extrapolating version of the demographic model function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# setting the smallest grid size slightly larger than the largest population sample size
pts_l = [40, 50, 60] | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The snm function does not take parameters to optimize, so I can directly get the expected model. The snm function does not take a fold argument, so I am going to calculate an unfolded expected spectrum and then fold it. | # calculate unfolded AFS under standard neutral model (up to a scaling factor theta)
model = func_ex(0, ns, pts_l)
model
dadi.Plotting.plot_1d_fs(model.fold()[:19], show=False) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
What's happening in the 18th count class? | # get the source of the fold method, which is part of the Spectrum object
%psource dadi.Spectrum.fold
# get the docstring of the Spectrum object
%pdoc dadi.Spectrum
# retrieve the spectrum array from the Spectrum object
model.data | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
I am going to fold manually now. | # reverse spectrum and add to itself
model_fold = model.data + model.data[::-1]
model_fold
# discard all count classes >n/2
model_fold = model_fold[:19]
model_fold | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
When the sample size is even, the highest sample frequency class corresponds to just one unfolded class (18). This class has been added to itself, so those SNP's are counted twice at the moment. I need to divide this class by 2 to get the correct count for this folded class. | # divide highest sample frequency class by 2
model_fold[18] = model_fold[18]/2.0
model_fold
# create dadi Spectrum object from array, need to specify custom mask
model_folded = dadi.Spectrum(data=model_fold, mask_corners=False, mask= [1] + [0]*18)
model_folded
dadi.Plotting.plot_1d_fs(model_folded) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The folded expected spectrum is correct. Also, see figure 4.5 in Wakeley2009.
How to fold an unfolded spectrum | # fold the unfolded model
model_folded = model.fold()
#model_folded = model_folded[:(ns[0]+1)]
model_folded.pop_ids = ['ery'] # be sure to give an array, not a scalar string
model_folded
ll_model_folded = dadi.Inference.ll_multinom(model_folded, fs_ery)
print 'The log composite likelihood of the observed ery spectrum given a standard neutral model is {0:.3f}.'.format(ll_model_folded) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
$\theta$ and implied $N_{ref}$ | theta = dadi.Inference.optimal_sfs_scaling(model_folded, fs_ery)
print 'The optimal value of theta is {0:.3f}.'.format(theta) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This theta estimate is a little bit higher than what I estimated with curve fitting in Fist_Steps_with_dadi.ipynb, which was 10198.849.
What effective ancestral population size would that imply?
According to section 4.4 in the dadi manual:
$$
\theta = 4 N_{ref} \mu_{L} \qquad \text{L: sequence length}
$$
Let's assume the mutation rate per nucleotide site per generation is $3\times 10^{-9}$ (see e. g. Liu2017). Then
$$
\mu_{L} = \mu_{site} \times L
$$
So
$$
\theta = 4 N_{ref} \mu_{site} \times L
$$
and
$$
N_{ref} = \frac{\theta}{4 \mu_{site} L}
$$ | mu = 3e-9
L = fs_ery.data.sum() # this sums over all entries in the spectrum, including masked ones, i. e. also contains invariable sites
print "The total sequence length is " + str(L)
N_ref = theta/L/mu/4
print "The effective ancestral population size (in number of diploid individuals) implied by this theta is: {0}.".format(int(N_ref)) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This effective population size is consistent with those reported in Lynch2016 for other insect species.
Begin Digression: | x = pylab.arange(0, 100)
y = 0.5**(x)
pylab.plot(x, y)
x[:10] * y[:10]
sum(x * y) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
End Digression | model_folded * theta
pylab.semilogy(model_folded * theta, "bo-", label='SNM')
pylab.plot(fs_ery, "ro-", label='ery')
pylab.legend()
%psource dadi.Plotting.plot_1d_comp_Poisson
# compare model prediction and data visually with dadi function
dadi.Plotting.plot_1d_comp_multinom(model_folded[:19], fs_ery[:19], residual='linear') | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The lower plot is for the scaled Poisson residuals.
$$
residuals = (model - data)/\sqrt{model}
$$
The model is the expected counts in each frequency class. If these counts are Poisson distributed, then their variance is equal to their expectation. The differences between model and data are therefore scaled by the expected standard deviation of the model counts.
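As a small sanity check of that formula, the scaled residuals can be computed by hand (a sketch, reusing the optimally scaled model from above and the same [:19] slices as in the plotting call; pylab is assumed to provide sqrt):
```python
# scaled Poisson residuals by hand: (model - data) / sqrt(model)
m = (model_folded * theta)[:19]   # expected counts per frequency class
d = fs_ery[:19]                   # observed counts
resid = (m - d) / pylab.sqrt(m)
print resid
```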
The observed counts deviate by up to 30 standard deviations from the model!
What could be done about this?
The greatest deviations are seen for the first two frequency classes, the ones that should provide the greatest amount of information (Fu1994) for theta and therefore probably also other parameters. Toni has suggested that the doubleton class is inflated due to "miscalling" heterozygotes as homozygotes. When they contain a singleton they will be "called" as homozygote and therefore contribute to the doubleton count. This is aggravated by the fact that the sequenced individuals are all male which only possess one X chromosome. The X chromosome is the fourth largest of the 9 chromosomes of these grasshoppers (8 autosomes + X) (see Gosalvez1988, fig. 2). That is, about 1/9th of the sequenced RAD loci are haploid but ANGSD assumes all loci to be diploid. The genotype likelihoods it calculates are all referring to diploid genotypes.
I think one potential reason for the extreme deviations is that the genotype likelihoods are generally biased toward homozygote genotypes (i. e. also for autosomal loci) due to PCR duplicates (see eq. 1 in Nielsen2012). So, one potential improvement would be to remove PCR duplicates.
Another potential improvement could be found by subsampling 8/9th to 8/10th of the contigs in the SAF files and estimating an SFS from these. Given enough subsamples, one should eventually be found that maximally excludes loci from the X chromosome. This subsample is expected to produce the least squared deviations from an expected SFS under the standard neutral model. However, one could argue that this attempt to exclude problematic loci could also inadvertently remove loci that strongly deviate from neutral expectations due to non-neutral evolution, again reducing power to detect deviations from the standard neutral model. I think one could also just apply the selection criterion that the second MAF class be lower than the first and save all contig subsamples and SFS's that fulfill that criterion, since that should be true for all demographic scenarios.
Exponential growth
Creating a folded spectrum exactly how dadi expects it
As seen above in the folded model spectrum, dadi just masks out entries that are not sensical in a folded spectrum, but keeps the length of the spectrum the same as the unfolded. That way the sample size (i. e. number of chromosomes) is determined correctly. Let's create a correct folded spectrum object for ery. | fs_ery
# make copy of spectrum array
data_abc = fs_ery.data.copy()
# resize the array to the unfolded length
data_abc.resize((37,))
data_abc
fs_ery_ext = dadi.Spectrum(data_abc)
fs_ery_ext
fs_ery_ext.fold()
fs_ery_ext = fs_ery_ext.fold()
fs_ery_ext.pop_ids = ['ery']
fs_ery_ext
fs_ery_ext.sample_sizes | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Now, the reported sample size is correct and we have a Spectrum object that dadi can handle correctly.
To fold or not to fold by ANGSD
Does estimating an unfolded spectrum with ANGSD and then folding yield a sensible folded SFS when the sites are not polarised with respect to an ancestral allele but with respect to the reference allele? Matteo Fumagalli thinks that this is sensible.
Load SFS folded by ANGSD | % cat dadiExercises/ERY.FOLDED.sfs.dadi_format
# load the spectrum that was created from folded SAF's
fs_ery_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/ERY.FOLDED.sfs.dadi_format')
fs_ery_folded_by_Angsd
# extract unmasked entries of the SFS
m = fs_ery_folded_by_Angsd.mask
fs_ery_folded_by_Angsd[m == False] | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Load unfolded SFS | % ll ../ANGSD/SFS/ERY/ | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
I have copied the unfolded SFS into the current directory. | % ll
% cat ERY.unfolded.sfs
# load unfolded spectrum
fs_ery_unfolded_by_ANGSD = dadi.Spectrum.from_file('ERY.unfolded.sfs')
fs_ery_unfolded_by_ANGSD
# fold unfolded spectrum
fs_ery_unfolded_by_Angsd_folded = fs_ery_unfolded_by_ANGSD.fold()
fs_ery_unfolded_by_Angsd_folded
# plot the two spectra
pylab.rcParams['figure.figsize'] = [12.0, 10.0]
pylab.plot(fs_ery_folded_by_Angsd, 'ro-', label='folded by ANGSD')
pylab.plot(fs_ery_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')
pylab.legend()
pylab.savefig('ery_fold_comp.png')
%psource dadi.Plotting.plot_1d_comp_Poisson
dadi.Plotting.plot_1d_comp_Poisson(fs_ery_folded_by_Angsd[:19], fs_ery_unfolded_by_Angsd_folded[:19], \
residual='linear') | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The sizes of the residuals (scaled by the Poisson standard deviations) indicate that the two versions of the folded SFS of ery are significantly different.
Now, what does the parallelus data say? | % ll dadiExercises/
% cat dadiExercises/PAR.FOLDED.sfs.dadi_format
# load the spectrum folded by ANGSD
fs_par_folded_by_Angsd = dadi.Spectrum.from_file('dadiExercises/PAR.FOLDED.sfs.dadi_format')
fs_par_folded_by_Angsd
% cat PAR.unfolded.sfs
# load spectrum that has been created from unfolded SAF's
fs_par_unfolded_by_Angsd = dadi.Spectrum.from_file('PAR.unfolded.sfs')
fs_par_unfolded_by_Angsd
fs_par_unfolded_by_Angsd_folded = fs_par_unfolded_by_Angsd.fold()
fs_par_unfolded_by_Angsd_folded
dadi.Plotting.plot_1d_comp_Poisson(fs_par_folded_by_Angsd[:19], fs_par_unfolded_by_Angsd_folded[:19], \
residual='linear')
#pylab.subplot(2,1,1)
pylab.plot(fs_par_folded_by_Angsd[:19], 'ro-', label='folded by ANGSD')
#pylab.subplot(2,1,2)
pylab.plot(fs_par_unfolded_by_Angsd_folded, 'bo-', label='folded by DADI')
pylab.legend()
pylab.savefig('par_fold_comp.png') | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The unfolded spectrum folded by dadi seems to be a bit better behaved than the one folded by ANGSD. I really wonder whether folding in ANGSD is needed.
The folded 2D spectrum from ANGSD is a 19 x 19 matrix. This is not a format that dadi can understand. | %psource dadi.Spectrum.from_data_dict | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
See this thread on the dadi forum.
Exponential growth model | # show the source of the 'Demographics1D' method
%psource dadi.Demographics1D.growth
# create link to function that specifies a simple growth or decline model
func = dadi.Demographics1D.growth
# create extrapolating version of the function
func_ex = dadi.Numerics.make_extrap_log_func(func)
# set lower and upper bounds to nu and T
upper_bound = [100, 3]
lower_bound = [1e-2, 0]
# set starting value
p0 = [1, 1] # corresponds to constant population size
%pdoc dadi.Misc.perturb_params
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
p0
%psource dadi.Inference.optimize_log
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Parallelised $\delta$a$\delta$i
I need to run the simulation with different starting values to check convergence.
I would like to do these runs in parallel. I have 12 cores available on huluvu. | from ipyparallel import Client
cl = Client()
cl.ids | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
I now have connections to 11 engines. I started the engines with ipcluster start -n 11 & in the terminal. | # create load balanced view of the engines
lbview = cl.load_balanced_view()
lbview.block
# create direct view of all engines
dview = cl[:] | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
import variables to namespace of engines | # set starting value for all engines
dview['p0'] = [1, 1]
dview['p0']
# set lower and upper bounds to nu and T for all engines
dview['upper_bound'] = [100, 3]
dview['lower_bound'] = [1e-2, 0]
dview['fs_ery'] = fs_ery
cl[0]['fs_ery']
dview['func_ex'] = func_ex
dview['pts_l'] = pts_l | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
import dadi on all engines | with dview.sync_imports():
import sys
dview.execute('sys.path.insert(0, \'/home/claudius/Downloads/dadi\')')
cl[0]['sys.path']
with dview.sync_imports():
import dadi | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
create parallel function to run dadi | @lbview.parallel(block=True)
def run_dadi(x): # for the function to be called with map, it needs to have one input variable
# perturb starting values by up to a factor of 2
p1 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p1, data=fs_ery, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt
run_dadi.map(range(20))
popt
# set starting value
p0 = [1, 1]
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
def exp_growth(x):
p0 = [1, 1]
# perturb starting values by up to a factor of 2
p0 = dadi.Misc.perturb_params(p0, fold=1, upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation of parameters
popt = dadi.Inference.optimize_log(p0=p0, data=fs_ery_ext, model_func=func_ex, pts=pts_l, \
lower_bound=lower_bound, upper_bound=upper_bound, \
verbose=0, maxiter=100, full_output=False)
return popt
popt = map(exp_growth, range(10))
# this will run a few minutes
# popt
import ipyparallel as ipp
c = ipp.Client()
c.ids
%%time
dview = c[:]
popt = dview.map_sync(exp_growth, range(10)) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
|
Unfortunately, parallelisation is not as straightforward as it should be. | popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Except for the last iteration, the two parameter estimates seem to have converged. | ns = fs_ery_ext.sample_sizes
ns
print popt[0]
print popt[9] | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
What is the log likelihood of the model given these two different parameter sets? | model_one = func_ex(popt[0], ns, pts_l)
ll_model_one = dadi.Inference.ll_multinom(model_one, fs_ery_ext)
ll_model_one
model_two = func_ex(popt[9], ns, pts_l)
ll_model_two = dadi.Inference.ll_multinom(model_two, fs_ery_ext)
ll_model_two | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
The lower log-likelihood for the last set of parameters inferred indicates that the optimisation got trapped in a local minimum in the last run of the optimisation.
What the majority of the parameter sets seem to indicate is that at about time $0.007 \times 2 N_{ref}$ generations in the past the ancestral population started to shrink exponentially, reaching a population size of about $0.14 \times N_{ref}$ at present. | print 'The model suggests that exponential decline in population size started {0:.0f} generations ago.'.format(popt[0][1] * 2 * N_ref) | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Two epoch model | dir(dadi.Demographics1D)
%psource dadi.Demographics1D.two_epoch | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This model specifies a stepwise change in population size some time ago. It assumes that the population size has stayed constant since the change. | func = dadi.Demographics1D.two_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [10, 3]
lower_bound = [1e-3, 0]
pts_l = [40, 50, 60]
def stepwise_pop_change(x):
# set initial values
p0 = [1, 1]
# perturb initial parameter values randomly by up to a factor of 2**fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
stepwise_pop_change(1)
stepwise_pop_change(1)
popt = map(stepwise_pop_change, range(10))
popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This model does not converge on a set of parameter values. | nu = [i[0] for i in popt]
nu
T = [i[1] for i in popt]
T
pylab.rcParams['font.size'] = 14.0
pylab.loglog(nu, T, 'bo')
pylab.xlabel(r'$\nu$')
pylab.ylabel('T') | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
Both parameters seem to be correlated. With the available data, it may not be possible to distinguish between a moderate reduction in population size a long time ago (top right in the above figure) and a drastic reduction in population size a short time ago (bottom left in the above figure).
Bottleneck then exponential growth | %psource dadi.Demographics1D | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This model has three parameters. $\nu_B$ is the ratio of the population size (with respect to the ancestral population size $N_{ref}$) after the first stepwise change at time T in the past. The population is then assumed to undergo exponential growth/decline to a ratio of population size $\nu_F$ at present. | func = dadi.Demographics1D.bottlegrowth
func_ex = dadi.Numerics.make_extrap_log_func(func)
upper_bound = [100, 100, 3]
lower_bound = [1e-3, 1e-3, 0]
pts_l = [40, 50, 60]
def bottleneck_growth(x):
p0 = [1, 1, 1] # corresponds to constant population size
# perturb initial parameter values randomly by up to a factor of 2**fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
%%time
popt = map(bottleneck_growth, range(10))
popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
There is no convergence of parameter estimates. The parameter combinations stand for vastly different demographic scenarios. Most seem to suggest a population increase (up to 100 times the ancestral population size), followed by an exponential decrease to about the ancestral population size.
Three epochs | func = dadi.Demographics1D.three_epoch
func_ex = dadi.Numerics.make_extrap_log_func(func)
%psource dadi.Demographics1D.three_epoch | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
This model tries to estimate four parameters. The population is expected to undergo a stepwise population size change (bottleneck) at time TF + TB. At time TF it is expected to recover immediately to the current population size. | upper_bound = [100, 100, 3, 3]
lower_bound = [1e-3, 1e-3, 0, 0]
pts_l = [40, 50, 60]
def opt_three_epochs(x):
p0 = [1, 1, 1, 1] # corresponds to constant population size
# perturb initial parameter values randomly by up to a factor of 2**fold
p0 = dadi.Misc.perturb_params(p0, fold=1.5, \
upper_bound=upper_bound, lower_bound=lower_bound)
# run optimisation
popt = dadi.Inference.optimize_log(p0, fs_ery_ext, func_ex, pts_l, \
upper_bound=upper_bound, lower_bound=lower_bound,
verbose=0, maxiter=10)
return popt
%%time
popt = map(opt_three_epochs, range(10))
popt | Data_analysis/SNP-indel-calling/dadi/1D_models.ipynb | claudiuskerth/PhDthesis | mit |
A bit of basic pandas
Let's first start by reading in the CSV file as a pandas.DataFrame(). | import pandas as pd
df = pd.read_csv('data/boston_budget.csv')
df.head() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
To get the columns of a DataFrame object df, call df.columns. This is a list-like object that can be iterated over. | df.columns | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
YAML Files
Describe data in a human-friendly & computer-readable format. The environment.yml file in your downloaded repository is also a YAML file, by the way!
Structure:
yaml
key1: value
key2:
- value1
- value2
- subkey1:
- value3
Example YAML-formatted schema:
yaml
filename: boston_budget.csv
column_names:
- "Fiscal Year"
- "Service (cabinet)"
- "Department"
- "Program #"
...
- "Fund"
- "Amount"
YAML-formatted text can be read as dictionaries. | spec = """
filename: boston_budget.csv
columns:
- "Fiscal Year"
- "Service (Cabinet)"
- "Department"
- "Program #"
- "Program"
- "Expense Type"
- "ACCT #"
- "Expense Category (Account)"
- "Fund"
- "Amount"
"""
import yaml
metadata = yaml.load(spec)
metadata | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
You can also take dictionaries, and return YAML-formatted text. | print(yaml.dump(metadata)) | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
By having things YAML formatted, you preserve human-readability and computer-readability simultaneously.
Providing metadata should be something already done when doing analytics; YAML-format is a strong suggestion, but YAML schema will depend on use case.
Let's now switch roles, and pretend that we're on the side of the "analyst" and no longer the "data provider".
How would you check that the columns match the spec? Basically, check that every element in df.columns is present inside the metadata['columns'] list.
Exercise
Inside test_datafuncs.py, write a utility function, check_schema(df, meta_columns) that tests whether every column in a DataFrame is present in some metadata spec file. It should accept two arguments:
df: a pandas.DataFrame
meta_columns: A list of columns from the metadata spec.
```python
def check_schema(df, meta_columns):
for col in df.columns:
assert col in meta_columns, f'"{col}" not in metadata column spec'
```
In your test file, outside the function definition, write another test function, test_budget_schemas(), explicitly runs a test for just the budget data.
```python
def test_budget_schemas():
columns = read_metadata('data/metadata_budget.yml')['columns']
df = pd.read_csv('data/boston_budget.csv')
check_schema(df, columns)
```
Now, run the test. Do you get the following error? Can you spot the error?
```bash
def check_schema(df, meta_columns):
for col in df.columns:
assert col in meta_columns, f'"{col}" not in metadata column spec'
E AssertionError: " Amount" not in metadata column spec
E assert ' Amount' in ['Fiscal Year', 'Service (Cabinet)', 'Department', 'Program #', 'Program', 'Expense Type', ...]
test_datafuncs_soln.py:63: AssertionError
=================================== 1 failed, 7 passed in 0.91 seconds ===================================
```
If there is even a slight mis-spelling, this kind of check will help you pinpoint where that is. Note how the "Amount" column is spelled with an extra space.
At this point, I would contact the data provider to correct errors like this.
It is a logical practice to keep one schema spec file per table provided to you. However, it is also possible to take advantage of YAML "documents" to keep multiple schema specs inside a single YAML file.
The choice is yours - in cases where there are a lot of data files, it may make sense (for the sake of file-system sanity) to keep all of the specs in multiple files that represent logical groupings of data.
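As a small illustration of the multi-document option mentioned above (the file contents here are hypothetical; only the yaml module already imported is assumed):
```python
# Two hypothetical schema specs kept as separate YAML documents in one string,
# separated by ---; yaml.load_all yields one dictionary per document.
multi_spec = """
filename: boston_budget.csv
columns:
  - "Fiscal Year"
  - "Amount"
---
filename: boston_ei.csv
columns:
  - "Year"
  - "Month"
"""
specs = list(yaml.load_all(multi_spec))
print(len(specs))  # two documents, two dictionaries
```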
Exercise: Write YAML metadata spec.
Put yourself in the shoes of a data provider. Take the boston_ei.csv file in the data/ directory, and make a schema spec file for that file.
Exercise: Write test for metadata spec.
Next, put yourself in the shoes of a data analyst. Take the schema spec file and write a test for it.
Exercise: Auto YAML Spec.
Inside datafuncs.py, write a function with the signature autospec(handle) that takes in a file path, and does the following:
Create a dictionary, with two keys:
a "filename" key, whose value only records the filename (and not the full file path),
a "columns" key, whose value records the list of columns in the dataframe.
Converts the dictionary to a YAML string
Writes the YAML string to disk.
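One possible sketch of such a function (the metadata_&lt;name&gt;.yml output naming convention is an assumption, not something the exercise prescribes):
```python
import os
import pandas as pd
import yaml

def autospec(handle):
    """Write a YAML schema spec (filename + columns) for the CSV at `handle`."""
    df = pd.read_csv(handle)
    spec = {
        'filename': os.path.basename(handle),  # only the filename, not the full path
        'columns': list(df.columns),           # the column names of the dataframe
    }
    yaml_string = yaml.dump(spec, default_flow_style=False)
    # assumed naming convention: data/foo.csv -> data/metadata_foo.yml
    name = os.path.splitext(os.path.basename(handle))[0]
    outpath = os.path.join(os.path.dirname(handle), 'metadata_{}.yml'.format(name))
    with open(outpath, 'w') as f:
        f.write(yaml_string)
    return yaml_string
```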
Optional Exercise: Write meta-test
Now, let's go "meta". Write a "meta-test" that ensures that every CSV file in the data/ directory has a schema file associated with it. (The function need not check each schema.) Until we finish filling out the rest of the exercises, this test can be allowed to fail, and we can mark it as a test to skip by marking it with an @skip decorator:
python
@pytest.mark.skip(reason="no way of currently testing this")
def test_my_func():
...
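A hedged sketch of what that meta-test could look like (it assumes the same metadata_&lt;name&gt;.yml naming convention as the autospec sketch above):
```python
import os
import pytest

@pytest.mark.skip(reason="schema files not yet written for every CSV")
def test_all_csvs_have_schema_files():
    datadir = 'data'
    csv_files = [f for f in os.listdir(datadir) if f.endswith('.csv')]
    for csv_file in csv_files:
        spec_file = 'metadata_' + csv_file.replace('.csv', '.yml')
        assert os.path.exists(os.path.join(datadir, spec_file)), \
            '{} has no schema spec file'.format(csv_file)
```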
Notes
The point here is to have a trusted copy of the schema kept apart from the data file. YAML is not necessarily the only way!
If no schema is provided, manually create one; this is exploratory data analysis anyway - no effort is wasted!
Datum Checks
Now that we're done with the schema checks, let's do some sanity checks on the data as well. This is my personal favourite too, as some of the activities here overlap with the early stages of exploratory data analysis.
We're going to switch datasets here, and move to a 'corrupted' version of the Boston Economic Indicators dataset. Its file path is: ./data/boston_ei-corrupt.csv. | import pandas as pd
import seaborn as sns
sns.set_style('white')
%matplotlib inline
df = pd.read_csv('data/boston_ei-corrupt.csv')
df.head() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Demo: Visual Diagnostics
We can use a package called missingno, which gives us a quick visual view of the completeness of the data. This is a good starting point for deciding whether you need to manually comb through the data or not. | # First, we check for missing data.
import missingno as msno
msno.matrix(df) | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Immediately it's clear that there's a number of rows with empty values! Nothing beats a quick visual check like this one.
We can get a table version of this using another package called pandas_summary. | # We can do the same using pandas-summary.
from pandas_summary import DataFrameSummary
dfs = DataFrameSummary(df)
dfs.summary() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
dfs.summary() returns a Pandas DataFrame; this means we can write tests for data completeness!
Exercise: Test for data completeness.
Write a test named check_data_completeness(df) that takes in a DataFrame and confirms that there's no missing data from the pandas-summary output. Then, write a corresponding test_boston_ei() that tests the schema for the Boston Economic Indicators dataframe.
```python
# In test_datafuncs.py
from pandas_summary import DataFrameSummary
def check_data_completeness(df):
df_summary = DataFrameSummary(df).summary()
for col in df_summary.columns:
assert df_summary.loc['missing', col] == 0, f'{col} has missing values'
def test_boston_ei():
df = pd.read_csv('data/boston_ei.csv')
check_data_completeness(df)
```
Exercise: Test for value correctness.
In the Economic Indicators dataset, there are four "rate" columns: ['labor_force_part_rate', 'hotel_occup_rate', 'hotel_avg_daily_rate', 'unemp_rate'], which must have values between 0 and 1.
Add a utility function to test_datafuncs.py, check_data_range(data, lower=0, upper=1), which checks the range of the data such that:
- data is a list-like object.
- data <= upper
- data >= lower
- upper and lower have default values of 1 and 0 respectively.
Then, add to the test_boston_ei() function tests for each of these four columns, using the check_data_range() function.
```python
# In test_datafuncs.py
def check_data_range(data, lower=0, upper=1):
assert min(data) >= lower, f"minimum value less than {lower}"
assert max(data) <= upper, f"maximum value greater than {upper}"
def test_boston_ei():
df = pd.read_csv('data/boston_ei.csv')
check_data_completeness(df)
zero_one_cols = ['labor_force_part_rate', 'hotel_occup_rate',
'hotel_avg_daily_rate', 'unemp_rate']
for col in zero_one_cols:
check_data_range(df[col])
```
Distributions
Most of what is coming is going to be a demonstration of the kinds of tools that are potentially useful for you. Feel free to relax from coding, as these aren't necessarily obviously automatable.
Numerical Data
We can take the EDA portion further, by doing an empirical cumulative distribution plot for each data column. | import numpy as np
def compute_dimensions(length):
"""
Given an integer, compute the "square-est" pair of dimensions for plotting.
Examples:
- length: 17 => rows: 4, cols: 5
- length: 14 => rows: 4, cols: 4
This is a utility function; can be tested separately.
"""
sqrt = np.sqrt(length)
floor = int(np.floor(sqrt))
ceil = int(np.ceil(sqrt))
if floor ** 2 >= length:
return (floor, floor)
elif floor * ceil >= length:
return (floor, ceil)
else:
return (ceil, ceil)
compute_dimensions(length=17)
assert compute_dimensions(17) == (4, 5)
assert compute_dimensions(16) == (4, 4)
assert compute_dimensions(15) == (4, 4)
assert compute_dimensions(11) == (3, 4)
# Next, let's visualize the empirical CDF for each column of data.
import matplotlib.pyplot as plt
def empirical_cumdist(data, ax, title=None):
"""
Plots the empirical cumulative distribution of values.
"""
x, y = np.sort(data), np.arange(1, len(data)+1) / len(data)
ax.scatter(x, y)
ax.set_title(title)
data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
empirical_cumdist(df[col], ax, title=col)
plt.tight_layout()
plt.show() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
It's often a good idea to standardize numerical data (that aren't count data). The term standardize often refers to the statistical procedure of subtracting the mean and dividing by the standard deviation, yielding an empirical distribution of data centered on 0 and having standard deviation of 1.
Exercise
Write a test for a function that standardizes a column of data. Then, write the function.
Note: This function is also implemented in the scikit-learn library as part of its preprocessing module. However, if you make the engineering decision not to import an entire library just to use one function, you can re-implement it on your own.
```python
def standard_scaler(x):
return (x - x.mean()) / x.std()
def test_standard_scaler(x):
std = standard_scaler(x)
assert np.allclose(std.mean(), 0)
assert np.allclose(std.std(), 1)
```
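For comparison, a sketch of the scikit-learn equivalent mentioned in the note above (assuming scikit-learn is installed; StandardScaler expects a 2-D array, hence the double brackets):
```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled = scaler.fit_transform(df[['labor_force_part_rate']].dropna())
```
Note that StandardScaler divides by the population standard deviation (ddof=0) while pandas' .std() uses ddof=1, so the two versions differ very slightly.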
Exercise
Now, plot the grid of standardized values. | data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
empirical_cumdist(standard_scaler(df[col]), ax, title=col)
plt.tight_layout()
plt.show() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Exercise
Did we just copy/paste the function?! It's time to stop doing this. Let's refactor the code into a function that can be called.
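One way to do that refactor (a sketch reusing compute_dimensions, empirical_cumdist, GridSpec, and plt as imported above; the function name and defaults are assumptions):
```python
def plot_ecdf_grid(df, exclude=('Year', 'Month'), transform=None):
    """Plot a grid of empirical CDFs, optionally transforming each column first."""
    data_cols = [c for c in df.columns if c not in exclude]
    n_rows, n_cols = compute_dimensions(len(data_cols))
    fig = plt.figure(figsize=(n_cols * 3, n_rows * 3))
    gs = GridSpec(n_rows, n_cols)
    for i, col in enumerate(data_cols):
        ax = plt.subplot(gs[i])
        values = df[col] if transform is None else transform(df[col])
        empirical_cumdist(values, ax, title=col)
    plt.tight_layout()
    plt.show()

# e.g. plot_ecdf_grid(df) for raw values, or plot_ecdf_grid(df, transform=standard_scaler)
```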
Categorical Data
For categorical-type data, we can plot the empirical distribution as well. (This example uses the smartphone_sanitization.csv dataset.) | from collections import Counter
def empirical_catdist(data, ax, title=None):
d = Counter(data)
print(d)
x = range(len(d.keys()))
labels = list(d.keys())
y = list(d.values())
ax.bar(x, y)
ax.set_xticks(x)
ax.set_xticklabels(labels)
smartphone_df = pd.read_csv('data/smartphone_sanitization.csv')
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
empirical_catdist(smartphone_df['site'], ax=ax) | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Statistical Checks
Report on deviations from normality.
Normality?!
The Gaussian (Normal) distribution is commonly assumed in downstream statistical procedures, e.g. outlier detection.
We can test for normality by using a K-S test.
K-S test
From Wikipedia:
In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). It is named after Andrey Kolmogorov and Nikolai Smirnov. | from scipy.stats import ks_2samp
import numpy.random as npr
# Simulate a normal distribution with 10000 draws.
normal_rvs = npr.normal(size=10000)
result = ks_2samp(normal_rvs, df['labor_force_part_rate'].dropna())
result.pvalue < 0.05
fig = plt.figure()
ax = fig.add_subplot(111)
empirical_cumdist(normal_rvs, ax=ax)
empirical_cumdist(df['hotel_occup_rate'], ax=ax) | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Exercise
Re-create the panel of cumulative distribution plots, this time adding on the Normal distribution, and annotating the p-value of the K-S test in the title. | data_cols = [i for i in df.columns if i not in ['Year', 'Month']]
n_rows, n_cols = compute_dimensions(len(data_cols))
fig = plt.figure(figsize=(n_cols*3, n_rows*3))
from matplotlib.gridspec import GridSpec
gs = GridSpec(n_rows, n_cols)
for i, col in enumerate(data_cols):
ax = plt.subplot(gs[i])
test = ks_2samp(normal_rvs, standard_scaler(df[col]))
empirical_cumdist(normal_rvs, ax)
empirical_cumdist(standard_scaler(df[col]), ax, title=f"{col}, p={round(test.pvalue, 2)}")
plt.tight_layout()
plt.show() | 3-data-checks.ipynb | ericmjl/data-testing-tutorial | mit |
Download or use cached file oecd-canada.json. Caching file on disk permits to work off-line and to speed up the exploration of the data. | url = 'http://json-stat.org/samples/oecd-canada.json'
file_name = "oecd-canada.json"
file_path = os.path.abspath(os.path.join("..", "tests", "fixtures", "www.json-stat.org", file_name))
if os.path.exists(file_path):
print("using already downloaded file {}".format(file_path))
else:
print("download file and storing on disk")
jsonstat.download(url, file_name)
file_path = file_name | examples-notebooks/oecd-canada-jsonstat_v1.ipynb | 26fe/jsonstat.py | lgpl-3.0 |
Select the dataset named oedc. Oecd dataset has three dimensions (concept, area, year), and contains 432 values. | oecd = collection.dataset('oecd')
oecd | examples-notebooks/oecd-canada-jsonstat_v1.ipynb | 26fe/jsonstat.py | lgpl-3.0 |
Shows some detailed info about dimensions | oecd.dimension('concept')
oecd.dimension('area')
oecd.dimension('year') | examples-notebooks/oecd-canada-jsonstat_v1.ipynb | 26fe/jsonstat.py | lgpl-3.0 |
Accessing value in the dataset
Print the value in oecd dataset for area = IT and year = 2012 | oecd.data(area='IT', year='2012')
oecd.value(area='IT', year='2012')
oecd.value(concept='unemployment rate',area='Australia',year='2004') # 5.39663128
oecd.value(concept='UNR',area='AU',year='2004') | examples-notebooks/oecd-canada-jsonstat_v1.ipynb | 26fe/jsonstat.py | lgpl-3.0 |
Load Iris Flower Data | # Load feature and target data
iris = datasets.load_iris()
X = iris.data
y = iris.target | machine-learning/support_vector_classifier.ipynb | tpin3694/tpin3694.github.io | mit |
Create Previously Unseen Observation | # Create new observation
new_observation = [[-0.7, 1.1, -1.1 , -1.7]] | machine-learning/support_vector_classifier.ipynb | tpin3694/tpin3694.github.io | mit |
Predict Class Of Observation | # Predict class of new observation
svc.predict(new_observation) | machine-learning/support_vector_classifier.ipynb | tpin3694/tpin3694.github.io | mit |
Feature Selection
We are now going to explore some feature selection procedures; the output of each will then be sent to a classifier.
Recursive elimination with cross validation
Simple best percentile features
Tree based feature selection
<br/>
<br/>
The output from this is then sent to the following classifiers
<br/>
1. Random Forests - Good ensemble technique
2. QDA - Other experiments with this classifier have been successful
3. LDA - A good simple technique
4. Gaussian Naive Bayes - Experiments with this classifier have proven successful in the past | from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import ExtraTreesClassifier
# Make the Labels vector
clabels1 = [1] * 946 + [0] * 223
# Concatenate and Scale
combined1 = pd.concat([ADHD_men, BP_men])
combined1 = pd.DataFrame(preprocessing.scale(combined1))
# Recursive Feature elimination with cross validation
svc = SVC(kernel="linear")
rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(clabels1, 2),
scoring='accuracy')
rfecv.fit(combined1, clabels1)
combined1_recf = rfecv.transform(combined1)
combined1_recf = pd.DataFrame(combined1_recf)
print combined1_recf.head()
# Percentile base feature selection
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile=5)
combined_kpercentile = selector.fit_transform(combined1, clabels1)
combined1_kpercentile = pd.DataFrame(combined_kpercentile)
print combined1_kpercentile.head()
# Tree based selection
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier()
clf = clf.fit(combined1, clabels1)
combined1_trees = SelectFromModel(clf, prefit=True).transform(combined1)
combined1_trees = pd.DataFrame(combined1_trees)
print combined1_trees.head() | Code/Assignment-11/AdvancedFeatureSelection.ipynb | Upward-Spiral-Science/spect-team | apache-2.0 |
Classifiers | # Leave one Out cross validation
def leave_one_out(classifier, values, labels):
leave_one_out_validator = LeaveOneOut(len(values))
classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)
accuracy = classifier_metrics.mean()
deviation = classifier_metrics.std()
return accuracy, deviation
rf = RandomForestClassifier(n_estimators = 22)
qda = QDA()
lda = LDA()
gnb = GaussianNB()
classifier_accuracy_list = []
classifiers = [(rf, "Random Forest"), (lda, "LDA"), (qda, "QDA"), (gnb, "Gaussian NB")]
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, combined1_recf, clabels1)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, combined1_kpercentile, clabels1)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy))
for classifier, name in classifiers:
accuracy, deviation = leave_one_out(classifier, combined1_trees, clabels1)
print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)
classifier_accuracy_list.append((name, accuracy)) | Code/Assignment-11/AdvancedFeatureSelection.ipynb | Upward-Spiral-Science/spect-team | apache-2.0 |
Instantiate Persistable:
Each persistable object is instantiated with parameters that should uniquely (or nearly uniquely) define the payload. | params = {
"hello": "world",
"another_dict": {
"test": [1,2,3]
},
"a": 1,
"b": 4
}
p = Persistable(
payload_name="first_payload",
params=params,
workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929" # object will live in this local disk location
) | examples/Persistable.ipynb | DataReply/persistable | gpl-3.0 |
Define Payload:
Payloads are defined by overriding the _generate_payload function:
Payload defined by _generate_payload function:
Simply override _generate_payload to give the Persistable object generate functionality. Note that generate here means to create the payload. The term is not meant to indicate that a python generator is being produced. | # ML Example:
"""
def _generate_payload(self):
X = pd.read_csv(self.params['datafile'])
model = XGboost(X)
model.fit()
self.payload['model'] = model
"""
# Silly Example:
def _generate_payload(self):
self.payload['sum'] = self.params['a'] + self.params['b']
self.payload['msg'] = self.params['hello'] | examples/Persistable.ipynb | DataReply/persistable | gpl-3.0 |
Now we will monkeypatch the payload generator to override its counterpart in Persistable object (only necessary because we've defined the generator outside of an IDE). | def bind(instance, method):
def binding_scope_fn(*args, **kwargs):
return method(instance, *args, **kwargs)
return binding_scope_fn
p._generate_payload = bind(p, _generate_payload)
p.generate() | examples/Persistable.ipynb | DataReply/persistable | gpl-3.0 |
Persistable as a Super Class:
The non Monkey Patching equivalent to what we did above: | class SillyPersistableExample(Persistable):
def _generate_payload(self):
self.payload['sum'] = self.params['a'] + self.params['b']
self.payload['msg'] = self.params['hello']
p2 = SillyPersistableExample(payload_name="silly_example", params=params, workingdatapath=LOCALDATAPATH / "knowledgeshare_20170929")
p2.generate() | examples/Persistable.ipynb | DataReply/persistable | gpl-3.0 |
Load: | p_test = Persistable(
"first_payload",
params=params,
workingdatapath=LOCALDATAPATH/"knowledgeshare_20170929"
)
p_test.load()
p_test.payload | examples/Persistable.ipynb | DataReply/persistable | gpl-3.0 |
Load csv | df = pd.read_csv('in/gifts_Feb2016_2.csv')
source_columns = ['donor_id', 'amount_initial', 'donation_date', 'appeal', 'fund', 'city', 'state', 'zipcode_initial', 'charitable', 'sales']
df.columns = source_columns
df.info()
strip_func = lambda x: x.strip() if isinstance(x, str) else x
df = df.applymap(strip_func) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Address nan column values | df.replace({'appeal': {'0': ''}}, inplace=True)
df.appeal.fillna('', inplace=True)
df.fund.fillna('', inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Change column types and drop unused columns | df.donation_date = pd.to_datetime(df.donation_date)
df.charitable = df.charitable.astype('bool')
df['zipcode'] = df.zipcode_initial.str[0:5]
fill_zipcode = lambda x: '0'*(5-len(str(x))) + str(x)
x1 = pd.DataFrame([[1, '8820'], [2, 8820]], columns=['a','b'])
x1.b = x1.b.apply(fill_zipcode)
x1
df.zipcode = df.zipcode.apply(fill_zipcode) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Cleanup amounts | ## Ensure that all amounts are dollar figures
df[~df.amount_initial.str.startswith('-$') & ~df.amount_initial.str.startswith('$')]
## drop row with invalid data
df.drop(df[df.donation_date == '1899-12-31'].index, axis=0, inplace=True)
df['amount_cleanup'] = df.amount_initial.str.replace(',', '')
df['amount_cleanup'] = df.amount_cleanup.str.replace('$', '')
df['amount'] = df.amount_cleanup.astype(float)
## Make sure we did not throw away valid numbers by checking with the original value
df[(df.amount == 0)].amount_initial.unique() | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Outlier data | # There are some outliers in the data, quite a few of them are recent.
_ = plt.scatter(df[df.amount > 5000].amount.values, df[df.amount > 5000].donation_date.values)
plt.show()
# Fun little thing to try out bokeh (we can hover and detect the culprits)
def plot_data(df):
dates = map(getdate_ym, pd.DatetimeIndex(df[df.amount > 5000].donation_date))
amounts = map(thousands_sep, df[df.amount > 5000].amount)
x = df[df.amount > 5000].donation_date.values
y = df[df.amount > 5000].amount.values
donor_ids = df[df.amount > 5000].donor_id.values
states = df[df.amount > 5000].state.values
source = ColumnDataSource(
data=dict(
x=x,
y=y,
dates=dates,
amounts=amounts,
donor_ids=donor_ids,
states=states,
)
)
hover = HoverTool(
tooltips=[
("date", "@dates"),
("amount", "@amounts"),
("donor", "@donor_ids"),
("states", "@states"),
]
)
p = figure(plot_width=400, plot_height=400, title=None, tools=[hover])
p.circle('x', 'y', size=5, source=source)
show(p)
plot_data(df.query('amount > 5000'))
# All the outliers seem to have the following properties: state == YY and a specific donor_id.
# Plot the remaining data outside of these to check that we caught all the outliers.
plot_data(df[~df.index.isin(df.query('state == "YY" and amount > 5000').index)])
# Outlier data
df[(df.state == 'YY') & (df.amount >= 45000)]
df[(df.state == 'YY') & (df.amount >= 45000)]\
.sort_values(by='amount', ascending=False)\
.head(6)[source_columns]\
.to_csv('out/0/outlier_data.csv') | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Exchanged emails with Anil and confirmed the decision to drop the outlier for the anonymous donor with the 9.5 million dollars. | df.drop(df[(df.state == 'YY') & (df.amount >= 45000)].index, inplace=True)
print 'After dropping the anonymous donor, total amounts from the unknown state as a percentage of all amounts is: '\
, thousands_sep(100*df[(df.state == 'YY')].amount.sum()/df.amount.sum()), '%' | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Amounts with zero values | ## Some funds have zero amounts associated with them.
## They mostly look like costs - expense fees, transaction fees, administrative fees
## Let us examine if we can safely drop them from our analysis
df[df.amount_initial == '$0.00'].groupby(['fund', 'appeal'])['donor_id'].count() | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Dropping rows with zero amounts (after confirmation with SEF office) | df.drop(df[df.amount == 0].index, axis=0, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Negative amounts | ## What is the total amount of the negative?
print 'Total negative amount is: ', df[df.amount < 0].amount.sum()
# Add if condition to make this re-runnable
if df[df.amount < 0].amount.sum() > 0:
print 'Amounts grouped by fund and appeal, sorted by most negative amounts'
df[df.amount < 0]\
.groupby(['fund', 'appeal'])['amount',]\
.sum()\
.sort_values(by='amount')\
.to_csv('out/0/negative_amounts_sorted.csv')
df[df.amount < 0]\
.groupby(['fund', 'appeal'])['amount',]\
.sum()\
.to_csv('out/0/negative_amounts_grouped_by_fund.csv') | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Dropping rows with negative amounts (after confirmation with SEF office) | df.drop(df[df.amount < 0].index, axis=0, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Investigate invalid state codes | df.info()
df.state.unique()
## States imported from http://statetable.com/
states = pd.read_csv('in/state_table.csv')
states.rename(columns={'abbreviation': 'state'}, inplace=True)
all_states = pd.merge(states, pd.DataFrame(df.state.unique(), columns=['state']), on='state', how='right')
invalid_states = all_states[pd.isnull(all_states.id)].state
df[df.state.isin(invalid_states)].state.value_counts().sort_index()
df[df.state.isin(['56', 'AB', 'BC', 'CF', 'Ca', 'Co', 'HY', 'IO', 'Ny', 'PR', 'UK', 'VI', 'ja'])]
%%html
<style>table {float:left}</style> | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Explanation for invalid state codes:
State|Count|Action|Explanation|
-----|-----|------|-----------|
YY|268|None|All these rows are bogus entries (City and Zip are also YYYYs) - they account for about 20% of the donation amount
ON|62|Remove|This is the state of Ontario, Canada
AP|18|Remove|This is data for Hyderabad
VI|6|Remove|Virgin Islands
PR|5|Remove|Puerto Rico
Ny|5|NY|Same as NY - rename Ny as NY
56|1|Remove|This is one donation from Bangalore, Karnataka
HY|1|Remove|Hyderabad
BC|1|Remove|British Columbia, Canada
IO|1|IA|Changed to Iowa - based on city and zip code
AB|1|Remove|AB stands for Alberta, Canada
Ca|1|CA|Same as California - rename Ca to CA
Co|1|CO|Same as Colorado - rename Co to CO
CF|1|FL|Changed to Florida based on zip code and city
ja|1|FL|Changed to FL based on zip code and city
UK|1|Remove|London, UK
KA|1|Remove|Bangalore, Karnataka | state_renames = {'Ny': 'NY', 'IO': 'IA', 'Ca' : 'CA', 'Co' : 'CO', 'CF' : 'FL', 'ja' : 'FL'}
df.replace({'state': state_renames}, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
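As a hedged sanity check (not in the original notebook), we can re-run the comparison against the state table to confirm the renames took effect and that only codes slated for removal remain unmatched. | # Recompute which state codes still fail to match the official state table after the renames
still_invalid = set(df.state.unique()) - set(states.state)
# Expect 'YY', possibly the empty string, and the non-US codes that are dropped in the next cell
print sorted(still_invalid)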
Dropping data for non-US locations | non_usa_states = ['ON', 'AP', 'VI', 'PR', '56', 'HY', 'BC', 'AB', 'UK', 'KA']
print 'Total amount for locations outside USA: ', sum(df[df.state.isin(non_usa_states)].amount)
#### Total amount for locations outside USA: 30710.63
df.drop(df[df.state.isin(non_usa_states)].index, axis=0, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Investigate donations with state of YY | print 'Percentage of amount for unknown (YY) state : {:.2f}'.format(100*df[df.state == 'YY'].amount.sum()/df.amount.sum())
print 'Total amount for the unknown state excluding outliers: ', df[(df.state == 'YY') & (df.amount < 45000)].amount.sum()
print 'Total amount for the unknown state: ', df[(df.state == 'YY')].amount.sum()
print 'Total amount: ', df.amount.sum() | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
We will add these donations to the noloc_df dataframe below (which holds the donations that have empty strings for city, state, and zipcode).
Investigate empty city, state and zip code
Percentage of total amount from donations with no location: 3.087
Moving all the data with no location to a different dataframe.
We will investigate the data that does have location information for correctness and then merge the no-location data back at the end. | print 'Percentage of total amount from donations with no location: ', 100*sum(df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].amount)/sum(df.amount)
noloc_df = df[(df.city == '') & (df.state == '') & (df.zipcode_initial == '')].copy()
df = df[~((df.city == '') & (df.state == '') & (df.zipcode_initial == ''))].copy()
print df.shape[0] + noloc_df.shape[0]
noloc_df = noloc_df.append(df[(df.state == 'YY')])
df = df[~(df.state == 'YY')]
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0] | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Investigate City in ('YYY', 'YYYY')
These entries have invalid location information and will be added to the noloc_df dataframe. | noloc_df = noloc_df.append(df[(df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy')])
df = df[~((df.city.str.lower() == 'yyy') | (df.city.str.lower() == 'yyyy'))]
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0] | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
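The same row-count conservation check recurs after every transfer between df and noloc_df; a small hedged helper (an assumption, not from the original notebook) makes the intent explicit and fails loudly if any rows go missing. | def check_row_conservation(expected_total, *frames):
    # Assert that the frames together still account for every row we started with
    total = sum(frame.shape[0] for frame in frames)
    assert total == expected_total, 'Row count changed: %d != %d' % (total, expected_total)
    return total

# Hypothetical usage: record the total once, then re-check after each transfer
# total_rows = df.shape[0] + noloc_df.shape[0]
# check_row_conservation(total_rows, df, noloc_df)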
Investigate empty state but non-empty city
Percentage of total amount for data with City but no state: 0.566 | print 'Percentage of total amount for data with City but no state: {:.3f}'.format(100*sum(df[df.state == ''].amount)/sum(df.amount))
df[((df.state == '') & (df.city != ''))][['city','zipcode','amount']].sort_values('city', ascending=True).to_csv('out/0/City_No_State.csv') | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Visually examining the cities for rows without a state shows that, with two exceptions, they all lie outside the USA (mostly Canada and India, plus a few other countries). We will correct those two entries and drop the remaining rows since they are not relevant to the USA. | index = df[(df.donor_id == '-28K0T47RF') & (df.donation_date == '2007-11-30') & (df.city == 'Cupertino')].index
df.loc[index,'state'] = 'CA'
index = df[(df.donor_id == '9F4812A118') & (df.donation_date == '2012-06-30') & (df.city == 'San Juan')].index
df.loc[index,'state'] = 'WA'
df.loc[index,'zipcode'] = '98250'  # keep the zipcode as a string so it matches the string zipcodes used in later merges
# Verified that these remaining entries are for non-US locations
print 'Total amount for non-USA location: ', df[((df.state == '') & (df.city != ''))].amount.sum()
df.drop(df[((df.state == '') & (df.city != ''))].index, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Investigate empty city and zipcode but valid US state
Percentage of total amount for data with valid US state, but no city, zipcode: 4.509
Most of this amount (1.7 of 1.8 million) comes from about 600 donors in California, and we already know that California is a major contributor to donations.
Although we could do some analysis based on the US state alone, the added complexity is not justified by the limited knowledge gain.
Therefore, we are dropping the state column from these rows and moving over this data to the dataset that has no location (the one that we created earlier) to simplify our analysis. | print 'Percentage of total amount for data with valid US state, but no city, zipcode: {:.3f}'.format(100*sum(df[(df.city == '') & (df.zipcode_initial == '')].amount)/sum(df.amount))
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
stateonly_df = df[(df.city == '') & (df.zipcode_initial == '')].copy()
stateonly_df.state = ''
## Move the rows with just the state over to the noloc_df dataset
noloc_df = pd.concat([noloc_df, stateonly_df])
df = df[~((df.city == '') & (df.zipcode_initial == ''))].copy()
# Verify that we transferred all the rows over correctly. This total must match the total from above.
print df.shape[0] + noloc_df.shape[0]
print 100*sum(df[df.city == ''].amount)/sum(df.amount)
print len(df[df.city == '']), len(df[df.zipcode_initial == ''])
print sum(df[df.city == ''].amount), sum(df[df.zipcode_initial == ''].amount)
print sum(df[(df.city == '') & (df.zipcode_initial != '')].amount),\
sum(df[(df.city != '') & (df.zipcode_initial == '')].amount)
print sum(df.amount) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Investigating empty city and empty state with non-empty zip code
Since we have zip code data from the US census, we can use it to fill in the city and state | ## Zip codes from ftp://ftp.census.gov/econ2013/CBP_CSV/zbp13totals.zip
zipcodes = pd.read_csv('in/zbp13totals.txt', dtype={'zip': object})
zipcodes = zipcodes[['zip', 'city', 'stabbr']]
zipcodes = zipcodes.rename(columns = {'zip':'zipcode', 'stabbr': 'state', 'city': 'city'})
zipcodes.city = zipcodes.city.str.title()
zipcodes.zipcode = zipcodes.zipcode.astype('str')
## If we know the zip code, we can populate the city by using the zipcodes data
df.replace({'city': {'': np.nan}, 'state': {'': np.nan}}, inplace=True)
## Set the index correctly for update to work. Then reset it back.
df.set_index(['zipcode'], inplace=True)
zipcodes.set_index(['zipcode'], inplace=True)
df.update(zipcodes, join='left', overwrite=False, raise_conflict=False)
df.reset_index(drop=False, inplace=True)
zipcodes.reset_index(drop=False, inplace=True)
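To make the update-by-index pattern above concrete, here is a small self-contained sketch on toy data (not the notebook's dataframes) showing that DataFrame.update with overwrite=False fills only the missing cells, aligned on the index. | demo = pd.DataFrame({'city': [np.nan, 'Seattle'], 'state': [np.nan, 'WA']},
                    index=['98250', '98101'])
lookup = pd.DataFrame({'city': ['Friday Harbor', 'Spokane'], 'state': ['WA', 'WA']},
                      index=['98250', '99201'])
demo.update(lookup, overwrite=False)  # only NaN cells on matching index labels are filled
print demo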
zipcodesdetail = pd.read_csv('in/zip_code_database.csv')
zipcodesdetail = zipcodesdetail[zipcodesdetail.country == 'US'][['zip', 'primary_city', 'county', 'state', 'timezone', 'latitude', 'longitude']]
zipcodesdetail = zipcodesdetail.rename(columns = {'zip':'zipcode', 'primary_city': 'city'})
# The zip codes dataset has quite a few missing values. Filling in what we need for now.
# If this happens again, search for a different data source!!
zipcodesdetail.loc[(zipcodesdetail.city == 'Frisco') & (zipcodesdetail.state == 'TX') & (pd.isnull(zipcodesdetail.county)), 'county'] = 'Denton'
# Strip the ' County' portion from the county names
def getcounty(county):
if pd.isnull(county):
return county
elif county.endswith(' County'):
return county[:-7]
else:
return county
zipcodesdetail.county = zipcodesdetail['county'].apply(getcounty)
zipcodesdetail.zipcode = zipcodesdetail.zipcode.apply(fill_zipcode)
newcols = np.array(list(set(df.columns).union(zipcodesdetail.columns)))
df = pd.merge(df, zipcodesdetail, on=['state', 'city', 'zipcode'], how='inner', suffixes=('_x', ''))[newcols]
# For some reason, the data types are being reset. So setting them back to their expected data types.
df.donation_date = df.donation_date.apply(pd.to_datetime)
df.charitable = df.charitable.apply(bool)
df.amount = df.amount.apply(int) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
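A hedged aside: the getcounty helper above can also be expressed as a single vectorized call (NaN values pass through untouched); the regex=True argument assumes pandas 0.23 or later, and the result is left in a separate variable rather than replacing the original column. | # Equivalent vectorized form of getcounty: strip a trailing ' County', leave NaN as-is
county_alt = zipcodesdetail['county'].str.replace(r' County$', '', regex=True)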
Investigate invalid zip codes | all_zipcodes = pd.merge(df, zipcodes, on='zipcode', how='left')
all_zipcodes[pd.isnull(all_zipcodes.city_x)].head()
## Only a couple of rows have invalid zip codes ('GU214ND', '94000'). Let's drop them.
df.drop(df[df.zipcode_initial.isin(['GU214ND','94000'])].index, axis=0, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
Final check on all location data to confirm that we have no rows with empty state, city or zipcode | print 'No state: count of rows: ', len(df[df.state == ''].amount),\
'Total amount: ', sum(df[df.state == ''].amount)
print 'No zipcode: count of rows: ', len(df[df.zipcode == ''].amount),\
'Total amount: ', sum(df[df.zipcode == ''].amount)
print 'No city: count of rows: ', len(df[df.city == ''].amount),\
'Total amount: ', sum(df[df.city == ''].amount)
# Examining data - top 10 states by amount and number of donors
print df.groupby('state')['amount',].sum().sort_values(by='amount', ascending=False)[0:10]
print df.groupby('state')['donor_id',].count().sort_values(by='donor_id', ascending=False)[0:10]
print noloc_df.state.unique()
print noloc_df.city.unique()
print noloc_df.zipcode.unique()
noloc_df['city'] = ''
noloc_df['state'] = ''
noloc_df['zipcode'] = ''
print df.shape[0] + noloc_df.shape[0]
df.shape, noloc_df.shape
# The input data has the latest zip code for each donor. So we cannot observe any movement even if there was any since
# all donations by a given donor will only have the same exact zipcode.
x1 = pd.DataFrame(df.groupby(['donor_id','zipcode']).zipcode.nunique())
x1[x1.zipcode != 1]
# The noloc_df and the df with location values have no donors in common - so we cannot use the donor
# location information from df to detect the location in noloc_df.
set(df.donor_id.values).intersection(noloc_df.donor_id.values)
df.rename(columns={'donation_date': 'activity_date'}, inplace=True)
df['activity_year'] = df.activity_date.apply(lambda x: x.year)
df['activity_month'] = df.activity_date.apply(lambda x: x.month)
df['activity_dow'] = df.activity_date.apply(lambda x: x.dayofweek)
df['activity_ym'] = df['activity_date'].map(lambda x: 100*x.year + x.month)
df['activity_yq'] = df['activity_date'].map(lambda x: 10*x.year + (x.month-1)//3)
df['activity_ymd'] = df['activity_date'].map(lambda x: 10000*x.year + 100*x.month + x.day)
# Drop the zipcode_initial (for privacy reasons)
df.drop('zipcode_initial', axis=1, inplace=True) | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
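A hedged note on the date-derived columns above: on pandas versions that provide the .dt accessor, the same columns can be built without row-wise apply/map, which is usually faster on large frames. This sketch assumes activity_date is already a datetime64 column and produces the same values. | dt = df['activity_date'].dt
df['activity_year'] = dt.year
df['activity_month'] = dt.month
df['activity_dow'] = dt.dayofweek
df['activity_ym'] = 100*dt.year + dt.month
df['activity_yq'] = 10*dt.year + (dt.month - 1)//3
df['activity_ymd'] = 10000*dt.year + 100*dt.month + dt.day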
All done! Let's save our dataframes for the next stage of processing | !mkdir -p out/0
df.to_pickle('out/0/donations.pkl')
noloc_df.to_pickle('out/0/donations_noloc.pkl')
df[df.donor_id == '_1D50SWTKX'].sort_values(by='activity_date').tail()
df.columns
df.shape | notebooks/0_DataCleanup-Feb2016.ipynb | smalladi78/SEF | unlicense |
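For reference, the next stage can load these artifacts back with read_pickle; a minimal hedged sketch (the downstream notebook's actual code may differ). | donations = pd.read_pickle('out/0/donations.pkl')
donations_noloc = pd.read_pickle('out/0/donations_noloc.pkl')
print donations.shape, donations_noloc.shape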