Download and load the model. spaCy has an excellent English NLP processor. It has the following features, which we shall explore: - Entity recognition - Dependency parsing - Part-of-speech tagging - Word vectorization - Tokenization - Lemmatization - Noun chunks Download the model; it may take a while.
import spacy
import spacy.en.download

# spacy.en.download.main()
processor = spacy.en.English()
processed_text = processor(text)
processed_text
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
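Note: the spacy.en download path above comes from an old spaCy release. On current spaCy (3.x) the equivalent steps look like the sketch below; this assumes the small English model en_core_web_sm is installed in your environment and is not part of the original notebook.

import spacy  # first: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
doc = nlp("The quick brown fox jumps over the lazy dog in London.")
print([(token.text, token.pos_, token.lemma_) for token in doc])
print([(ent.text, ent.label_) for ent in doc.ents])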
Looks like the same text? Let's dig a little deeper. Tokenization - Sentences
n = 0
for sentence in processed_text.sents:
    print(n, sentence)
    n += 1
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
Words and Punctuation - Along with POS tagging
n = 0
for sentence in processed_text.sents:
    for token in sentence:
        print(n, token, token.pos_, token.lemma_)
    n += 1
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
Entities - Explanation of Entity Types
for entity in processed_text.ents: print(entity, entity.label_)
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
Noun Chunks
for noun_chunk in processed_text.noun_chunks: print(noun_chunk)
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
The Semi-Holy Grail - Syntactic Dependency Parsing. See the demo for clarity.
def pr_tree(word, level):
    if word.is_punct:
        return
    for child in word.lefts:
        pr_tree(child, level + 1)
    print('\t' * level + word.text + ' - ' + word.dep_)
    for child in word.rights:
        pr_tree(child, level + 1)

for sentence in processed_text.sents:
    pr_tree(sentence.root, 0)
    print('-------------------------------------------')
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
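As a complement to the tree printer above, each token also exposes its syntactic head directly; a minimal flat view, assuming the same processed_text document from earlier cells:

# Flat view of the dependency parse: token, relation, and its head
for token in processed_text:
    if not token.is_punct:
        print(token.text, token.dep_, '<-', token.head.text)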
What is 'nsubj'? 'acomp'? See the Universal Dependencies. Word Vectorization - Word2Vec
proc_fruits = processor('''I think green apples are delicious. While pears have a strange texture to them. The bowls they sit in are ugly.''')
apples, pears, bowls = proc_fruits.sents
fruit = processed_text.vocab['fruit']

print(apples.similarity(fruit))
print(pears.similarity(fruit))
print(bowls.similarity(fruit))
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
Add a dropout layer by hand to an MLP
def dropout_layer(X, dropout): assert 0 <= dropout <= 1 # In this case, all elements are dropped out if dropout == 1: return torch.zeros_like(X) # In this case, all elements are kept if dropout == 0: return X mask = (torch.Tensor(X.shape).uniform_(0, 1) > dropout).float() return mask * X / (1.0 - dropout) # quick test torch.manual_seed(0) X = torch.arange(16, dtype=torch.float32).reshape((2, 8)) print(X) print(dropout_layer(X, 0.0)) print(dropout_layer(X, 0.5)) print(dropout_layer(X, 1.0)) # A common trend is to set a lower dropout probability closer to the input layer class Net(nn.Module): def __init__( self, num_inputs, num_outputs, num_hiddens1, num_hiddens2, is_training=True, dropout1=0.2, dropout2=0.5 ): super(Net, self).__init__() self.dropout1 = dropout1 self.dropout2 = dropout2 self.num_inputs = num_inputs self.training = is_training self.lin1 = nn.Linear(num_inputs, num_hiddens1) self.lin2 = nn.Linear(num_hiddens1, num_hiddens2) self.lin3 = nn.Linear(num_hiddens2, num_outputs) self.relu = nn.ReLU() def forward(self, X): H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs)))) # Use dropout only when training the model if self.training == True: # Add a dropout layer after the first fully connected layer H1 = dropout_layer(H1, self.dropout1) H2 = self.relu(self.lin2(H1)) if self.training == True: # Add a dropout layer after the second fully connected layer H2 = dropout_layer(H2, self.dropout2) out = self.lin3(H2) return out
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
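A quick numerical check of the inverted-dropout scaling used in dropout_layer: dividing the surviving activations by 1 - p keeps the expected value of the layer unchanged. This is an illustrative sketch that assumes only torch.

import torch

torch.manual_seed(0)
p = 0.5
x = torch.ones(1_000_000)
mask = (torch.rand_like(x) > p).float()
y = mask * x / (1.0 - p)  # inverted dropout
# Both means are close to 1.0, so the expectation is preserved
print(x.mean().item(), y.mean().item())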
Fit to FashionMNIST Uses the d2l.load_data_fashion_mnist function.
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=256)
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
Fit model using SGD. Uses the d2l.train_ch3 function.
torch.manual_seed(0) # We pick a wide model to cause overfitting without dropout num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256 net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2, dropout1=0.5, dropout2=0.5) loss = nn.CrossEntropyLoss() lr = 0.5 trainer = torch.optim.SGD(net.parameters(), lr=lr) num_epochs = 10 d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
When we turn dropout off, we notice a slightly larger gap between train and test accuracy.
torch.manual_seed(0) net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2, dropout1=0.0, dropout2=0.0) loss = nn.CrossEntropyLoss() trainer = torch.optim.SGD(net.parameters(), lr=lr) num_epochs = 10 d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
Dropout using PyTorch layer
dropout1 = 0.5
dropout2 = 0.5

net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(num_inputs, num_hiddens1),
    nn.ReLU(),
    # Add a dropout layer after the first fully connected layer
    nn.Dropout(dropout1),
    nn.Linear(num_hiddens1, num_hiddens2),  # originally (num_hiddens2, num_hiddens1); only ran because both are 256
    nn.ReLU(),
    # Add a dropout layer after the second fully connected layer
    nn.Dropout(dropout2),
    nn.Linear(num_hiddens2, num_outputs),
)

def init_weights(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, std=0.01)

torch.manual_seed(0)
net.apply(init_weights);

trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
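Unlike the hand-written layer, nn.Dropout is switched on and off by the module's training mode rather than an is_training flag. A small self-contained sketch of that behavior:

import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))  # roughly half the entries zeroed, survivors scaled by 2

drop.eval()
print(drop(x))  # identity: dropout is disabled in eval mode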
Visualize some predictions
def display_predictions(net, test_iter, n=6): # Extract first batch from iterator for X, y in test_iter: break # Get labels trues = d2l.get_fashion_mnist_labels(y) preds = d2l.get_fashion_mnist_labels(d2l.argmax(net(X), axis=1)) # Plot titles = [true + "\n" + pred for true, pred in zip(trues, preds)] d2l.show_images(d2l.reshape(X[0:n], (n, 28, 28)), 1, n, titles=titles[0:n]) # d2l.predict_ch3(net, test_iter) display_predictions(net, test_iter)
notebooks/misc/dropout_MLP_torch.ipynb
probml/pyprobml
mit
Work up a minimal example of a trend-following system. Let's get some data. We can get data from various places; however, for now we're going to use prepackaged 'legacy' data stored in CSV files.
data = csvFuturesSimData() data
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
We get stuff out of data with methods
print(data.get_instrument_list()) print(data.get_raw_price("EDOLLAR").tail(5))
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
data can also behave in a dict-like manner (though it's not a dict)
data['SP500'] data.keys()
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
... however, this will only access prices (note these prices have already been back-adjusted for rolls). We have extra futures data here
data.get_instrument_raw_carry_data("EDOLLAR").tail(6)
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
Technical note: csvFuturesSimData inherits from FuturesData, which itself inherits from simData. The chain is 'data specific' <- 'asset class specific' <- 'generic'. Let's create a simple trading rule. No capping or scaling.
import pandas as pd from sysquant.estimators.vol import robust_vol_calc def calc_ewmac_forecast(price, Lfast, Lslow=None): """ Calculate the ewmac trading rule forecast, given a price and EWMA speeds Lfast, Lslow and vol_lookback """ # price: This is the stitched price series # We can't use the price of the contract we're trading, or the volatility # will be jumpy # And we'll miss out on the rolldown. See # https://qoppac.blogspot.com/2015/05/systems-building-futures-rolling.html price = price.resample("1B").last() if Lslow is None: Lslow = 4 * Lfast # We don't need to calculate the decay parameter, just use the span # directly fast_ewma = price.ewm(span=Lfast).mean() slow_ewma = price.ewm(span=Lslow).mean() raw_ewmac = fast_ewma - slow_ewma vol = robust_vol_calc(price.diff()) return raw_ewmac / vol
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
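To see the shape of the EWMAC rule without the pysystemtrade machinery, here is a minimal sketch on synthetic prices using plain pandas; the rolling standard deviation stands in for robust_vol_calc, and the random-walk price series is purely illustrative.

import numpy as np
import pandas as pd

idx = pd.bdate_range("2020-01-01", periods=500)
price = pd.Series(100 + np.random.default_rng(0).standard_normal(500).cumsum(), index=idx)

Lfast, Lslow = 32, 128
raw_ewmac = price.ewm(span=Lfast).mean() - price.ewm(span=Lslow).mean()
vol = price.diff().rolling(25).std()  # stand-in for robust_vol_calc
forecast = raw_ewmac / vol
print(forecast.tail())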
Try it out (this isn't properly scaled at this stage of course)
instrument_code = 'EDOLLAR' price = data.daily_prices(instrument_code) ewmac = calc_ewmac_forecast(price, 32, 128) ewmac.columns = ['forecast'] ewmac.tail(5) ewmac.plot(); plt.title('Forecast') plt.ylabel('Position') plt.xlabel('Time')
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
Did we make money?
from systems.accounts.account_forecast import pandl_for_instrument_forecast account = pandl_for_instrument_forecast(forecast=ewmac, price = price) account.curve().plot(); plt.title('Profit and Loss') plt.ylabel('PnL') plt.xlabel('Time'); account.percent.stats()
examples/introduction/asimpletradingrule.ipynb
robcarver17/pysystemtrade
gpl-3.0
Obtain the joint-space positions, $q_1$ and $q_2$, needed for the end point of the double pendulum to reach the coordinates $p_1 = (0,1)$, $p_2 = (1,3)$ and $p_3 = (3,2)$.
# YOUR CODE HERE raise NotImplementedError() from numpy.testing import assert_allclose assert_allclose((q11, q21),(0.25268 , 2.636232), rtol=1e-05, atol=1e-05) from numpy.testing import assert_allclose assert_allclose((q12, q22),(0.589988, 1.318116), rtol=1e-05, atol=1e-05) from numpy.testing import assert_allclose assert_allclose((q13, q23),(0.14017 , 0.895665), rtol=1e-05, atol=1e-05)
Practicas/practica5/Problemas.ipynb
robblack007/clase-cinematica-robot
mit
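One possible way to solve the exercise above, assuming link lengths $l_1 = l_2 = 2$ (the values used by cd_pendulo_doble later in the notebook); a sketch, not the official solution:

from numpy import arccos, arctan2, cos, sin

def inverse_kinematics(x, y, l1=2.0, l2=2.0):
    """Analytic inverse kinematics (one of the two solutions) for a planar 2-link arm."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    q2 = arccos(c2)
    q1 = arctan2(y, x) - arctan2(l2 * sin(q2), l1 + l2 * cos(q2))
    return q1, q2

q11, q21 = inverse_kinematics(0, 1)  # approximately (0.25268, 2.636232)
q12, q22 = inverse_kinematics(1, 3)  # approximately (0.589988, 1.318116)
q13, q23 = inverse_kinematics(3, 2)  # approximately (0.14017, 0.895665)
print(q11, q21)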
Generate the trajectories needed for the double pendulum to move from point $p_1$ to point $p_2$ in $2s$, from point $p_2$ to point $p_3$ in $2s$, and from point $p_3$ to point $p_1$ in $2s$. Use 100 points per second and make sure to store the generated trajectories in the correct variables so that q1s and q2s contain the complete trajectories.
from generacion_trayectorias import grafica_trayectoria # YOUR CODE HERE raise NotImplementedError() q1s = q1s1 + q1s2 + q1s3 q2s = q2s1 + q2s2 + q2s3 from numpy.testing import assert_allclose assert_allclose((q1s[0], q1s[-1]),(0.25268, 0.25268), rtol=1e-05, atol=1e-05) from numpy.testing import assert_allclose assert_allclose((q2s[0], q2s[-1]),(2.636232, 2.636232), rtol=1e-05, atol=1e-05)
Practicas/practica5/Problemas.ipynb
robblack007/clase-cinematica-robot
mit
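The helper grafica_trayectoria imported above is course-provided and its exact signature is not shown here. As a generic illustration only, here is a sketch of a cubic (smoothstep) point-to-point trajectory sampled at 100 points per second over 2 s; the function name and arguments are hypothetical, not the course API.

import numpy as np

def cubic_trajectory(q0, qf, T=2.0, rate=100):
    """Cubic polynomial joint trajectory with zero start and end velocity."""
    t = np.linspace(0.0, T, int(T * rate))
    s = 3 * (t / T) ** 2 - 2 * (t / T) ** 3  # smoothstep from 0 to 1
    return list(q0 + (qf - q0) * s)

# e.g. joint 1 moving from its p1 solution to its p2 solution
q1s1 = cubic_trajectory(0.25268, 0.589988)
print(len(q1s1), q1s1[0], q1s1[-1])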
Create an animation with the generated trajectories and the functions provided below (some functions are marked with comments where code needs to be added).
from matplotlib.pyplot import figure, style from matplotlib import animation, rc rc('animation', html='html5') from numpy import sin, cos, arange fig = figure(figsize=(8, 8)) axi = fig.add_subplot(111, autoscale_on=False, xlim=(-0.6, 3.1), ylim=(-0.6, 3.1)) linea, = axi.plot([], [], "-o", lw=2, color='gray') def cd_pendulo_doble(q1, q2): l1, l2 = 2, 2 # YOUR CODE HERE raise NotImplementedError() return xs, ys def inicializacion(): '''Esta funcion se ejecuta una sola vez y sirve para inicializar el sistema''' linea.set_data([], []) return linea def animacion(i): '''Esta funcion se ejecuta para cada cuadro del GIF''' # YOUR CODE HERE raise NotImplementedError() linea.set_data(xs, ys) return linea ani = animation.FuncAnimation(fig, animacion, arange(1, len(q1s)), interval=10, init_func=inicializacion) ani from numpy.testing import assert_allclose assert_allclose(cd_pendulo_doble(0, 0), ([0,2,4], [0,0,0]), rtol=1e-05, atol=1e-05) assert_allclose(cd_pendulo_doble(1.57079632,0), ([0, 0, 0],[0, 2, 4]), rtol=1e-05, atol=1e-05)
Practicas/practica5/Problemas.ipynb
robblack007/clase-cinematica-robot
mit
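One way to complete cd_pendulo_doble so that it satisfies the asserts above (link lengths 2 and 2, returning the x and y coordinates of the origin, the elbow, and the end point); a sketch, assuming the relative-angle convention implied by the tests:

from numpy import cos, sin

def cd_pendulo_doble(q1, q2):
    """Forward kinematics of the double pendulum: origin, elbow, end point."""
    l1, l2 = 2, 2
    xs = [0, l1 * cos(q1), l1 * cos(q1) + l2 * cos(q1 + q2)]
    ys = [0, l1 * sin(q1), l1 * sin(q1) + l2 * sin(q1 + q2)]
    return xs, ys

print(cd_pendulo_doble(0, 0))  # ([0, 2, 4], [0, 0, 0])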
Are there certain populations we're not getting reports from? We can create a basic cross tab between age and gender to see if any patterns emerge.
pd.crosstab(data['GenderDescription'], data['age_range'])
reports/api data.ipynb
minh5/cpsc
mit
From the data, it seems that there's not much underrepresentation by gender: there are only around a thousand fewer males than females in a dataset of 28,000. Age seems to be a bigger issue. There appears to be a lack of representation of older people using the API. Older folks may be less likely to self-report, or, if they wanted to self-report, they may not be tech-savvy enough to use a web interface. My assumption is that people over 70 probably experience product harm at a higher rate and are not reporting it. If we wanted to raise awareness about a certain tool or item, where should we focus our efforts? To answer this, I removed any incidents that did not cause bodily harm and took the top ten categories. There were several levels of severity, and we can remove complaints that do not involve any physical harm. After removing these complaints, it is really interesting to see that "Footwear" was the leading product category of harm.
#removing minor harm incidents no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known', 'No Incident, No Injury', 'No First Aid or Medical Attention Received'] damage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :] damage.ProductCategoryPublicName.value_counts()[0:9]
reports/api data.ipynb
minh5/cpsc
mit
This is actually perplexing, so I decided to investigate further by analyzing the complaints filed for the "Footwear" category. To do this, I created a Word2Vec model, a neural-network approach to text analysis. This process maps a word and the linguistic context it appears in so that similarity between words can be calculated. The purpose is to find words that are related to each other. Rather than doing a simple cross tab of product categories, I can ingest the complaints and map out their relationships. For instance, using the complaints that resulted in bodily harm, I found that footwear was associated with pain and walking. It seems that there are injuries related to Sketchers sneakers specifically, since it was the only brand that showed up enough to be included in the model's dictionary. In fact, there was a lawsuit regarding Sketchers and their toning shoes. Are there certain complaints that people are filing? Quality issues vs. injuries? Looking below, we see that a vast majority are incidents without any bodily harm. Over 60% of all complaints were categorized as Incident, No Injury.
data.SeverityTypeDescription.value_counts()
reports/api data.ipynb
minh5/cpsc
mit
Although an incident is labeled as having no injury, that does not necessarily mean we can't take precautions. I took the same approach as with the previous model: I subsetted the data to only complaints that had "no injury" and ran a model to examine the words used. From the analysis, we see that the words to, was, and it were the top three words. At first glance these words may seem meaningless; however, if we examine the words similar to them, we can start to see a connection. For instance, the words most closely related to "to" were "unable" and "trying", which conveys a sense of urgency in attempting to turn something on or off. Examining the word "unable", I was able to see it was related to words such as "attempted" and "disconnect". Further investigation led me to find it was dealing with a switch or a plug, possibly an electrical item. A similar picture is painted when examining the word "was". The words that felt out of place were "emitting", "extinguish", and "smelled". It is no surprise that, after a few investigations of these words, words like "sparks" and "smoke" started popping up more. This leads me to believe that these complaints have something to do with encounters closely related to fire. So while these complaints may only be near-encounters with danger, it may be worthwhile to review them further with an eye out for fire-related injuries or products that could cause fire.
model.most_similar('was')
reports/api data.ipynb
minh5/cpsc
mit
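The notebook loads pre-trained gensim Word2Vec models from disk, so the training step is not shown. For orientation, here is a minimal training sketch with current gensim (4.x); note that older gensim versions, like the one used above, expose model.vocab and model.most_similar directly instead of going through model.wv. The toy sentences are made up for illustration.

from gensim.models import Word2Vec

sentences = [
    ["the", "shoe", "caused", "pain", "while", "walking"],
    ["the", "stove", "was", "emitting", "smoke", "and", "sparks"],
    ["unable", "to", "turn", "the", "switch", "off"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, seed=0)
print(model.wv.most_similar("walking", topn=3))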
Who are the people who are actually reporting to us? This question is difficult to answer because of a lack of data on the reporter. From the cross tabulation in Section 3.1, we see that the majority of the respondents are female and the largest age group is 40-60. That is probably the best guess of who is using the API. Conclusion This is meant to serve as a starting point for examining the API data. The main findings were that: From the self-reported statistics of people who reported their injury to the API, there appears to be a skew against older people; the data shows that the people who are reporting are mostly 40-60 years old. An overwhelming number of reports did not involve bodily harm or require medical attention; much of the reporting consisted of simple incident reports about a particular product. Out of the reports that resulted in some harm, the most reported product was in the footwear category, involving harm and discomfort when walking in Sketchers Tone-Ups shoes. Although not conclusive, the reports give some indication of a lot of fire-related incidents, based on a cursory examination of the most popular words. While text analysis is helpful, it is often not sufficient. What would really help the analysis process would be to include more information from the user. The following information would be helpful to collect in order to produce more actionable insight: Ethnicity/Race; Self-Reported Income; Geographic information - Region (Mid Atlantic, New England, etc), Closest Metropolitan Area, State, City; Geolocation of IP address (coordinates can be "jittered" to preserve anonymity). A great next step would be a deeper text analysis on shoes. It may be possible to train a neural network to consider smaller batches of words so we can capture the context better. Other steps I would take with more time would be to fix the unicode issues with some of the complaints (there were special characters that prevented some of the complaints from being converted into strings). I would also look further into the category that had the most overall complaints, "Electric Ranges and Stoves", and see what those complaints were. If we could address these challenges, there is no doubt we could gain some valuable insights on products that are harming Americans. This report serves as the first step. I would like to thank CPSC for this data set and DKDC for the opportunity to conduct this analysis. References Question 2.1 The data that we worked with had limited information regarding the victim's demographics besides age and gender. However, that was enough to draw some basic inferences. Below we can grab counts by gender, of which a plurality is female. Age is a bit trickier: we have the victim's age in months. I converted it into years and broke it down into 10-year age ranges so we can better examine the data.
data.GenderDescription.value_counts() data['age'] = map(lambda x: x/12, data['VictimAgeInMonths']) labels = ['under 10', '10-20', '20-30', '30-40', '40-50', '50-60', '60-70','70-80', '80-90', '90-100', 'over 100'] data['age_range'] = pd.cut(data['age'], bins=np.arange(0,120,10), labels=labels) data['age_range'][data['age'] > 100] = 'over 100' counts = data['age_range'].value_counts() counts.sort() counts
reports/api data.ipynb
minh5/cpsc
mit
However, after doing this, we still have around 13,000 people with an age of zero. Whether they did not fill in the age or the incidents actually involve infants is still unknown, but comparing the distribution of products affecting people with an age of 0 against the overall dataset, it appears that null values in the age range represent people who did not fill out an age when reporting.
#Top products affect by people with 0 age data.ix[data['age_range'].isnull(), 'ProductCategoryPublicName'].value_counts()[0:9] #top products that affect people overall data.ProductCategoryPublicName.value_counts()[0:9]
reports/api data.ipynb
minh5/cpsc
mit
Question 2.2 At first glance, we can look at the products that were reported, like below, and see that Electric Ranges or Ovens is at the top in terms of harm. However, there are levels of severity within the API that need to be filtered before we can assess which products cause the most harm.
#overall products listed data.ProductCategoryPublicName.value_counts()[0:9] #removing minor harm incidents no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known', 'No Incident, No Injury', 'No First Aid or Medical Attention Received'] damage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :] damage.ProductCategoryPublicName.value_counts()[0:9]
reports/api data.ipynb
minh5/cpsc
mit
This shows that among incidents where there were actual injuries and medical attention was given, the top category was footwear, which was odd. To explore this, I created a Word2Vec model that maps out how certain words relate to each other. To train the model, I used the comments that were made through the API. This trains a model and helps us identify words that are similar. For instance, if you type in foot, you will get left and right, as these words are most closely related to the word foot. After some digging around, I found that the word "walking" was associated with "painful". I have some reason to believe that there are orthopedic injuries associated with shoes: people have been experiencing pain while walking in Sketchers that were supposed to tone up their bodies, with some instability or balance issues.
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/footwear') model.most_similar('walking') model.most_similar('injury') model.most_similar('instability')
reports/api data.ipynb
minh5/cpsc
mit
Question 2.3
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/severity') items_dict = {} for word, vocab_obj in model.vocab.items(): items_dict[word] = vocab_obj.count sorted_dict = sorted(items_dict.items(), key=operator.itemgetter(1)) sorted_dict.reverse() sorted_dict[0:5]
reports/api data.ipynb
minh5/cpsc
mit
Let us define two vector variables (a regular sequence and a random one) and print them.
x, y = np.arange(10), np.random.rand(10) print(x, y, sep='\n')
exercises.ipynb
okartal/popgen-systemsX
cc0-1.0
The following command plots $y$ as a function of $x$ and labels the axes using $\LaTeX$.
plt.plot(x, y, linestyle='--', color='r', linewidth=2) plt.xlabel('time, $t$') plt.ylabel('frequency, $f$')
exercises.ipynb
okartal/popgen-systemsX
cc0-1.0
From the tutorial: "matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB." Comment: The tutorial is a good starting point to learn about the most basic functionalities of matplotlib, especially if you are familiar with MATLAB. Matplotlib is a powerful library but sometimes too complicated for making statistical plots à la R. However, there are other libraries that, in part, are built on matplotlib and provide more convenient functionality for statistical use cases, especially in conjunction with the data structures that the library pandas provides (see pandas, seaborn, ggplot and many more). Hardy-Weinberg Equilibrium These exercises should make you comfortable with the fundamental notions of population genetics: allele and genotype frequencies, homo- and heterozygosity, and inbreeding. We will use data from a classical paper on enzyme polymorphisms at the alkaline phosphatase (ALP) locus in humans (Harris 1966). In this case, the alleles have been defined in terms of protein properties. Harris could electrophoretically distinguish three proteins by their migration speed and called them S (slow), F (fast), and I (intermediate). We use a Python dictionary to store the observed numbers of genotypes at the ALP locus in a sample from the English people.
alp_genotype = {'obs_number': {'SS': 141, 'SF': 111, 'FF': 28, 'SI': 32, 'FI': 15, 'II': 5} }
exercises.ipynb
okartal/popgen-systemsX
cc0-1.0
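As a warm-up for the exercises, allele frequencies and Hardy-Weinberg expected genotype counts can be computed directly from this dictionary; a minimal sketch (the arithmetic follows the standard gene-counting method, and the printed values are approximate):

obs = alp_genotype['obs_number']
n = sum(obs.values())  # number of individuals sampled

# Gene counting: each individual carries two alleles
alleles = ['S', 'F', 'I']
freq = {a: sum(count * genotype.count(a) for genotype, count in obs.items()) / (2 * n)
        for a in alleles}
print(freq)  # roughly S: 0.64, F: 0.27, I: 0.09

# Expected genotype counts under Hardy-Weinberg equilibrium
expected = {gt: (freq[gt[0]] ** 2 if gt[0] == gt[1] else 2 * freq[gt[0]] * freq[gt[1]]) * n
            for gt in obs}
print(expected)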
Generating samples from a single stoichiometry
stoich = stoichiometry.Stoichiometry({'C': 10, 'O': 2, 'N': 3, 'H': 16, 'F': 2, 'O-': 1, 'N+': 1}) assert stoichiometry.is_valid(stoich), 'Cannot form a connected graph with this stoichiometry.' print('Number of heavy atoms:', sum(stoich.counts.values()) - stoich.counts['H']) %%time sampler = molecule_sampler.MoleculeSampler(stoich, relative_precision=0.03, rng_seed=2044365744) weighted_samples = [graph for graph in sampler] stats = sampler.stats() rejector = molecule_sampler.RejectToUniform(weighted_samples, max_importance=stats['max_final_importance'], rng_seed=265580748) uniform_samples = [graph for graph in rejector] print(f'generated {len(weighted_samples)}, kept {len(uniform_samples)}, ' f'estimated total: {stats["estimated_num_graphs"]:.2E} ± ' f'{stats["num_graphs_std_err"]:.2E}') #@title Draw some examples mols = [molecule_sampler.to_mol(g) for g in uniform_samples] Chem.Draw.MolsToGridImage(mols[:8], molsPerRow=4, subImgSize=(200, 140))
graph_sampler/molecule_sampling_demo.ipynb
google-research/google-research
apache-2.0
Combining samples from multiple stoichiometries Here we'll generate random molecules with 5 heavy atoms selected from C, N, and O. These small numbers are chosen just to illustrate the code. In this small an example, you could just enumerate all molecules. For large numbers of heavy atoms selected from a large set, you'd want to parallelize a lot of this.
#@title Enumerate valid stoichiometries subject to the given constraint heavy_elements = ['C', 'N', 'O'] num_heavy = 5 # We'll dump stoichiometries, samples, and statistics into a big dictionary. all_data = {} for stoich in stoichiometry.enumerate_stoichiometries(num_heavy, heavy_elements): key = ''.join(stoich.to_element_list()) all_data[key] = {'stoich': stoich} max_key_size = max(len(k) for k in all_data.keys()) print(f'{len(all_data)} stoichiometries') #@title For each stoichiometry, generate samples and estimate the number of molecules for key, data in all_data.items(): sampler = molecule_sampler.MoleculeSampler(data['stoich'], relative_precision=0.2) data['weighted_samples'] = [graph for graph in sampler] stats = sampler.stats() data['stats'] = stats rejector = molecule_sampler.RejectToUniform(data['weighted_samples'], max_importance=stats['max_final_importance']) data['uniform_samples'] = [graph for graph in rejector] print(f'{key:>{max_key_size}}:\tgenerated {len(data["weighted_samples"])},\t' f'kept {len(data["uniform_samples"])},\t' f'estimated total {int(stats["estimated_num_graphs"])} ± {int(stats["num_graphs_std_err"])}') #@title Combine into one big uniform sampling of the whole space bucket_sizes = [data['stats']['estimated_num_graphs'] for data in all_data.values()] sample_sizes = [len(data['uniform_samples']) for data in all_data.values()] base_iters = [data['uniform_samples'] for data in all_data.values()] aggregator = molecule_sampler.AggregateUniformSamples(bucket_sizes, sample_sizes, base_iters) merged_uniform_samples = [graph for graph in aggregator] total_estimate = sum(data['stats']['estimated_num_graphs'] for data in all_data.values()) total_variance = sum(data['stats']['num_graphs_std_err']**2 for data in all_data.values()) total_std = np.sqrt(total_variance) print(f'{len(merged_uniform_samples)} samples after merging, of an estimated ' f'{total_estimate:.1f} ± {total_std:.1f}') #@title Draw some examples mols = [molecule_sampler.to_mol(g) for g in merged_uniform_samples] Chem.Draw.MolsToGridImage(np.random.choice(mols, size=16), molsPerRow=4, subImgSize=(200, 140))
graph_sampler/molecule_sampling_demo.ipynb
google-research/google-research
apache-2.0
oracledb integration with Pandas
import pandas as pd # query Oracle using ora_conn and put the result into a pandas Dataframe df_ora = pd.read_sql('select * from emp', con=ora_conn) df_ora
Oracle_Jupyter/Oracle_Jupyter_oracledb_pandas.ipynb
LucaCanali/Miscellaneous
apache-2.0
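The ora_conn connection object is created earlier in the original notebook and is not shown in this section. For completeness, a sketch of how it is typically opened with python-oracledb; the user, password, and DSN below are placeholders, not values from the source.

import oracledb

# Placeholder credentials and DSN - replace with your own
ora_conn = oracledb.connect(user="scott", password="tiger", dsn="dbserver:1521/orclpdb1")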
Use of bind variables
df_ora = pd.read_sql('select * from emp where empno=:myempno', params={"myempno":7839}, con=ora_conn) df_ora
Oracle_Jupyter/Oracle_Jupyter_oracledb_pandas.ipynb
LucaCanali/Miscellaneous
apache-2.0
Basic visualization
import matplotlib.pyplot as plt plt.style.use('seaborn-darkgrid') df_ora = pd.read_sql('select ename "Name", sal "Salary" from emp', con=ora_conn) ora_conn.close() df_ora.plot(x='Name', y='Salary', title='Salary details, from Oracle demo table', figsize=(10, 6), kind='bar', color='blue');
Oracle_Jupyter/Oracle_Jupyter_oracledb_pandas.ipynb
LucaCanali/Miscellaneous
apache-2.0
A few Qumulo API file and directory Python bindings fs.create_directory arguments: - name: Name of the directory to be created - dir_path*: Destination path for the parent of the created directory - dir_id*: Destination inode id for the parent of the created directory *Either dir_path or dir_id is required fs.create_file arguments: - name: Name of the file to be created - dir_path: Destination path for the directory of the created file - dir_id: Destination inode id for the directory of the created file fs.write_file arguments: - data_file: A Python file-like object with the local file's content - path: Destination file path on Qumulo - id_: Destination inode file id on Qumulo - if_match: fs.get_attr arguments: - path: - id_: - snapshot: Create a working directory for this exercise
base_path = '/' dir_name = 'test-qumulo-fs-data' try: the_dir_meta = rc.fs.create_directory(dir_path=base_path, name=dir_name) print("Successfully created %s%s." % (base_path, dir_name)) except RequestError as e: print("** Exception: %s - Details: %s\n" % (e.error_class,e)) if e.error_class == 'fs_entry_exists_error': the_dir_meta = rc.fs.get_attr(base_path + dir_name) for k, v in the_dir_meta.iteritems(): if re.search('(id|size|path|change_time)', k): print("%19s - %s" % (k, v))
notebooks/File and Data Management.ipynb
Qumulo/python-notebooks
gpl-3.0
Create a file in an existing path
file_name = 'first-file.txt' # relies on the base path and direcotry name created in the code above. try: the_file_meta = rc.fs.create_file(name=file_name, dir_path=base_path + dir_name) except RequestError as e: print("** Exception: %s - Details: %s\n" % (e.error_class,e)) if e.error_class == 'fs_entry_exists_error': the_file_meta = rc.fs.get_attr(base_path + dir_name + '/' + file_name) print("We've got a file. Its id is: %s" % the_file_meta['id']) # writing a local file from /tmp/ to the qumulo cluster fw = open("/tmp/local-file-from-temp.txt", "w") fw.write("Let's write 100 sentences on this virtual chalkboard\n" * 100) fw.close() write_file_meta = rc.fs.write_file(data_file=open("/tmp/local-file-from-temp.txt"), path=base_path + dir_name + '/' + file_name) print("""name: %(path)s bytes: %(size)s mod time: %(modification_time)s""" % write_file_meta) string_io_file_name = 'write-from-string-io.txt' try: rc.fs.create_file(name=string_io_file_name, dir_path=base_path + dir_name) except RequestError as e: print("Exception: %s - Details: %s\n" % (e.error_class,e)) fw = StringIO.StringIO() fw.write("Let's write 200 sentences on this virtual chalkboard\n" * 200) write_file_meta = rc.fs.write_file(data_file=fw, path=base_path + dir_name + '/' + string_io_file_name) fw.close() print("""name: %(path)s bytes: %(size)s mod time: %(modification_time)s""" % write_file_meta)
notebooks/File and Data Management.ipynb
Qumulo/python-notebooks
gpl-3.0
1. Implement the K-means algorithm In this step you will implement, one by one, the functions that make up the K-means algorithm. It is important to read and understand the documentation of each function, especially the dimensions of the data expected in the output. 1.1 Initialize the centroids The first step of the algorithm is to initialize the centroids randomly. This step is one of the most important in the algorithm, and a good initialization can greatly reduce the convergence time. To initialize the centroids you can use prior knowledge about the data, even without knowing the number of groups or their distribution. Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html
def calculate_initial_centers(dataset, k): """ Inicializa os centróides iniciais de maneira arbitrária Argumentos: dataset -- Conjunto de dados - [m,n] k -- Número de centróides desejados Retornos: centroids -- Lista com os centróides calculados - [k,n] """ #### CODE HERE #### m = dataset.shape[0] centroids = list(dataset[np.random.randint(0, m - 1, 1)]) for it1 in range(k - 1): max_dist = -1 for it2 in range(m): nrst_cent_dist = sys.float_info.max for it3 in range(len(centroids)): dist = np.linalg.norm(dataset[it2] - centroids[it3]) # Get the distance to the nearest centroid if (dist < nrst_cent_dist): nrst_cent_dist = dist nrst_cent = dataset[it2] if (nrst_cent_dist > max_dist): max_dist = nrst_cent_dist new_cent = nrst_cent centroids.append(new_cent) centroids = np.array(centroids) ### END OF CODE ### return centroids
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
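The implementation above uses a farthest-point heuristic. The hint in the exercise points at numpy.random.uniform; a simpler sketch that draws k centroids uniformly inside the bounding box of the data (an alternative illustration, not the notebook's chosen approach):

import numpy as np

def random_uniform_centers(dataset, k, seed=0):
    """Draw k centroids uniformly within the bounding box of the data - [k, n]."""
    rng = np.random.default_rng(seed)
    low, high = dataset.min(axis=0), dataset.max(axis=0)
    return rng.uniform(low, high, size=(k, dataset.shape[1]))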
Test the function you created and visualize the computed centroids.
k = 3 centroids = calculate_initial_centers(dataset, k) plt.scatter(dataset[:,0], dataset[:,1], s=10) plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red',s=100) plt.show()
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
1.2 Define the clusters In the second step of the algorithm, the cluster of each data point is determined according to the computed centroids. 1.2.1 Distance function Code the Euclidean distance function between two points (a, b), defined by the equation: $$ dist(a, b) = \sqrt{(a_1-b_1)^{2}+(a_2-b_2)^{2}+ ... + (a_n-b_n)^{2}} $$ $$ dist(a, b) = \sqrt{\sum_{i=1}^{n}(a_i-b_i)^{2}} $$
def euclidean_distance(a, b): """ Calcula a distância euclidiana entre os pontos a e b Argumentos: a -- Um ponto no espaço - [1,n] b -- Um ponto no espaço - [1,n] Retornos: distance -- Distância euclidiana entre os pontos """ #### CODE HERE #### n = len(a) distance = 0 for i in range(n): distance = distance + (a[i] - b[i])**2 distance = distance**0.5 ### END OF CODE ### return distance
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
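An equivalent, vectorized form of the same distance using NumPy, useful as a cross-check of the loop above:

import numpy as np

a, b = np.array([1, 5, 9]), np.array([3, 7, 8])
print(np.linalg.norm(a - b))          # 3.0
print(np.sqrt(np.sum((a - b) ** 2)))  # same result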
Test the function you created.
a = np.array([1, 5, 9]) b = np.array([3, 7, 8]) if (euclidean_distance(a,b) == 3): print("Distância calculada corretamente!") else: print("Função de distância incorreta")
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
1.2.2 Compute the nearest centroid Using the distance function coded previously, complete the function below to find the centroid nearest to an arbitrary point. Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmin.html
def nearest_centroid(a, centroids): """ Calcula o índice do centroid mais próximo ao ponto a Argumentos: a -- Um ponto no espaço - [1,n] centroids -- Lista com os centróides - [k,n] Retornos: nearest_index -- Índice do centróide mais próximo """ #### CODE HERE #### # Check if centroids has two dimensions and, if not, convert to if len(centroids.shape) == 1: centroids = np.array([centroids]) nrst_cent_dist = sys.float_info.max for j in range(len(centroids)): dist = euclidean_distance(a, centroids[j]) if (dist < nrst_cent_dist): nrst_cent_dist = dist nearest_index = j ### END OF CODE ### return nearest_index
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
Test the function you created
# Seleciona um ponto aleatório no dataset index = np.random.randint(dataset.shape[0]) a = dataset[index,:] # Usa a função para descobrir o centroid mais próximo idx_nearest_centroid = nearest_centroid(a, centroids) # Plota os dados ------------------------------------------------ plt.scatter(dataset[:,0], dataset[:,1], s=10) # Plota o ponto aleatório escolhido em uma cor diferente plt.scatter(a[0], a[1], c='magenta', s=30) # Plota os centroids plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100) # Plota o centroid mais próximo com uma cor diferente plt.scatter(centroids[idx_nearest_centroid,0], centroids[idx_nearest_centroid,1], marker='^', c='springgreen', s=100) # Cria uma linha do ponto escolhido para o centroid selecionado plt.plot([a[0], centroids[idx_nearest_centroid,0]], [a[1], centroids[idx_nearest_centroid,1]],c='orange') plt.annotate('CENTROID', (centroids[idx_nearest_centroid,0], centroids[idx_nearest_centroid,1],)) plt.show()
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
1.2.3 Compute the nearest centroid for every point in the dataset Using the previous function, which returns the index of the nearest centroid, compute the nearest centroid for each point in the dataset.
def all_nearest_centroids(dataset, centroids): """ Calcula o índice do centroid mais próximo para cada ponto do dataset Argumentos: dataset -- Conjunto de dados - [m,n] centroids -- Lista com os centróides - [k,n] Retornos: nearest_indexes -- Índices do centróides mais próximos - [m,1] """ #### CODE HERE #### # Check if centroids has two dimensions and, if not, convert to if len(centroids.shape) == 1: centroids = np.array([centroids]) nearest_indexes = np.zeros(len(dataset)) for i in range(len(dataset)): nearest_indexes[i] = nearest_centroid(dataset[i], centroids) ### END OF CODE ### return nearest_indexes
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
Test the function you created by visualizing the resulting clusters.
nearest_indexes = all_nearest_centroids(dataset, centroids) plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes) plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100) plt.show()
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
1.3 Evaluation metric After forming the clusters, how do we know whether the result is good? For that, we need to define an evaluation metric. The K-means algorithm aims to choose centroids that minimize the sum of squared distances between the points of a cluster and its centroid. This metric is known as inertia. $$\sum_{i=0}^{n}\min_{c_j \in C}(||x_i - c_j||^2)$$ Inertia, or the within-cluster sum-of-squares criterion, can be seen as a measure of how internally coherent the clusters are, but it suffers from some drawbacks: Inertia assumes that clusters are convex and isotropic, which is not always the case, so it may not represent elongated clusters or manifolds with irregular shapes well. Inertia is not a normalized metric: we only know that lower values are better and zero is optimal. In very high-dimensional spaces, Euclidean distances tend to become inflated (an example of the so-called "curse of dimensionality"); running a dimensionality-reduction algorithm such as PCA can alleviate this problem and speed up the computations. Source: https://scikit-learn.org/stable/modules/clustering.html To be able to evaluate our clusters, code the inertia metric below; you can use the Euclidean distance function built earlier. $$inertia = \sum_{i=0}^{n}\min_{c_j \in C} (dist(x_i, c_j))^2$$
def inertia(dataset, centroids, nearest_indexes): """ Soma das distâncias quadradas das amostras para o centro do cluster mais próximo. Argumentos: dataset -- Conjunto de dados - [m,n] centroids -- Lista com os centróides - [k,n] nearest_indexes -- Índices do centróides mais próximos - [m,1] Retornos: inertia -- Soma total do quadrado da distância entre os dados de um cluster e seu centróide """ #### CODE HERE #### # Check if centroids has two dimensions and, if not, convert to if len(centroids.shape) == 1: centroids = np.array([centroids]) inertia = 0 for i in range(len(dataset)): inertia = inertia + euclidean_distance(dataset[i], centroids[int(nearest_indexes[i])])**2 ### END OF CODE ### return inertia
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
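A vectorized cross-check of the same inertia computation, assuming the dataset, centroids, and nearest_indexes arrays defined earlier in the notebook:

import numpy as np

# Squared distance of each point to its assigned centroid, summed
diffs = dataset - centroids[nearest_indexes.astype(int)]
print(np.sum(diffs ** 2))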
Test the coded function by running the code below.
tmp_data = np.array([[1,2,3],[3,6,5],[4,5,6]]) tmp_centroide = np.array([[2,3,4]]) tmp_nearest_indexes = all_nearest_centroids(tmp_data, tmp_centroide) if inertia(tmp_data, tmp_centroide, tmp_nearest_indexes) == 26: print("Inertia calculada corretamente!") else: print("Função de inertia incorreta!") # Use a função para verificar a inertia dos seus clusters inertia(dataset, centroids, nearest_indexes)
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
1.4 Update the clusters In this step, the centroids are recomputed. The new value of each centroid is the mean of all the points assigned to that cluster.
def update_centroids(dataset, centroids, nearest_indexes): """ Atualiza os centroids Argumentos: dataset -- Conjunto de dados - [m,n] centroids -- Lista com os centróides - [k,n] nearest_indexes -- Índices do centróides mais próximos - [m,1] Retornos: centroids -- Lista com centróides atualizados - [k,n] """ #### CODE HERE #### # Check if centroids has two dimensions and, if not, convert to if len(centroids.shape) == 1: centroids = np.array([centroids]) sum_data_inCentroids = np.zeros((len(centroids), len(centroids[0]))) num_data_inCentroids = np.zeros(len(centroids)) for i in range(len(dataset)): cent_idx = int(nearest_indexes[i]) sum_data_inCentroids[cent_idx] += dataset[i] num_data_inCentroids[cent_idx] += 1 for i in range(len(centroids)): centroids[i] = sum_data_inCentroids[i]/num_data_inCentroids[i] ### END OF CODE ### return centroids
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
Visualize the resulting clusters
nearest_indexes = all_nearest_centroids(dataset, centroids) # Plota os os cluster ------------------------------------------------ plt.scatter(dataset[:,0], dataset[:,1], c=nearest_indexes) # Plota os centroids plt.scatter(centroids[:,0], centroids[:,1], marker='^', c='red', s=100) for index, centroid in enumerate(centroids): dataframe = dataset[nearest_indexes == index,:] for data in dataframe: plt.plot([centroid[0], data[0]], [centroid[1], data[1]], c='lightgray', alpha=0.3) plt.show()
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
Run the update function and visualize the resulting clusters again
centroids = update_centroids(dataset, centroids, nearest_indexes)
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
2. K-means 2.1 Complete algorithm Using the functions coded previously, complete the K-means algorithm class!
class KMeans(): def __init__(self, n_clusters=8, max_iter=300): self.n_clusters = n_clusters self.max_iter = max_iter def fit(self,X): # Inicializa os centróides self.cluster_centers_ = calculate_initial_centers(X, self.n_clusters) # Computa o cluster de cada amostra self.labels_ = all_nearest_centroids(X, self.cluster_centers_) # Calcula a inércia inicial old_inertia = inertia(X, self.cluster_centers_, self.labels_) self.inertia_ = old_inertia for index in range(self.max_iter): #### CODE HERE #### self.cluster_centers_ = update_centroids(X, self.cluster_centers_, self.labels_) self.labels_ = all_nearest_centroids(X, self.cluster_centers_) self.inertia_ = inertia(X, self.cluster_centers_, self.labels_) if (self.inertia_ == old_inertia): break else: old_inertia = self.inertia_ ### END OF CODE ### return self def predict(self, X): return all_nearest_centroids(X, self.cluster_centers_)
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
Check the result of the algorithm below!
kmeans = KMeans(n_clusters=3) kmeans.fit(dataset) print("Inércia = ", kmeans.inertia_) plt.scatter(dataset[:,0], dataset[:,1], c=kmeans.labels_) plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], marker='^', c='red', s=100) plt.show()
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
2.2 Compare with the Scikit-Learn algorithm Use scikit-learn's implementation of K-means on the same dataset. Show the inertia value and the clusters generated by the model. You can use the same structure as the previous code cell. Hint: https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans
#### CODE HERE #### from sklearn.cluster import KMeans as sk_KMeans skkmeans = sk_KMeans(n_clusters=3).fit(dataset) print("Scikit-Learn KMeans' inertia: ", skkmeans.inertia_) print("My KMeans inertia: ", kmeans.inertia_)
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
3. Elbow method Implement the elbow method and show the best K for the dataset.
#### CODE HERE #### # Initialize array of Ks ks = np.array(range(1, 11)) # Create array to receive the inertias for each K inertias = np.zeros(len(ks)) for i in range(len(ks)): # Compute inertia for K kmeans = KMeans(ks[i]).fit(dataset) inertias[i] = kmeans.inertia_ # Best K is the last one to improve the inertia in 30% if (i > 0 and (inertias[i - 1] - inertias[i])/inertias[i] > 0.3): best_k_idx = i print("Best K: {}\n".format(ks[best_k_idx])) plt.plot(ks, inertias, marker='o') plt.plot(ks[best_k_idx], inertias[best_k_idx], 'ro')
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
4. Real dataset Exercises 1 - Apply the K-means algorithm you developed to the iris dataset [1]. Show the results obtained using at least two cluster evaluation metrics [2]. [1] http://archive.ics.uci.edu/ml/datasets/iris [2] http://scikit-learn.org/stable/modules/clustering.html#clustering-evaluation Hint: you can use the completeness and homogeneity metrics. 2 - Try to improve the result obtained in the previous question using a data-mining technique. Explain the difference obtained. Hint: you can try normalizing the data [3]. - [3] https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html 3 - What number of clusters (K) did you choose in the previous question? Implement the elbow method without using a library and find the most suitable value of K. Then use the value obtained in the K-means algorithm. 4 - Using the results of the previous question, recompute the metrics and comment on the results. Was there an improvement? Explain.
#### CODE HERE ####
2019/09-clustering/cl_AlefCarneiro.ipynb
InsightLab/data-science-cookbook
mit
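A minimal sketch of exercise 1, assuming the KMeans class defined above is in scope; homogeneity and completeness come from scikit-learn, and the normalization line touches on exercise 2.

from sklearn import datasets
from sklearn.preprocessing import normalize
from sklearn.metrics import homogeneity_score, completeness_score

iris = datasets.load_iris()
X = normalize(iris.data)  # exercise 2: compare results with and without normalization

model = KMeans(n_clusters=3).fit(X)  # the KMeans class implemented above
print("homogeneity :", homogeneity_score(iris.target, model.labels_))
print("completeness:", completeness_score(iris.target, model.labels_))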
Import File into Python Change File Name!
df = pd.read_csv('/Users/John/Dropbox/LLU/ROP/Pulse Ox/ROP018PO.csv', parse_dates={'timestamp': ['Date','Time']}, index_col='timestamp', usecols=['Date', 'Time', 'SpO2', 'PR', 'PI', 'Exceptions'], na_values=['0'], converters={'Exceptions': readD} ) #parse_dates tells the read_csv function to combine the date and time column #into one timestamp column and parse it as a timestamp. # pandas is smart enough to know how to parse a date in various formats #index_col sets the timestamp column to be the index. #usecols tells the read_csv function to select only the subset of the columns. #na_values is used to turn 0 into NaN #converters: readD is the dict that means any string with 'C' with be NaN (for PI) #dfclean = df[27:33][df[27:33].loc[:, ['SpO2', 'PR', 'PI', 'Exceptions']].apply(pd.notnull).all(1)] #clean the dataframe to get rid of rows that have NaN for PI purposes df_clean = df[df.loc[:, ['PI', 'Exceptions']].apply(pd.notnull).all(1)] """Pulse ox date/time is 1 mins and 32 seconds faster than phone. Have to correct for it.""" TC = timedelta(minutes=1, seconds=32)
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Set Date and Time of ROP Exam and Eye Drops
df_first = df.first_valid_index() #get the first number from index Y = pd.to_datetime(df_first) #convert index to datetime # Y = TIME DATA COLLECTION BEGAN / First data point on CSV # SYNTAX: # datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]]) W = datetime(2016, 1, 20, 7, 30)+TC # W = first eye drop dtarts X = datetime(2016, 1, 20, 8, 42)+TC # X = ROP Exam Started Z = datetime(2016, 1, 20, 8, 46)+TC # Z = ROP Exam Ended df_last = df.last_valid_index() #get the last number from index Q = pd.to_datetime(df_last) # Q = TIME DATA COLLECTION ENDED / Last Data point on CSV
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Baseline Averages
avg0PI = df_clean.PI[Y:W].mean() avg0O2 = df.SpO2[Y:W].mean() avg0PR = df.PR[Y:W].mean() print 'Baseline Averages\n', 'PI :\t',avg0PI, '\nSpO2 :\t',avg0O2,'\nPR :\t',avg0PR, #df.std() for standard deviation
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Average q 5 Min for 1 hour after 1st Eye Drops
# Every 5 min Average from start of eye drops to start of exam def perdeltadrop(start, end, delta): rdrop = [] curr = start while curr < end: rdrop.append(curr) curr += delta return rdrop dfdropPI = df_clean.PI[W:W+timedelta(hours=1)] dfdropO2 = df.SpO2[W:W+timedelta(hours=1)] dfdropPR = df.PR[W:W+timedelta(hours=1)] windrop = timedelta(minutes=5)#make the range rdrop = perdeltadrop(W, W+timedelta(minutes=15), windrop) avgdropPI = Series(index = rdrop, name = 'PI DurEyeD') avgdropO2 = Series(index = rdrop, name = 'SpO2 DurEyeD') avgdropPR = Series(index = rdrop, name = 'PR DurEyeD') for i in rdrop: avgdropPI[i] = dfdropPI[i:(i+windrop)].mean() avgdropO2[i] = dfdropO2[i:(i+windrop)].mean() avgdropPR[i] = dfdropPR[i:(i+windrop)].mean() resultdrops = concat([avgdropPI, avgdropO2, avgdropPR], axis=1, join='inner') print resultdrops
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
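A pandas-native alternative to the perdelta loops: the same 5-minute averages over the hour after the first eye drop can be produced with resample on the time-indexed frames. This sketch assumes the df, df_clean, W, and timedelta variables defined above, and that the pandas version supports the resample().mean() form.

# 5-minute means over the hour after the first eye drop
print(df.loc[W:W + timedelta(hours=1), ['SpO2', 'PR']].resample('5T').mean())
print(df_clean.loc[W:W + timedelta(hours=1), 'PI'].resample('5T').mean())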
Average Every 10 Sec During ROP Exam for first 4 minutes
#AVERAGE DURING ROP EXAM FOR FIRST FOUR MINUTES def perdelta1(start, end, delta): r1 = [] curr = start while curr < end: r1.append(curr) curr += delta return r1 df1PI = df_clean.PI[X:X+timedelta(minutes=4)] df1O2 = df.SpO2[X:X+timedelta(minutes=4)] df1PR = df.PR[X:X+timedelta(minutes=4)] win1 = timedelta(seconds=10) #any unit of time & make the range r1 = perdelta1(X, X+timedelta(minutes=4), win1) #make the series to store avg1PI = Series(index = r1, name = 'PI DurEx') avg1O2 = Series(index = r1, name = 'SpO2 DurEx') avg1PR = Series(index = r1, name = 'PR DurEX') #average! for i1 in r1: avg1PI[i1] = df1PI[i1:(i1+win1)].mean() avg1O2[i1] = df1O2[i1:(i1+win1)].mean() avg1PR[i1] = df1PR[i1:(i1+win1)].mean() result1 = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner') print result1
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Average Every 5 Mins Hour 1-2 After ROP Exam
#AVERAGE EVERY 5 MINUTES ONE HOUR AFTER ROP EXAM def perdelta2(start, end, delta): r2 = [] curr = start while curr < end: r2.append(curr) curr += delta return r2 # datetime(year, month, day, hour, etc.) df2PI = df_clean.PI[Z:(Z+timedelta(hours=1))] df2O2 = df.SpO2[Z:(Z+timedelta(hours=1))] df2PR = df.PR[Z:(Z+timedelta(hours=1))] win2 = timedelta(minutes=5) #any unit of time, make the range r2 = perdelta2(Z, (Z+timedelta(hours=1)), win2) #define the average using function #make the series to store avg2PI = Series(index = r2, name = 'PI q5MinHr1') avg2O2 = Series(index = r2, name = 'O2 q5MinHr1') avg2PR = Series(index = r2, name = 'PR q5MinHr1') #average! for i2 in r2: avg2PI[i2] = df2PI[i2:(i2+win2)].mean() avg2O2[i2] = df2O2[i2:(i2+win2)].mean() avg2PR[i2] = df2PR[i2:(i2+win2)].mean() result2 = concat([avg2PI, avg2O2, avg2PR], axis=1, join='inner') print result2
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Average Every 15 Mins Hour 2-3 After ROP Exam
#AVERAGE EVERY 15 MINUTES TWO HOURS AFTER ROP EXAM def perdelta3(start, end, delta): r3 = [] curr = start while curr < end: r3.append(curr) curr += delta return r3 # datetime(year, month, day, hour, etc.) df3PI = df_clean.PI[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))] df3O2 = df.SpO2[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))] df3PR = df.PR[(Z+timedelta(hours=1)):(Z+timedelta(hours=2))] win3 = timedelta(minutes=15) #any unit of time, make the range r3 = perdelta3((Z+timedelta(hours=1)), (Z+timedelta(hours=2)), win3) #make the series to store avg3PI = Series(index = r3, name = 'PI q15MinHr2') avg3O2 = Series(index = r3, name = 'O2 q15MinHr2') avg3PR = Series(index = r3, name = 'PR q15MinHr2') #average! for i3 in r3: avg3PI[i3] = df3PI[i3:(i3+win3)].mean() avg3O2[i3] = df3O2[i3:(i3+win3)].mean() avg3PR[i3] = df3PR[i3:(i3+win3)].mean() result3 = concat([avg3PI, avg3O2, avg3PR], axis=1, join='inner') print result3
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Average Every 30 Mins Hour 3-4 After ROP Exam
#AVERAGE EVERY 30 MINUTES THREE HOURS AFTER ROP EXAM def perdelta4(start, end, delta): r4 = [] curr = start while curr < end: r4.append(curr) curr += delta return r4 # datetime(year, month, day, hour, etc.) df4PI = df_clean.PI[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))] df4O2 = df.SpO2[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))] df4PR = df.PR[(Z+timedelta(hours=2)):(Z+timedelta(hours=3))] win4 = timedelta(minutes=30) #any unit of time, make the range r4 = perdelta4((Z+timedelta(hours=2)), (Z+timedelta(hours=3)), win4) #make the series to store avg4PI = Series(index = r4, name = 'PI q30MinHr3') avg4O2 = Series(index = r4, name = 'O2 q30MinHr3') avg4PR = Series(index = r4, name = 'PR q30MinHr3') #average! for i4 in r4: avg4PI[i4] = df4PI[i4:(i4+win4)].mean() avg4O2[i4] = df4O2[i4:(i4+win4)].mean() avg4PR[i4] = df4PR[i4:(i4+win4)].mean() result4 = concat([avg4PI, avg4O2, avg4PR], axis=1, join='inner') print result4
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Average Every Hour 4-24 Hours Post ROP Exam
#AVERAGE EVERY 60 MINUTES 4-24 HOURS AFTER ROP EXAM def perdelta5(start, end, delta): r5 = [] curr = start while curr < end: r5.append(curr) curr += delta return r5 # datetime(year, month, day, hour, etc.) df5PI = df_clean.PI[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))] df5O2 = df.SpO2[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))] df5PR = df.PR[(Z+timedelta(hours=3)):(Z+timedelta(hours=24))] win5 = timedelta(minutes=60) #any unit of time, make the range r5 = perdelta5((Z+timedelta(hours=3)), (Z+timedelta(hours=24)), win5) #make the series to store avg5PI = Series(index = r5, name = 'PI q60MinHr4+') avg5O2 = Series(index = r5, name = 'O2 q60MinHr4+') avg5PR = Series(index = r5, name = 'PR q60MinHr4+') #average! for i5 in r5: avg5PI[i5] = df5PI[i5:(i5+win5)].mean() avg5O2[i5] = df5O2[i5:(i5+win5)].mean() avg5PR[i5] = df5PR[i5:(i5+win5)].mean() result5 = concat([avg5PI, avg5O2, avg5PR], axis=1, join='inner') print result5
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Mild, Moderate, and Severe Desaturation Events
df_O2_pre = df[Y:W] #Find count of these ranges below = 0 # v <=80 middle = 0 #v >= 81 and v<=84 above = 0 #v >=85 and v<=89 ls = [] b_dict = {} m_dict = {} a_dict = {} for i, v in df_O2_pre['SpO2'].iteritems(): if v <= 80: #below block if not ls: ls.append(v) else: if ls[0] >= 81: #if the range before was not below 80 if len(ls) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2 if ls[0] <= 84: #was it in the middle range? m_dict[middle] = ls middle += 1 ls = [v] elif ls[0] >= 85 and ls[0] <=89: #was it in the above range? a_dict[above] = ls above += 1 ls = [v] else: #old list wasn't long enough to count ls = [v] else: #if in the same range ls.append(v) elif v >= 81 and v<= 84: #middle block if not ls: ls.append(v) else: if ls[0] <= 80 or (ls[0]>=85 and ls[0]<= 89): #if not in the middle range if len(ls) >= 5: #if range was greater than 10 seconds if ls[0] <= 80: #was it in the below range? b_dict[below] = ls below += 1 ls = [v] elif ls[0] >= 85 and ls[0] <=89: #was it in the above range? a_dict[above] = ls above += 1 ls = [v] else: #old list wasn't long enough to count ls = [v] else: ls.append(v) elif v >= 85 and v <=89: #above block if not ls: ls.append(v) else: if ls[0] <=84 : #if not in the above range if len(ls) >= 5: #if range was greater than if ls[0] <= 80: #was it in the below range? b_dict[below] = ls below += 1 ls = [v] elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range? m_dict[middle] = ls middle += 1 ls = [v] else: #old list wasn't long enough to count ls = [v] else: ls.append(v) else: #v>90 or something else weird. start the list over ls = [] #final list check if len(ls) >= 5: if ls[0] <= 80: #was it in the below range? b_dict[below] = ls below += 1 ls = [v] elif ls[0] >= 81 and ls[0] <=84: #was it in the middle range? m_dict[middle] = ls middle += 1 ls = [v] elif ls[0] >= 85 and ls[0] <=89: #was it in the above range? a_dict[above] = ls above += 1 b_len = 0.0 for key, val in b_dict.iteritems(): b_len += len(val) m_len = 0.0 for key, val in m_dict.iteritems(): m_len += len(val) a_len = 0.0 for key, val in a_dict.iteritems(): a_len += len(val) #post exam duraiton length analysis df_O2_post = df[Z:Q] #Find count of these ranges below2 = 0 # v <=80 middle2= 0 #v >= 81 and v<=84 above2 = 0 #v >=85 and v<=89 ls2 = [] b_dict2 = {} m_dict2 = {} a_dict2 = {} for i2, v2 in df_O2_post['SpO2'].iteritems(): if v2 <= 80: #below block if not ls2: ls2.append(v2) else: if ls2[0] >= 81: #if the range before was not below 80 if len(ls2) >= 5: #if the range was greater than 10 seconds, set to 5 because data points are every 2 if ls2[0] <= 84: #was it in the middle range? m_dict2[middle2] = ls2 middle2 += 1 ls2 = [v2] elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range? a_dict2[above2] = ls2 above2 += 1 ls2 = [v2] else: #old list wasn't long enough to count ls2 = [v2] else: #if in the same range ls2.append(v2) elif v2 >= 81 and v2<= 84: #middle block if not ls2: ls2.append(v2) else: if ls2[0] <= 80 or (ls2[0]>=85 and ls2[0]<= 89): #if not in the middle range if len(ls2) >= 5: #if range was greater than 10 seconds if ls2[0] <= 80: #was it in the below range? b_dict2[below2] = ls2 below2 += 1 ls2 = [v2] elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range? 
a_dict2[above2] = ls2 above2 += 1 ls2 = [v2] else: #old list wasn't long enough to count ls2 = [v2] else: ls2.append(v2) elif v2 >= 85 and v2 <=89: #above block if not ls2: ls2.append(v2) else: if ls2[0] <=84 : #if not in the above range if len(ls2) >= 5: #if range was greater than if ls2[0] <= 80: #was it in the below range? b_dict2[below2] = ls2 below2 += 1 ls2 = [v2] elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range? m_dict2[middle2] = ls2 middle2 += 1 ls2 = [v2] else: #old list wasn't long enough to count ls2 = [v2] else: ls2.append(v2) else: #v2>90 or something else weird. start the list over ls2 = [] #final list check if len(ls2) >= 5: if ls2[0] <= 80: #was it in the below range? b_dict2[below2] = ls2 below2 += 1 ls2= [v2] elif ls2[0] >= 81 and ls2[0] <=84: #was it in the middle range? m_dict2[middle2] = ls2 middle2 += 1 ls2 = [v2] elif ls2[0] >= 85 and ls2[0] <=89: #was it in the above range? a_dict2[above2] = ls2 above2 += 1 b_len2 = 0.0 for key, val2 in b_dict2.iteritems(): b_len2 += len(val2) m_len2 = 0.0 for key, val2 in m_dict2.iteritems(): m_len2 += len(val2) a_len2 = 0.0 for key, val2 in a_dict2.iteritems(): a_len2 += len(val2) #print results from count and min print "Desat Counts for X mins\n" print "Pre Mild Desat (85-89) Count: %s\t" %above, "for %s min" %((a_len*2)/60.) print "Pre Mod Desat (81-84) Count: %s\t" %middle, "for %s min" %((m_len*2)/60.) print "Pre Sev Desat (=< 80) Count: %s\t" %below, "for %s min\n" %((b_len*2)/60.) print "Post Mild Desat (85-89) Count: %s\t" %above2, "for %s min" %((a_len2*2)/60.) print "Post Mod Desat (81-84) Count: %s\t" %middle2, "for %s min" %((m_len2*2)/60.) print "Post Sev Desat (=< 80) Count: %s\t" %below2, "for %s min\n" %((b_len2*2)/60.) print "Data Recording Time!" print '*' * 10 print "Pre-Exam Data Recording Length\t", X - Y # start of exam - first data point print "Post-Exam Data Recording Length\t", Q - Z #last data point - end of exam print "Total Data Recording Length\t", Q - Y #last data point - first data point Pre = ['Pre',(X-Y)] Post = ['Post',(Q-Z)] Total = ['Total',(Q-Y)] RTL = [Pre, Post, Total] PreMild = ['Pre Mild Desats \t',(above), 'for', (a_len*2)/60., 'mins'] PreMod = ['Pre Mod Desats \t',(middle), 'for', (m_len*2)/60., 'mins'] PreSev = ['Pre Sev Desats \t',(below), 'for', (b_len*2)/60., 'mins'] PreDesats = [PreMild, PreMod, PreSev] PostMild = ['Post Mild Desats \t',(above2), 'for', (a_len2*2)/60., 'mins'] PostMod = ['Post Mod Desats \t',(middle2), 'for', (m_len2*2)/60., 'mins'] PostSev = ['Post Sev Desats \t',(below2), 'for', (b_len2*2)/60., 'mins'] PostDesats = [PostMild, PostMod, PostSev] #creating a list for recording time length #did it count check sort correctly? get rid of the ''' if you want to check your values ''' print "Mild check" for key, val in b_dict.iteritems(): print all(i <=80 for i in val) print "Moderate check" for key, val in m_dict.iteritems(): print all(i >= 81 and i<=84 for i in val) print "Severe check" for key, val in a_dict.iteritems(): print all(i >= 85 and i<=89 for i in val) '''
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
Export to CSV
import csv

class excel_tab(csv.excel):
    delimiter = '\t'
csv.register_dialect("excel_tab", excel_tab)

with open('ROP018_PO.csv', 'w') as f: #CHANGE CSV FILE NAME, saves in same directory
    writer = csv.writer(f, dialect=excel_tab)
    #writer.writerow(['PI, O2, PR']) accidentally found this out but using commas gives me columns YAY! fix this
    #to make code look nice ok nice
    writer.writerow([avg0PI, ',PI Start'])
    for i in rdrop:
        writer.writerow([avgdropPI[i]]) #NEEDS BRACKETS TO MAKE IT SEQUENCE
    for i in r1:
        writer.writerow([avg1PI[i]])
    for i in r2:
        writer.writerow([avg2PI[i]])
    for i in r3:
        writer.writerow([avg3PI[i]])
    for i in r4:
        writer.writerow([avg4PI[i]])
    for i in r5:
        writer.writerow([avg5PI[i]])
    writer.writerow([avg0O2, ',SpO2 Start'])
    for i in rdrop:
        writer.writerow([avgdropO2[i]])
    for i in r1:
        writer.writerow([avg1O2[i]])
    for i in r2:
        writer.writerow([avg2O2[i]])
    for i in r3:
        writer.writerow([avg3O2[i]])
    for i in r4:
        writer.writerow([avg4O2[i]])
    for i in r5:
        writer.writerow([avg5O2[i]])
    writer.writerow([avg0PR, ',PR Start'])
    for i in rdrop:
        writer.writerow([avgdropPR[i]])
    for i in r1:
        writer.writerow([avg1PR[i]])
    for i in r2:
        writer.writerow([avg2PR[i]])
    for i in r3:
        writer.writerow([avg3PR[i]])
    for i in r4:
        writer.writerow([avg4PR[i]])
    for i in r5:
        writer.writerow([avg5PR[i]])
    writer.writerow(['Data Recording Time Length'])
    writer.writerows(RTL)
    writer.writerow(['Pre Desat Counts for X Minutes'])
    writer.writerows(PreDesats)
    writer.writerow(['Post Desat Counts for X Minutes'])
    writer.writerows(PostDesats)
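A possible alternative (a sketch of mine, not the author's approach): since the averaged values are already pandas Series, the table can be assembled and written in one step. The Series names avg1PI, avg1O2, avg1PR come from the earlier cells, and the output file name is hypothetical.
from pandas import concat
summary = concat([avg1PI, avg1O2, avg1PR], axis=1, join='inner')
summary.to_csv('ROP018_PO_summary.csv')  # hypothetical file name, same directory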
Old Code/.ipynb_checkpoints/Masimo160127-checkpoint.ipynb
johntanz/ROP
gpl-2.0
4. Introduction to Pandas. The most important pieces are the DataFrame and the Series.
import numpy as np
import pandas as pd
Python数据科学101.ipynb
liulixiang1988/documents
mit
4.1 Series. Create a Series that contains a missing value (NaN).
s = pd.Series([1, 3, 5, np.nan, 6, 8])
s[4]  # 6.0
Python数据科学101.ipynb
liulixiang1988/documents
mit
4.2 Dataframes
df = pd.DataFrame({'data': ['2016-01-01', '2016-01-02', '2016-01-03'],
                   'qty': [20, 30, 40]})
df
Python数据科学101.ipynb
liulixiang1988/documents
mit
Larger datasets should be loaded from files.
rain = pd.read_csv('data/rainfall/rainfall.csv')
rain
# Load a single column
rain['City']
# Load a single row (the second row)
rain.loc[[1]]
# The first and second rows
rain.loc[0:1]
Python数据科学101.ipynb
liulixiang1988/documents
mit
4.3 Filtering
# Find all rows with rainfall below 10
rain[rain['Rainfall'] < 10]
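A small extra sketch (my addition, reusing the columns shown above): boolean conditions can be combined with & and | for more specific filters.
# e.g. Los Angeles months with rainfall below 10
rain[(rain['City'] == 'Los Angeles') & (rain['Rainfall'] < 10)]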
Python数据科学101.ipynb
liulixiang1988/documents
mit
Find the rainfall for April
rain[rain['Month'] == 'Apr']
Python数据科学101.ipynb
liulixiang1988/documents
mit
Find the data for Los Angeles
rain[rain['City'] == 'Los Angeles']
Python数据科学101.ipynb
liulixiang1988/documents
mit
4.4 Naming Rows
rain = rain.set_index(rain['City'] + rain['Month'])
Python数据科学101.ipynb
liulixiang1988/documents
mit
Note that modifying a DataFrame actually creates a copy, so the result has to be assigned back to the original DataFrame.
rain.loc['San FranciscoApr']
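A hypothetical illustration of the note above (my addition, the renamed column is made up): most DataFrame methods return a new object, so the change is lost unless the result is assigned back.
renamed = rain.rename(columns={'Rainfall': 'Rainfall_mm'})  # rain itself is unchanged
# rain = rain.rename(columns={'Rainfall': 'Rainfall_mm'})   # reassign to keep the change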
Python数据科学101.ipynb
liulixiang1988/documents
mit
5. Pandas Examples
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

df = pd.read_csv('data/nycflights13/flights.csv.gz')
df
Python数据科学101.ipynb
liulixiang1988/documents
mit
Here we focus on summary statistics and visualization. Let's look at the mean arrival delay by month.
mean_delay_by_month = df.groupby(['month'])['arr_delay'].mean()
mean_delay_by_month
mean_month_plt = mean_delay_by_month.plot(kind='bar', title='Mean Delay By Month')
mean_month_plt
Python数据科学101.ipynb
liulixiang1988/documents
mit
Note that the means for September and October are negative.
mean_delay_by_month_ord = df[(df.dest == 'ORD')].groupby(['month'])['arr_delay'].mean()
print("Flights to Chicago (ORD)")
print(mean_delay_by_month_ord)
mean_month_plt_ord = mean_delay_by_month_ord.plot(kind='bar', title="Mean Delay By Month (Chicago)")
mean_month_plt_ord

# Compare with Los Angeles
mean_delay_by_month_lax = df[(df.dest == 'LAX')].groupby(['month'])['arr_delay'].mean()
print("Flights to Los Angeles (LAX)")
print(mean_delay_by_month_lax)
mean_month_plt_lax = mean_delay_by_month_lax.plot(kind='bar', title="Mean Delay By Month (Los Angeles)")
mean_month_plt_lax
Python数据科学101.ipynb
liulixiang1988/documents
mit
The charts above already show some visible patterns. Now let's look at delays by carrier and visualize them.
# See whether different carriers show different delay behaviour
df[['carrier', 'arr_delay']].groupby('carrier').mean().plot(kind='bar', figsize=(12, 8))
plt.xticks(rotation=0)
plt.xlabel('Carrier')
plt.ylabel('Average Delay in Min')
plt.title('Average Arrival Delay by Carrier in 2013, All airports')

df[['carrier', 'dep_delay']].groupby('carrier').mean().plot(kind='bar', figsize=(12, 8))
plt.xticks(rotation=0)
plt.xlabel('Carrier')
plt.ylabel('Average Delay in Min')
plt.title('Average Departure Delay by Carrier in 2013, All airports')
Python数据科学101.ipynb
liulixiang1988/documents
mit
From the charts above we can see that F9 (Frontier Airlines) is delayed most often, while Hawaiian (HA) performs best in this respect. 5.3 Joins. We have several datasets (weather, airports); let's see how to join the tables together.
weather = pd.read_csv('data/nycflights13/weather.csv.gz')
weather
df_withweather = pd.merge(df, weather, how='left', on=['year', 'month', 'day', 'hour'])
df_withweather
airports = pd.read_csv('data/nycflights13/airports.csv.gz')
airports
df_withairport = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')
df_withairport
Python数据科学101.ipynb
liulixiang1988/documents
mit
6 NumPy and SciPy. NumPy and SciPy are the core numerical pair for Python data science. Early Python lists were slow and not well suited to matrix and vector operations, so NumPy was created to solve this by introducing an array data type. Create an array:
import numpy as np

a = np.array([1, 2, 3])
a
Python数据科学101.ipynb
liulixiang1988/documents
mit
Note that we pass a list here, not np.array(1, 2, 3). Now let's create an arange.
np.arange(10)
# Multiply the sequence by a constant
np.arange(10) * np.pi
Python数据科学101.ipynb
liulixiang1988/documents
mit
We can also use the shape attribute to turn a one-dimensional array into a multi-dimensional one.
a = np.array([1, 2, 3, 4, 5, 6])
a.shape = (2, 3)
a
Python数据科学101.ipynb
liulixiang1988/documents
mit
6.1 Matrices
np.matrix('1 2; 3 4')

# Matrix multiplication
a1 = np.matrix('1 2; 3 4')
a2 = np.matrix('3 4; 5 7')
a1 * a2

# Convert to a matrix with np.mat
mat_a = np.mat(a1)
mat_a
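A small additional sketch (my addition): the same product with plain ndarrays and the @ operator, which is generally preferred over np.matrix today; note that * on arrays is elementwise.
b1 = np.array([[1, 2], [3, 4]])
b2 = np.array([[3, 4], [5, 7]])
b1 @ b2   # matrix product
b1 * b2   # elementwise product, unlike np.matrix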
Python数据科学101.ipynb
liulixiang1988/documents
mit
6.2 Sparse Matrices
import numpy, scipy.sparse

n = 100000
x = (numpy.random.rand(n) * 2).astype(int).astype(float)  # roughly 50% of the entries are zero
x_csr = scipy.sparse.csr_matrix(x)
x_dok = scipy.sparse.dok_matrix(x.reshape(x_csr.shape))
x_dok
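A quick check worth adding (my sketch, using standard numpy/scipy attributes): compare the memory used by the dense vector and its CSR form.
print(x.nbytes)  # dense storage in bytes
print(x_csr.data.nbytes + x_csr.indices.nbytes + x_csr.indptr.nbytes)  # CSR storage in bytes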
Python数据科学101.ipynb
liulixiang1988/documents
mit
6.3 Loading Data from a CSV File
import csv

with open('data/array/array.csv', 'r') as csvfile:
    csvreader = csv.reader(csvfile)
    data = []
    for row in csvreader:
        row = [float(x) for x in row]
        data.append(row)
data
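An alternative sketch (my addition): NumPy can read the same purely numeric CSV straight into an array, assuming every field is a number as the loop above implies.
import numpy as np
arr = np.loadtxt('data/array/array.csv', delimiter=',')
arr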
Python数据科学101.ipynb
liulixiang1988/documents
mit
6.4 Solving a Matrix Equation
import numpy as np
import scipy as sp

a = np.array([[3, 2, 0], [1, -1, 0], [0, 5, 1]])
b = np.array([2, 4, -1])
x = np.linalg.solve(a, b)
x
# Check that the result is correct
np.dot(a, x) == b
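One caveat worth noting (my addition): exact equality on floating-point results can be fragile; np.allclose is the more robust check.
np.allclose(np.dot(a, x), b)  # True if a @ x reproduces b within tolerance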
Python数据科学101.ipynb
liulixiang1988/documents
mit
7 Introduction to Scikit-learn. We have covered pandas, NumPy, and SciPy; now we introduce the Python machine learning library scikit-learn. First, know the two kinds of machine learning: supervised learning (build a model from a training set and use it to predict) and unsupervised learning (infer structure from the data itself, for example finding topics in text). Scikit-learn offers the following: - Preprocessing: reshaping data for machine learning - Dimensionality reduction: removing redundant variables - Classification: predicting categories - Regression: predicting continuous variables - Clustering: discovering natural patterns in the data - Model selection: finding the best model for the data. Here we again use the nycflights13 dataset.
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder

flights = pd.read_csv('data/nycflights13/flights.csv.gz')
weather = pd.read_csv('data/nycflights13/weather.csv.gz')
airports = pd.read_csv('data/nycflights13/airports.csv.gz')

df_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])
df = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')
df = df.dropna()
df
Python数据科学101.ipynb
liulixiang1988/documents
mit
7.1 Feature Vectors
pred = 'dep_delay'
features = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',
            'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed',
            'wind_gust', 'precip', 'pressure', 'visib']

features_v = df[features]
pred_v = df[pred]

pd.options.mode.chained_assignment = None  # default='warn'

# carrier is not numeric, so replace it with integer codes
features_v['carrier'] = pd.factorize(features_v['carrier'])[0]
# dest is not numeric either, so encode it the same way
features_v['dest'] = pd.factorize(features_v['dest'])[0]
features_v
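A possible alternative (my sketch): factorize assigns arbitrary integer codes, which imposes an ordering on the categories; proper one-hot dummy variables can be built with pd.get_dummies instead.
features_onehot = pd.get_dummies(df[features], columns=['carrier', 'dest'])
features_onehot.head()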
Python数据科学101.ipynb
liulixiang1988/documents
mit
7.2 Scaling the Feature Vector
# The features are on very different scales, so standardize them
scaler = StandardScaler()
scaled_features = scaler.fit_transform(features_v)
scaled_features
Python数据科学101.ipynb
liulixiang1988/documents
mit
7.3 Reducing Dimensions. We use PCA (Principal Component Analysis) to reduce the features to two components.
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X_r = pca.fit(scaled_features).transform(scaled_features)
X_r
Python数据科学101.ipynb
liulixiang1988/documents
mit
7.4 Plotting
import matplotlib.pyplot as plt

print('explained variance ratio (first two components): %s'
      % str(pca.explained_variance_ratio_))

plt.figure()
lw = 2
plt.scatter(X_r[:, 0], X_r[:, 1], alpha=.8, lw=lw)
plt.title('PCA of flights dataset')
Python数据科学101.ipynb
liulixiang1988/documents
mit
8 Building a Classifier. Let's predict whether a flight will be delayed.
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sklearn
from sklearn import linear_model, cross_validation, metrics, svm, ensemble
from sklearn.metrics import classification_report, confusion_matrix, precision_recall_fscore_support, accuracy_score
from sklearn.cross_validation import train_test_split, cross_val_score, ShuffleSplit
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler, OneHotEncoder

flights = pd.read_csv('data/nycflights13/flights.csv.gz')
weather = pd.read_csv('data/nycflights13/weather.csv.gz')
airports = pd.read_csv('data/nycflights13/airports.csv.gz')

df_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])
df = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')
df = df.dropna()
df

pred = 'dep_delay'
features = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',
            'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed',
            'wind_gust', 'precip', 'pressure', 'visib']

features_v = df[features]
pred_v = df[pred]
how_late_is_late = 15.0

pd.options.mode.chained_assignment = None  # default='warn'

# carrier is not numeric, so replace it with integer codes
features_v['carrier'] = pd.factorize(features_v['carrier'])[0]
# dest is not numeric either, so encode it the same way
features_v['dest'] = pd.factorize(features_v['dest'])[0]

scaler = StandardScaler()
scaled_features_v = scaler.fit_transform(features_v)

features_train, features_test, pred_train, pred_test = train_test_split(
    scaled_features_v, pred_v, test_size=0.30, random_state=0)

# Classify with logistic regression
clf_lr = sklearn.linear_model.LogisticRegression(penalty='l2', class_weight='balanced')
logistic_fit = clf_lr.fit(features_train, np.where(pred_train >= how_late_is_late, 1, 0))
predictions = clf_lr.predict(features_test)

# Summary report
# Confusion matrix
cm_lr = confusion_matrix(np.where(pred_test >= how_late_is_late, 1, 0), predictions)
print("Confusion Matrix")
print(pd.DataFrame(cm_lr))

# Precision, recall, and F1
report_lr = precision_recall_fscore_support(
    list(np.where(pred_test >= how_late_is_late, 1, 0)), list(predictions), average='binary')

# Print the scores
print("\nprecision = %0.2f, recall = %0.2f, F1 = %0.2f, accuracy = %0.2f"
      % (report_lr[0], report_lr[1], report_lr[2],
         accuracy_score(list(np.where(pred_test >= how_late_is_late, 1, 0)), list(predictions))))
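A small additional sketch (mine, not the notebook author's model): RandomForestClassifier is imported above but never used; swapping it in for the logistic model only changes these lines.
clf_rf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
clf_rf.fit(features_train, np.where(pred_train >= how_late_is_late, 1, 0))
rf_predictions = clf_rf.predict(features_test)
print(accuracy_score(np.where(pred_test >= how_late_is_late, 1, 0), rf_predictions))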
Python数据科学101.ipynb
liulixiang1988/documents
mit
9 Clustering Data. The simplest clustering method is K-Means.
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sklearn
from sklearn.cluster import KMeans
from sklearn import linear_model, cross_validation, cluster
from sklearn.metrics import classification_report, confusion_matrix, precision_recall_fscore_support, accuracy_score
from sklearn.cross_validation import train_test_split, cross_val_score, ShuffleSplit
from sklearn.preprocessing import StandardScaler, OneHotEncoder

flights = pd.read_csv('data/nycflights13/flights.csv.gz')
weather = pd.read_csv('data/nycflights13/weather.csv.gz')
airports = pd.read_csv('data/nycflights13/airports.csv.gz')

df_withweather = pd.merge(flights, weather, how='left', on=['year', 'month', 'day', 'hour'])
df = pd.merge(df_withweather, airports, how='left', left_on='dest', right_on='faa')
df = df.dropna()

pred = 'dep_delay'
features = ['month', 'day', 'dep_time', 'arr_time', 'carrier', 'dest', 'air_time',
            'distance', 'lat', 'lon', 'alt', 'dewp', 'humid', 'wind_speed',
            'wind_gust', 'precip', 'pressure', 'visib']

features_v = df[features]
pred_v = df[pred]
how_late_is_late = 15.0

pd.options.mode.chained_assignment = None  # default='warn'

# carrier is not numeric, so replace it with integer codes
features_v['carrier'] = pd.factorize(features_v['carrier'])[0]
# dest is not numeric either, so encode it the same way
features_v['dest'] = pd.factorize(features_v['dest'])[0]

scaler = StandardScaler()
scaled_features_v = scaler.fit_transform(features_v)

features_train, features_test, pred_train, pred_test = train_test_split(
    scaled_features_v, pred_v, test_size=0.30, random_state=0)

cluster = sklearn.cluster.KMeans(n_clusters=8, init='k-means++', n_init=10, max_iter=300,
                                 tol=0.0001, precompute_distances='auto',
                                 random_state=None, verbose=0)
cluster.fit(features_train)

# Predict cluster labels for the test data
result = cluster.predict(features_test)
result

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

reduced_data = PCA(n_components=2).fit_transform(features_train)
kmeans = KMeans(init='k-means++', n_clusters=8, n_init=10)
kmeans.fit(reduced_data)

# Step size of the mesh
h = .02
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
z = z.reshape(xx.shape)

plt.figure(1)
plt.clf()
plt.imshow(z, interpolation='nearest',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()),
           cmap=plt.cm.Paired
           # aspect='auto'
           # origin='lower'
           )
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3,
            color='w', zorder=10)
plt.title('K-Means clustering on the dataset (PCA-reduced data)\n'
          'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
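A small additional sketch (my addition): the choice of 8 clusters is arbitrary; KMeans exposes inertia_, which can be compared across k as a rough elbow-method check.
for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(reduced_data)
    print(k, km.inertia_)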
Python数据科学101.ipynb
liulixiang1988/documents
mit
10 Introduction to PySpark. Scaling our algorithms: sometimes we need to process so much data that sampling is no longer adequate, and the work has to be split across multiple machines. Spark is an API for parallel big-data processing; it partitions the data across a cluster, and during development we can run it locally. We connect to the cluster with the PySpark shell, started by running pyspark from the paths below: ~/spark/bin/pyspark (Mac/Linux) C:\spark\bin\pyspark (Windows). In the shell we can then load a file: lines = sc.textFile("README.md") lines.first() # read the first line. Running jobs can be monitored at http://localhost:4040. In most cases we want to run PySpark inside a Jupyter Notebook; for that, set the environment variables PYSPARK_PYTHON=python3 PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" and then run ~/spark/bin/pyspark, which now starts a Jupyter server that looks just like the one we have been using.
lines = sc.textFile('README.md')  # textFile is the SparkContext method described above
lines.take(5)
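A small follow-up sketch (my addition), assuming the same SparkContext sc and README.md: a basic transformation/action pair on the RDD loaded above.
spark_lines = lines.filter(lambda line: 'Spark' in line)
spark_lines.count()  # number of lines mentioning Spark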
Python数据科学101.ipynb
liulixiang1988/documents
mit