Dataset columns: markdown, code, path, repo_name, license.
Cleaned data:
raw_tog.plot_psd(fmax=30)
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now try the "separate" algorithm.
raw_sep = raw.copy()

# Do ICA only on the reference channels.
ref_picks = mne.pick_types(raw_sep.info, meg=False, ref_meg=True)
ica_ref = ICA(n_components=2, allow_ref_meg=True, **ica_kwargs)
ica_ref.fit(raw_sep, picks=ref_picks)

# Do ICA on both reference and standard channels. Here, we can just reuse
# ica_tog from the section above.
ica_sep = ica_tog.copy()

# Extract the time courses of these components and add them as channels
# to the raw data. Think of them the same way as EOG/EKG channels, but instead
# of giving info about eye movements/cardiac activity, they give info about
# external magnetic noise.
ref_comps = ica_ref.get_sources(raw_sep)
for c in ref_comps.ch_names:  # they need to have the REF_ prefix to be recognised
    ref_comps.rename_channels({c: "REF_" + c})
raw_sep.add_channels([ref_comps])

# Now that we have our noise channels, we run the separate algorithm.
bad_comps, scores = ica_sep.find_bads_ref(raw_sep, method="separate")

# Plot scores with bad components marked.
ica_sep.plot_scores(scores, bad_comps)

# Examine the properties of removed components.
ica_sep.plot_properties(raw_sep, picks=bad_comps)

# Remove the components.
raw_sep = ica_sep.apply(raw_sep, exclude=bad_comps)
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Cleaned raw data traces:
raw_sep.plot(**plot_kwargs)
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Cleaned raw data PSD:
raw_sep.plot_psd(fmax=30)
0.22/_downloads/f781cba191074d5f4243e5933c1e870d/plot_find_ref_artifacts.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The code begins by loading a local dataset of gene annotations and extracting their promoter regions (here defined as the intervals $\left[gene_{start}-2000;\, gene_{start}+2000\right]$). Note that the start and stop attributes automatically take the strand of the region into account.
genes = gl.load_from_path("../data/genes/")
promoters = genes.reg_project(new_field_dict={'start': genes.start - 2000,
                                              'stop': genes.start + 2000})
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
The genes and promoters variables are GMQLDatasets; the former is loaded directly, the latter results from a projection operation. Region feature names can be accessed directly from the variables to build expressions and predicates (e.g., genes.start + 2000). Next, we load the external ChIP-Seq dataset from a remote GMQL Web service; in order to do so, the user has to specify the remote address and log in. If the user is already registered with the remote GMQL installation, he/she can use his/her own credentials (which also grants access to private datasets); otherwise a guest account is created automatically, without requiring the user to do it manually.
gl.set_remote_address("http://gmql.eu/gmql-rest/")
gl.login()
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
In the following snippet we show how to load the ChIP-Seq data of the ENCODE dataset from the remote GMQL repository and select only the experiments of interest. First, the user sets the remote execution mode and imports remote datasets with the load_from_remote function; such loading is lazy, so no actual data is moved or read at this point. Then the user specifies the selection condition; the hms["experiment_target"] notation enables the user to build predicates on the given metadata attribute. The GMQL engine loads from the dataset only the samples whose metadata satisfy the condition; specifically, only experiments targeting the human H3K9ac marker will be selected.
gl.set_mode("remote")
hms = gl.load_from_remote("HG19_ENCODE_BROAD_AUG_2017", owner="public")
hms_ac = hms[hms["experiment_target"] == "H3K9ac-human"]
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
Next, the PyGMQL map operation is used to compute the average of the hms_ac signal intersecting each promoter; iteration over all samples is implicit. Finally, the materialize method triggers the execution of the query. Since the mode is set to "remote", the dataset stored at ./genes is sent to the remote GMQL system, which performs the specified operations. The result is loaded into the mapping GDataframe variable, which resides on the local machine.
mapping = promoters.map(hms_ac, refName='prom', expName='hm',
                        new_reg_fields={'avg_signal': gl.AVG('signal')})
mapping = mapping.materialize()
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
At this point, Python libraries for data manipulation, visualization or analysis can be applied to the GDataframe. The following portion of code provides an example of data manipulation of a query result. The to_matrix method transforms the GDataframe into a Pandas matrix, where each row corresponds to a gene and each column to a cell line; values are the average signal on the promoter of the given gene in the given cell line. Finally, the matrix is visualized as a heatmap.
import seaborn as sns
import matplotlib.pyplot as plt

heatmap = mapping.to_matrix(columns_meta=['hm.biosample_term_name'],
                            index_regs=['gene_symbol'],
                            values_regs=['avg_signal'],
                            fill_value=0)
plt.figure(figsize=(10, 10))
sns.heatmap(heatmap, vmax=20)
plt.show()
examples/notebooks/02a_Mixing_Local_Remote_Processing_SIMPLE.ipynb
DEIB-GECO/PyGMQL
apache-2.0
Controlling a single qubit The simulation of a unitary evolution with Processor is defined by the control pulses. Each pulse is represented by a Pulse object consisting of the control Hamiltonian $H_j$, the target qubits, the pulse strength $c_j$ and the time sequence $t$. The evolution is given by \begin{equation} U(t)=\exp(-\mathrm{i} \sum_j c_j(t) H_j t) \end{equation} In this example, we define a single-qubit quantum device with $\sigma_z$ and $\sigma_y$ pulses.
processor = Processor(N=1) processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz") processor.add_control(0.5 * sigmay(), targets=0, label="sigmay")
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
The list of defined pulses is saved in the attribute Processor.pulses. We can inspect the pulses we just defined with:
for pulse in processor.pulses: pulse.print_info()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
We can see that the pulse strength coeff and the time sequence tlist still remain undefined. To fully characterize the evolution, we need to define them both. The pulse strength and time are both given as NumPy arrays. For discrete pulses, tlist specifies the start and end time of each pulse coefficient, and is thus one element longer than coeff. (This is different from the usual requirement of QuTiP solvers, where tlist and coeff need to have the same length.) The definition below means that we turn on the $\sigma_y$ pulse for a duration $t=\pi$ with strength 1. (Notice that the corresponding Hamiltonian is $H=\frac{1}{2} \sigma_y$.)
processor.pulses[1].coeff = np.array([1.])
processor.pulses[1].tlist = np.array([0., pi])
for pulse in processor.pulses:
    pulse.print_info()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
This pulse is a $\pi$ pulse that flips the qubit from $\left |0 \right\rangle$ to $\left |1 \right\rangle$, equivalent to a rotation around the y-axis by an angle $\pi$: $$R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$$ We can run the simulation to see the result of the evolution starting from $\left |0 \right\rangle$:
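As a quick sanity check (a worked step added here, not part of the original notebook), plugging $\theta=\pi$ into the rotation above and comparing with the generated unitary gives

$$ e^{-i\pi\sigma_y/2} = \cos(\pi/2)\,I - i\sin(\pi/2)\,\sigma_y = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, $$

so the $\pi$ pulse indeed maps $\left|0\right\rangle$ to $\left|1\right\rangle$.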
basis0 = basis(2, 0) result = processor.run_state(init_state=basis0) result.states[-1].tidyup(1.e-5)
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
As an arbitrary single-qubit gate can be decomposed into $R_z(\theta_1) \cdot R_y(\theta_2) \cdot R_z(\theta_3)$, three pulses are enough. For demonstration purposes, we choose $\theta_1=\theta_2=\theta_3=\pi/2$.
processor.pulses[0].coeff = np.array([1., 0., 1.]) processor.pulses[1].coeff = np.array([0., 1., 0.]) processor.pulses[0].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2]) processor.pulses[1].tlist = np.array([0., pi/2., 2*pi/2, 3*pi/2]) result = processor.run_state(init_state=basis(2, 1)) result.states[-1].tidyup(1.0e-5)
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Pulse with continuous amplitude If your pulse strength is generated somewhere else and is a discretization of a continuous function, you can also tell the Processor to use them with the cubic spline interpolation. In this case tlist and coeff must have the same length.
tlist = np.linspace(0., 2*np.pi, 20) processor = Processor(N=1, spline_kind="step_func") processor.add_control(sigmaz(), 0) processor.pulses[0].tlist = tlist processor.pulses[0].coeff = np.array([np.sin(t) for t in tlist]) processor.plot_pulses(); tlist = np.linspace(0., 2*np.pi, 20) processor = Processor(N=1, spline_kind="cubic") processor.add_control(sigmaz()) processor.pulses[0].tlist = tlist processor.pulses[0].coeff = np.array([np.sin(t) for t in tlist]) processor.plot_pulses();
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Noisy evolution In real quantum devices, noise affects the perfect execution of gate-based quantum circuits, limiting their depth. In general, we can divide quantum noise into two types: coherent and incoherent noise. The former usually arises from deviations in the control pulses; the noisy evolution is still unitary. Incoherent noise comes from the coupling of the quantum system with the environment and leads to a loss of information. In QIP theory, we describe this type of noise with a noisy channel, corresponding to the collapse operators in the Lindblad equation. Although noise can, in general, be simulated with a quantum channel representation, that requires some pre-analysis and approximation, which can be difficult in a large system. This simulator offers an easier, but computationally more demanding, solution from the viewpoint of quantum control. Processor, as a circuit simulator, is different from common QIP simulators, as it simulates the evolution of the qubits under the driving Hamiltonian. The noise is defined according to the control pulses and the evolution is calculated using QuTiP solvers. This enables one to define more complicated noise such as cross-talk and leakage errors, depending on the physical device and the problem one wants to study. On the one hand, the simulation can help one analyze the noise composition and identify the dominant noise source. On the other hand, together with a backend compiler, one can also use it to study whether an algorithm is sensitive to a certain type of noise. Decoherence In Processor, decoherence noise is simulated by adding collapse operators to the Lindblad equation. For single-qubit decoherence, this is equivalent to applying random bit-flip and phase-flip errors after applying the quantum gate. For qubit relaxation, one can simply specify the $t_1$ and $t_2$ times for the device or for each qubit. Here we assume the qubit has a drift Hamiltonian $H_d=\hbar \omega \sigma_z$; for simplicity, we let $\hbar \omega = 10$.
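For reference (a standard formula added here, not taken from the original notebook), the Lindblad master equation that the collapse operators $C_n$ enter is

$$ \dot\rho = -\frac{i}{\hbar}\left[H(t), \rho\right] + \sum_n \left( C_n \rho C_n^\dagger - \frac{1}{2}\left\{ C_n^\dagger C_n, \rho \right\} \right), $$

where $H(t)$ is the (possibly noisy) control Hamiltonian and each $C_n$ encodes one decoherence channel.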
a = destroy(2) initial_state = basis(2,1) plus_state = (basis(2,1) + basis(2,0)).unit() tlist = np.arange(0.00, 2.02, 0.02) H_d = 10.*sigmaz()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Decay time $T_1$ The $T_1$ relaxation time describes the strength of amplitude damping and can be described, in a two-level system, by a collapse operator $\frac{1}{\sqrt{T_1}}a$, where $a$ is the annihilation operator. This leads to an exponential decay of the population of excited states proportional to $\exp({-t/T_1})$. This amplitude damping can be simulated by specifying the attribute t1 of the processor
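A short derivation of that decay (added for clarity, using the Lindblad equation above): with the single collapse operator $C = a/\sqrt{T_1}$ and no driving, the excited-state population obeys

$$ \frac{d}{dt}\langle a^\dagger a\rangle = -\frac{1}{T_1}\langle a^\dagger a\rangle \quad\Longrightarrow\quad \langle a^\dagger a\rangle(t) = e^{-t/T_1}\,\langle a^\dagger a\rangle(0), $$

which is the theory curve plotted against the simulation below.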
from qutip.qip.pulse import Pulse

t1 = 1.
processor = Processor(1, t1=t1)
# Create a dummy pulse that has no Hamiltonian, only a tlist.
processor.add_pulse(Pulse(None, None, tlist=tlist, coeff=False))
result = processor.run_state(init_state=initial_state, e_ops=[a.dag()*a])

fig, ax = plt.subplots()
ax.plot(tlist[0: 100: 10], result.expect[0][0: 100: 10], 'o', label="simulation")
ax.plot(tlist, np.exp(-1./t1*tlist), label="theory")
ax.set_xlabel("t")
ax.set_ylabel("population in the excited state")
ax.legend()
plt.grid()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Decay time $T_2$ The $T_2$ time describes the dephasing process. Here one has to be careful: the amplitude damping channel characterized by $T_1$ also leads to a dephasing proportional to $\exp(-t/2T_1)$. To make sure that the overall phase damping is $\exp(-t/T_2)$, the processor internally uses a collapse operator $\frac{1}{\sqrt{2 T_2'}} \sigma_z$ with $\frac{1}{T'_2}+\frac{1}{2T_1}=\frac{1}{T_2}$ to simulate the dephasing. (This also implies that $T_2 \leqslant 2T_1$.) Usually, the $T_2$ time is measured with a Ramsey experiment, where the qubit starts from the excited state, undergoes a $\pi/2$ pulse, evolves freely for a time $t$, and is measured after another $\pi/2$ pulse. For simplicity, here we directly calculate the expectation value of $\rm{H}\circ a^\dagger a \circ\rm{H}$, where $\rm{H}$ denotes the Hadamard transformation. This is equivalent to measuring the population of $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$. The envelope should follow an exponential decay characterized by $T_2$.
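With the values used below ($T_1 = 1$, $T_2 = 0.5$), this relation gives a concrete number (a worked step added here):

$$ \frac{1}{T_2'} = \frac{1}{T_2} - \frac{1}{2T_1} = 2 - 0.5 = 1.5, \qquad T_2' = \frac{2}{3}, $$

so the internally used dephasing collapse operator is $\frac{1}{\sqrt{2 T_2'}}\,\sigma_z \approx 0.87\,\sigma_z$.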
t1 = 1.
t2 = 0.5
processor = Processor(1, t1=t1, t2=t2)
processor.add_control(H_d, 0)
processor.pulses[0].coeff = True
processor.pulses[0].tlist = tlist
Hadamard = hadamard_transform(1)
result = processor.run_state(init_state=plus_state, e_ops=[Hadamard*a.dag()*a*Hadamard])

fig, ax = plt.subplots()
# Detail about the length of tlist needs to be fixed.
ax.plot(tlist[:-1], result.expect[0][:-1], '.', label="simulation")
ax.plot(tlist[:-1], np.exp(-1./t2*tlist[:-1])*0.5 + 0.5, label="theory")
plt.xlabel("t")
plt.ylabel("Ramsey signal")
plt.legend()
ax.grid()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Random noise in the pulse intensity Besides single-qubit decoherence, Processor can also simulate coherent control noise. For general types of noise, one can define a noise object and add it to the processor. An example of predefined noise is random amplitude noise, where a random value is added to the pulse every dt; loc and scale are keyword arguments for the random number generator np.random.normal.
from qutip.qip.noise import RandomNoise processor = Processor(N=1) processor.add_control(0.5 * sigmaz(), targets=0, label="sigmaz") processor.add_control(0.5 * sigmay(), targets=0, label="sigmay") processor.coeffs = np.array([[1., 0., 1.], [0., 1., 0.]]) processor.set_all_tlist(np.array([0., pi/2., 2*pi/2, 3*pi/2])) processor_white = copy.deepcopy(processor) processor_white.add_noise(RandomNoise(rand_gen=np.random.normal, dt=0.1, loc=-0.05, scale=0.02)) # gausian white noise
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
We again compare the result of the evolution with and without noise.
result = processor.run_state(init_state=basis(2, 1)) result.states[-1].tidyup(1.0e-5) result_white = processor_white.run_state(init_state=basis(2, 1)) result_white.states[-1].tidyup(1.0e-4) fidelity(result.states[-1], result_white.states[-1])
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Since the state after this noisy evolution is still pure, we can visualize it on a Bloch sphere:
from qutip.bloch import Bloch b = Bloch() b.add_states([result.states[-1], result_white.states[-1]]) b.make_sphere()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
We can print the pulse information to see the noise. The ideal pulses:
for pulse in processor_white.pulses: pulse.print_info()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
And the noisy pulses:
for pulse in processor_white.get_noisy_pulses(): pulse.print_info()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Getting a Pulse or QobjEvo representation If you define a complicated Processor but don't want to run the simulation right away, you can extract an ideal/noisy Pulse representation or a QobjEvo representation. The latter can be fed directly to a QuTiP solver for the evolution.
ideal_pulses = processor_white.pulses noisy_pulses = processor_white.get_noisy_pulses(device_noise=True, drift=True) qobjevo = processor_white.get_qobjevo(noisy=False) noisy_qobjevo, c_ops = processor_white.get_qobjevo(noisy=True)
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
Structure inside the simulator The figures below help one understand the workflow inside the simulator. The first figure shows how the noise is processed in the circuit processor. The noise is defined separately in a class object. When called, it takes parameters and the unitary, noiseless qutip.QobjEvo from the processor, generates the noisy version, and sends the noisy qutip.QobjEvo together with the collapse operators back to the processor. When calculating the evolution, the processor first creates its own qutip.QobjEvo of the noiseless evolution. It then finds all the noise objects saved in the attribute qutip.qip.device.Processor.noise and calls the corresponding methods to get the qutip.QobjEvo and a list of collapse operators representing the noise. (For the collapse operators, we don't want to merge all the constant collapse operators into one time-independent operator, so we use a list.) The processor then combines its own qutip.QobjEvo with those from the noise objects and gives them to the solver. The figure below shows how the noiseless part and the noisy part are combined.
from qutip.ipynbtools import version_table version_table()
examples/qip-noisy-device-simulator.ipynb
ajgpitch/qutip-notebooks
lgpl-3.0
The Bandit Here we define our bandit. For this example we are using a four-armed bandit. The pullBandit function generates a random number from a normal distribution with a mean of 0. The lower the bandit number, the more likely a positive reward will be returned. We want our agent to learn to always choose the arm that will give that positive reward.
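To make the "lower is better" statement concrete, here is a small check (an added example, not part of the original notebook; it assumes SciPy is available): the probability that pullBandit returns +1 for an arm with threshold $b$ is $P(\mathcal{N}(0,1) > b) = 1 - \Phi(b)$.

# Added check: win probability of each arm, assuming scipy is installed.
from scipy.stats import norm

bandit_arms = [0.2, 0, -0.2, -2]
for arm in bandit_arms:
    # P(randn() > arm) = 1 - CDF(arm); arm -2 wins ~97.7% of the time.
    print(arm, 1 - norm.cdf(arm))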
# List out our bandit arms.
# Currently arm 4 (index #3) is set to most often provide a positive reward.
bandit_arms = [0.2, 0, -0.2, -2]
num_arms = len(bandit_arms)

def pullBandit(bandit):
    # Get a random number.
    result = np.random.randn(1)
    if result > bandit:
        # Return a positive reward.
        return 1
    else:
        # Return a negative reward.
        return -1
Simple-Policy.ipynb
awjuliani/DeepRL-Agents
mit
The Agent The code below establishes our simple neural agent. It consists of a set of values, one for each of the bandit arms. Each value is an estimate of the return from choosing that arm. We use a policy gradient method to update the agent by moving the value for the selected action toward the received reward.
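In symbols (added for clarity, matching the loss constructed in the code below), the agent maximizes expected reward by minimizing

$$ L(\theta) = -\log \pi_\theta(a)\, R, $$

where $\pi_\theta(a)$ is the softmax probability of the chosen arm $a$ and $R$ is the received reward; the gradient of this loss pushes the weights of rewarded actions up and of penalized actions down.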
tf.reset_default_graph()

# These two lines establish the feed-forward part of the network.
weights = tf.Variable(tf.ones([num_arms]))
output = tf.nn.softmax(weights)

# The next six lines establish the training procedure. We feed the reward and chosen action
# into the network to compute the loss, and use it to update the network.
reward_holder = tf.placeholder(shape=[1], dtype=tf.float32)
action_holder = tf.placeholder(shape=[1], dtype=tf.int32)

responsible_output = tf.slice(output, action_holder, [1])
loss = -(tf.log(responsible_output)*reward_holder)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
update = optimizer.minimize(loss)
Simple-Policy.ipynb
awjuliani/DeepRL-Agents
mit
Training the Agent We will train our agent by taking actions in our environment and receiving rewards. Using the rewards and actions, we know how to update our network so that it more often chooses actions that yield the highest rewards over time.
total_episodes = 1000  # Set total number of episodes to train agent on.
total_reward = np.zeros(num_arms)  # Set scoreboard for bandit arms to 0.

init = tf.global_variables_initializer()

# Launch the tensorflow graph
with tf.Session() as sess:
    sess.run(init)
    i = 0
    while i < total_episodes:
        # Choose action according to Boltzmann distribution.
        actions = sess.run(output)
        a = np.random.choice(actions, p=actions)
        action = np.argmax(actions == a)

        reward = pullBandit(bandit_arms[action])  # Get our reward from picking one of the bandit arms.

        # Update the network.
        _, resp, ww = sess.run([update, responsible_output, weights],
                               feed_dict={reward_holder: [reward], action_holder: [action]})

        # Update our running tally of scores.
        total_reward[action] += reward
        if i % 50 == 0:
            print("Running reward for the " + str(num_arms) + " arms of the bandit: " + str(total_reward))
        i += 1

print("\nThe agent thinks arm " + str(np.argmax(ww)+1) + " is the most promising....")
if np.argmax(ww) == np.argmax(-np.array(bandit_arms)):
    print("...and it was right!")
else:
    print("...and it was wrong!")
Simple-Policy.ipynb
awjuliani/DeepRL-Agents
mit
If you have a CSV file, you can do: s = Striplog.from_csv(filename=filename) But we have text, so we do something slightly different, passing the text argument instead. We also pass a stop argument to tell Striplog to make the last unit (E) 50 m thick. (If you don't do this, it will be 1 m thick).
from striplog import Striplog s = Striplog.from_csv(text=data, stop=650)
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Each element of the striplog is an Interval object, which has a top, base and one or more Components, which represent whatever is in the interval (maybe a rock type, or in this case a formation). There is also a data field, which we will use later.
s[0]
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
We can plot the striplog. By default, it will use a random legend for the colours:
s.plot(aspect=3)
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Or we can plot in the 'tops' style:
s.plot(style='tops', field='formation', aspect=1)
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Random curve data Make some fake data:
from welly import Curve import numpy as np depth = np.linspace(0, 699, 700) data = np.sin(depth/10) curve = Curve(data=data, index=depth)
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Plot it:
import matplotlib.pyplot as plt fig, axs = plt.subplots(ncols=2, sharey=True) axs[0] = s.plot(ax=axs[0]) axs[1] = curve.plot(ax=axs[1])
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Extract data from the curve into the striplog
s = s.extract(curve.values, basis=depth, name='GR')
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Now we have the GR data from each unit stored in that unit:
s[1]
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
So we could plot a segment of curve, say:
plt.plot(s[1].data['GR'])
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Extract and reduce data We don't have to store all the data points. We can optionally pass a function to produce anything we like, and store the result of that:
s = s.extract(curve, basis=depth, name='GRmean', function=np.nanmean) s[1]
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
Other helpful reducing functions:
* np.nanmedian &mdash; median average (ignoring nans)
* np.product &mdash; product
* np.nansum &mdash; sum (ignoring nans)
* np.nanmin &mdash; minimum (ignoring nans)
* np.nanmax &mdash; maximum (ignoring nans)
* scipy.stats.mstats.mode &mdash; mode average
* scipy.stats.mstats.hmean &mdash; harmonic mean
* scipy.stats.mstats.gmean &mdash; geometric mean

Or you can write your own, for example:

    def trim_mean(a):
        """Compute trimmed mean, trimming min and max"""
        return (np.nansum(a) - np.nanmin(a) - np.nanmax(a)) / a.size

Then do:

    s.extract(curve, basis=basis, name='GRtrim', function=trim_mean)

The function doesn't have to return a single number like this; it could return anything you like, including a dictionary. We can also add bits to the data dictionary manually:
s[1].data['foo'] = 'bar' s[1]
docs/tutorial/10_Extract_curves_into_striplogs.ipynb
agile-geoscience/striplog
apache-2.0
The Local Outlier Factor (LOF) estimates the local density of each point from its distances to its K nearest neighbours and compares it with the densities of those neighbours; outliers are the points whose density is much lower than that of their neighbours. To understand LOF, we first need a few definitions (formalized below):
* K-distance of an object P: the distance between P and its K-th nearest neighbour, where K is a parameter of the algorithm.
* K-distance neighbourhood of P: the set of all objects Q whose distance to P is at most the K-distance of P.
* Reachability distance from P to Q: the maximum of the K-distance of Q and the distance between P and Q.
* Local reachability density of P: the ratio of the number of K-distance neighbours of P to the sum of the reachability distances from P to those neighbours.
* Local outlier factor of P: the average ratio of the local reachability density of P's K nearest neighbours to the local reachability density of P.
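In formulas (standard LOF definitions, added here for reference; the code below follows them up to minor implementation choices):

$$ \text{reach-dist}_k(P, Q) = \max\{ k\text{-distance}(Q),\, d(P, Q) \}, \qquad \text{lrd}_k(P) = \frac{|N_k(P)|}{\sum_{Q \in N_k(P)} \text{reach-dist}_k(P, Q)}, \qquad \text{LOF}_k(P) = \frac{1}{|N_k(P)|} \sum_{Q \in N_k(P)} \frac{\text{lrd}_k(Q)}{\text{lrd}_k(P)}, $$

where $N_k(P)$ is the $k$-distance neighbourhood of $P$.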
# Compute the pairwise distances between points.
distance = 'manhattan'
from sklearn.metrics import pairwise_distances
dist = pairwise_distances(instance, metric=distance)
print(dist)

# Compute the K-distance, using heapq to get the K nearest neighbours.
k = 2
import heapq
from collections import defaultdict
# The values of k_distance are tuples.
k_distance = defaultdict(tuple)
# For each point:
for i in range(instance.shape[0]):
    # Get its distances to all other points.
    distances = dist[i].tolist()
    # Get the K-th nearest neighbour.
    ksmallest = heapq.nsmallest(k+1, distances)[1:][k-1]
    # Get its index.
    ksmallest_idx = distances.index(ksmallest)
    # Record, for each point, its K-th nearest neighbour and the distance to it.
    k_distance[i] = (ksmallest, ksmallest_idx)

# Helper used to compute the K-distance neighbourhood.
def all_indices(value, inlist):
    out_indices = []
    idx = -1
    while True:
        try:
            idx = inlist.index(value, idx+1)
            out_indices.append(idx)
        except ValueError:
            break
    return out_indices

# Compute the K-distance neighbourhood.
k_distance_neig = defaultdict(list)
for i in range(instance.shape[0]):
    # Get its distances to all other points.
    distances = dist[i].tolist()
    print('k distance neighbourhood', i)
    print(distances)
    # Get the 1st to K-th nearest neighbours.
    ksmallest = heapq.nsmallest(k+1, distances)[1:]
    print(ksmallest)
    ksmallest_set = set(ksmallest)
    print(ksmallest_set)
    ksmallest_idx = []
    # Get the indices of the smallest elements.
    for x in ksmallest_set:
        ksmallest_idx.append(all_indices(x, distances))
    # Flatten the list of lists into a list.
    ksmallest_idx = [item for sublist in ksmallest_idx for item in sublist]
    # Store the neighbourhood of each point.
    k_distance_neig[i].extend(zip(ksmallest, ksmallest_idx))
print(k_distance_neig)

# Compute the reachability distances and the LRD (local reachability density).
local_reach_density = defaultdict(float)
for i in range(instance.shape[0]):
    # LRD numerator: the number of K-distance neighbours.
    no_neighbours = len(k_distance_neig[i])
    denom_sum = 0
    # Sum of reachability distances.
    for neigh in k_distance_neig[i]:
        denom_sum += max(k_distance[neigh[1]][0], neigh[0])
    local_reach_density[i] = no_neighbours/(1.0*denom_sum)

# Compute the LOF.
lof_list = []
for i in range(instance.shape[0]):
    lrd_sum = 0
    rdist_sum = 0
    for neigh in k_distance_neig[i]:
        lrd_sum += local_reach_density[neigh[1]]
        rdist_sum += max(k_distance[neigh[1]][0], neigh[0])
    lof_list.append((i, lrd_sum*rdist_sum))
print(lof_list)
Data_Mining/Local_outlier_factor.ipynb
Roc-J/Python_data_science
apache-2.0
References
* https://class.coursera.org/statistics-003
* https://www.udacity.com/course/intro-to-data-science--ud359
* http://blog.minitab.com/blog/adventures-in-statistics/multiple-regession-analysis-use-adjusted-r-squared-and-predicted-r-squared-to-include-the-correct-number-of-variables
* https://en.wikipedia.org/wiki/Coefficient_of_determination
* http://napitupulu-jon.appspot.com/posts/inference-diagnostic-mlr-coursera-statistics.html

Statistical Test
df.groupby('rain',as_index=False).ENTRIESn_hourly.mean()
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
In this data we can see summary statistics of hourly ridership, represented by the ENTRIESn_hourly variable, for rainy and non-rainy days. The independent variable is rain: non-rainy days form the control group and rainy days the experiment group. We want to know how rainy days affect ridership, so the dependent variable is ENTRIESn_hourly. The mean hourly ridership on non-rainy days is 1090, while the mean on rainy days is 1105. The difference is small, and we are going to test whether ridership on rainy days is significantly higher, using an independence test with a one-tailed p-value and 0.05 as the p-critical value. H0 $ P_\mathbf{(rain > non\text{-}rain)} = 0.5$: the population ridership on rainy and non-rainy days is equal. HA $ P_\mathbf{(rain > non\text{-}rain)} \gt 0.5$: the population ridership on rainy days is higher than on non-rainy days. The conditions within groups have been validated: the sample size is more than 30 and less than 10% of the population. A non-parametric test is a statistical test that doesn't assume any underlying probability distribution; the Mann-Whitney U test is the non-parametric test I will use in this case. Since the Visualization section shows that both the rainy and non-rainy distributions are strongly right-skewed, we can't use a statistical test that assumes a normal distribution, so we use a non-parametric test instead.
df.groupby('rain',as_index=False).ENTRIESn_hourly.mean() sp.mannwhitneyu(df.ix[df.rain==0,'ENTRIESn_hourly'], df.ix[df.rain==1,'ENTRIESn_hourly'])
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
We're using the Mann-Whitney U test with an average of 1090 hourly riders on non-rainy days and 1105 on rainy days. Because the p-value (0.025) is less than the p-critical value (0.05), we reject the null hypothesis and conclude that the data provide convincing evidence that the average hourly ridership on rainy days is higher than on non-rainy days. Linear Regression: OLS using Statsmodels or Scikit-Learn? Gradient descent using Scikit-Learn? Or something different? I'm going to use linear regression with multiple predictors, hence multiple linear regression with OLS. I use all the numerical variables in my data plus the additional variable isBusinessDay, except exits, since the numbers of entries and exits are expected to be similar. I use UNIT and Hour as dummy variables. I don't test the dummy features, since that is computationally expensive, and I also subset the data because learning from the dummy features is expensive. Moreover, I know from trying it on the Udacity website that the UNIT and Hour features improve the model.
length = df.shape[0]
subset = df.take(np.random.permutation(length)[:int(length*0.1)]).reset_index()
dummy_hours = pd.get_dummies(subset['Hour'], prefix='hour')
dummy_units = pd.get_dummies(subset['UNIT'], prefix='unit')
# features = subset.join(dummy_units).join(dummy_hours)
features = subset
banned = ['ENTRIESn_hourly', 'UNIT', 'Hour', 'DESCn', 'EXITSn_hourly', 'index']
candidates = [e for e in features.columns if e not in banned]
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
R-squared is not a sufficient measure for testing our model, since every time we add a variable R-squared keeps increasing. We're going to use adjusted R-squared instead, since it incorporates a penalty every time we add a variable.
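For reference (a standard formula, added here), the adjusted R-squared with $n$ observations and $p$ predictors is

$$ R^2_{adj} = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}, $$

so adding a predictor only raises $R^2_{adj}$ if it improves $R^2$ by more than the penalty.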
def test_adjusted_R_squared(col):
    """Testing one variable with already approved predictors"""
    reg = sm.OLS(features['ENTRIESn_hourly'], features[predictors + [col]])
    result = reg.fit()
    return result.rsquared_adj
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
I'm going to use forward selection, adding one variable at a time based on the highest adjusted R-squared, and I will stop adding variables when there is no further increase over the previous adjusted R-squared.
predictors = []
topr2 = 0
for i in xrange(len(candidates)):
    filtered = filter(lambda x: x not in predictors, candidates)
    list_r2 = map(test_adjusted_R_squared, filtered)
    highest, curr_topr2 = max(zip(filtered, list_r2), key=lambda x: x[1])
    if curr_topr2 > topr2:
        topr2 = round(curr_topr2, 10)
    else:
        print("Adjusted R Squared can't go any higher. Stopping")
        break
    predictors.append(highest)
    print('Step {}: Adjusted R-squared = {} + {}'.format(i, topr2, highest))
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
These are the non-dummy features left after performing forward selection:
predictors
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
To test for collinearity among my numerical features, I use a scatter matrix.
print('Scatter Matrix of features and predictors to test collinearity'); pd.scatter_matrix(features[numerics],figsize=(10,10));
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
I can see that there is no collinearity among the predictors. Next I join the non-dummy and dummy features into features_dummy and create the model.
features_dummy = features[predictors].join(dummy_units).join(dummy_hours) model = sm.OLS(features['ENTRIESn_hourly'],features_dummy).fit() filter_cols = lambda col: not col.startswith('unit') and not col.startswith('hour') model.params[model.params.index.map(filter_cols)] model.rsquared
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. So we can say that 61.67% of the variability in hourly subway ridership can be explained by the model. Visualization At the time of this writing, pandas has grown mature, while ggplot for Python, which relies on pandas, is no longer being updated. So I will not use ggplot in this section and will use pandas plotting instead.
fig, axes = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True, squeeze=False)
filtered = df.ix[df.ENTRIESn_hourly < 10000]
for i in xrange(1):
    axes[0][i].set_xlabel('Number of ridership hourly')
    axes[0][i].set_ylabel('Frequency')
filtered.ix[filtered.rain == 0, 'ENTRIESn_hourly'].hist(ax=axes[0][0], bins=50)
axes[0][0].set_title('Non-rainy days')
filtered.ix[filtered.rain == 1, 'ENTRIESn_hourly'].hist(ax=axes[0][1], bins=50)
axes[0][1].set_title('Rainy days')
fig.set_size_inches((15, 5))
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
In this plot, we can see how many people are riding the subway on each kind of day. We want to know whether the difference is significant, using a hypothesis test. The overall frequency is indeed higher for non-rainy days compared to rainy days, simply because there are more non-rainy days in the data.
(df .resample('1D',how='mean') .groupby(lambda x : 1 if pd.datetools.isBusinessDay(x) else 0) .ENTRIESn_hourly .plot(legend=True)) plt.legend(['Not Business Day', 'Business Day']) plt.xlabel('By day in May 2011') plt.ylabel('Average number of ridership hourly') plt.title('Average number of ridership every day at in May 2011');
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
We can see that the difference in ridership between business days and non-business days is likely significant. We can create a new variable to turn this into a categorical variable.
df['BusinessDay'] = df.index.map(lambda x : 0 if pd.datetools.isBusinessDay(x) else 1) df.resample('1D').rain.value_counts()
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
Conclusion Since the data are observational and not a controlled experiment, we can't establish causation. However, there is likely no real difference in average hourly ridership between non-rainy and rainy days. We know that the dataset is taken from the NYC subway data, but because the data were not randomly sampled in this observation, we can't generalize to all people who use the subway in NYC. So we can't make any causal statement about whether there is a difference in average hourly ridership between rainy and non-rainy days. Moreover, the data also don't provide convincing evidence that the number of people riding the NYC subway is significantly different between rainy and non-rainy days. Using the statistical test: if in fact there were no difference in average hourly ridership between non-rainy and rainy days, the probability of getting a sample of 44104 rainy-day observations and 87847 non-rainy-day observations with an average difference of 15 riders would be 0.025. Such a small probability could mean that rain is a significant predictor and the difference is not due to chance. Using linear regression, we can say that, all else held constant, the model predicts hourly ridership on non-rainy days to be 117 people higher than on rainy days, on average. Reflection So where does this lead us? We could see that on an average day the number of riders still follows a pattern, but it's not clear how this changes across seasons since we only have limited data. The dataset could be extended to a full year: it only covers May 2011, and we have no idea how winter, autumn, summer, and spring affect ridership. That is more analysis that could be done. With the statistical test, I only analyzed whether rain makes a significant difference; the difference may be due to chance, or to some factor other than rain. Fog may make a significant difference, and as seen in the Visualization section, ridership differs between business days and non-business days. We have also seen that the distribution of hourly entries (ENTRIESn_hourly) is right-skewed, so we could apply a transformation to make it more normal. The relationship between ridership and business/non-business days is also not linear; it appears to be cyclical, so the model's predictions are not really linear. To test the performance of our model we can check the following: a linear relationship between every numerical explanatory variable and the response; nearly normal residuals with mean 0; constant variability of residuals; independent residuals. Our model is not a good fit if at least one of these diagnostics fails, and it does. Linear relationship between every numerical explanatory variable and the response: to test whether the model is good, we can plot all the numerical features against the residuals and see whether every plot is a random scatter around zero. This checks whether there is any remaining relationship between the residuals and the numerical features, to make sure no other dependent structure is left.
fig, axes = plt.subplots(nrows=1, ncols=3, sharey=True, squeeze=False)
numerics = ['maxpressurei', 'mintempi', 'precipi']
for i in xrange(len(numerics)):
    axes[0][i].scatter(x=features[numerics[i]], y=model.resid, alpha=0.1)
    axes[0][i].set_xlabel(numerics[i])
axes[0][0].set_ylabel('final model residuals')
axes[0][1].set_title('linear relationships between features and residual, alpha 0.1')
fig.set_size_inches(12, 5);
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
We see that, even though maxpressurei and mintempi look categorical, their residuals are randomly scattered. But precipi is not a good candidate for a linear relationship in the model: it does not appear to be randomly scattered. Nearly normal residuals with mean 0
fig,axes = plt.subplots(nrows=1,ncols=2,squeeze=False) sp.probplot(model.resid,plot=axes[0][0]) model.resid.hist(bins=20,ax=axes[0][1]); axes[0][1].set_title('Histogram of residuals') axes[0][1].set_xlabel('Residuals') axes[0][1].set_ylabel('Frequency');
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
Next, we check with a histogram whether the residuals are normally distributed. The histogram shows that they are fairly normal and centred around zero. The quantile plot checks whether the residuals are randomly scattered around zero; we can see that our model fails this test. The residuals are very skewed, as shown by the large number of points deviating from the reference line in the tails. This means that our linear regression is not a good model for this case. Constant variability of residuals
fig,axes = plt.subplots(nrows=1,ncols=2,squeeze=False) axes[0][0].scatter(x=model.fittedvalues, y=model.resid, alpha=0.1) axes[0][1].scatter(x=model.fittedvalues, y=abs(model.resid), alpha=0.1); axes[0][0].set_xlabel('fitted_values') axes[0][1].set_xlabel('fitted_values') axes[0][0].set_ylabel('Abs(residuals)') axes[0][1].set_ylabel('residuals'); fig.set_size_inches(13,5)
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
The model also fails this diagnostic. In the first plot, the fitted values and residuals should be randomly scattered around zero, not forming a fan shape. For the plot on the left, we see a boundary that keeps the points from being randomly scattered, and they form a fan shape. This could mean there is another dependent variable that we haven't found yet. Some fan shape also occurs in the plot on the right, which uses the absolute value of the residuals. Independent residuals
resids = pd.DataFrame(model.resid.copy()) resids.columns = ['residuals'] resids.index = pd.to_datetime(features['index']) resids.sort_index(inplace=True) plt.plot_date(x=resids.resample('1H',how='mean').index, y=resids.resample('1H',how='mean').residuals); plt.xlabel('Time Series') plt.ylabel('residuals') plt.title('Residuals Variability across time');
p2-introds/nyc_subway/project.ipynb
napjon/ds-nd
mit
Great, we have a function and we can reuse it. However, to actually reuse it we have to use the return statement.
def maximo(x, y):
    if x > y:
        return x
    else:
        return y
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
There we go! Now we can reuse our function. And how do we do that?
z = maximo(3, 4)
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
When we call the function maximo(3, 4) we are defining x = 3 and y = 4. The expressions in the body are then evaluated until there are no more expressions, in which case None is returned, or until the special keyword return is reached, whose expression becomes the value of the function call.
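A minimal illustration of this (an added example, not from the original notebook): a function whose body never reaches a return gives back None.

# Added example: a function without return produces None.
def greet(name):
    print("Hello, " + name)

result = greet("Ana")   # prints "Hello, Ana"
print(result)           # prints None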
print(z)
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
We now understand what functions are and how to create them. To practise, let's create a function that performs a calculation.
def economias(dinheiro, conta, gastos):
    total = (dinheiro + conta) - gastos
    return total

eco = economias(10, 20, 10)
print(eco)
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
We can also define a default value for one or more arguments. Let's rewrite the economias function so that gastos defaults to 150 when no value is passed.
def economias(dinheiro, conta, gastos=150):
    total = (dinheiro + conta) - gastos
    return total

print(economias(100, 60))
print(economias(100, 60, 10))
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
It is important to note that a variable that lives inside a function cannot be used outside of that function. In the programming world, this is called scope. Let's try to print the value of the variable dinheiro.
print(dinheiro)
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
<span style="color:blue;">Why did this happen?</span> This error happens because the variable dinheiro only exists inside the economias function, that is, it exists only in that function's local context. Let's modify the economias function once more:
def economias(dinheiro, conta, gastos=150):
    total = (dinheiro + conta) - gastos
    total = total + eco
    return total

print(economias(100, 60))
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
<span style="color:blue;">Why was there no problem this time?</span> When we use a variable defined outside the function inside a function, we are relying on the idea of global variables: in the global context that variable exists and can be used inside the function. <span style="color:red;">This is not recommended! The right approach would be to add a new argument!</span> Exercise on functions: create a function that receives two arguments. * The first argument is the price of a given service. * The second is the percentage of the fine for late payment. The default percentage, if none is passed, is 7%. The function must return the final amount of the bill including the fine. Remember to convert the 7%. (A sample solution sketch is given after the code stub below.)
def conta(valor, multa=7):
    # Your code here
    pass
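One possible solution for the exercise above (a sketch, not the official answer; it assumes the fine is a simple percentage of the service price):

# Possible solution sketch for the exercise (assumes a simple percentage fine).
def conta(valor, multa=7):
    # convert the percentage (7 -> 0.07) and add the late-payment fine
    return valor + valor * (multa / 100.0)

print(conta(100))      # 107.0 with the default 7% fine
print(conta(100, 10))  # 110.0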
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
Built-in functions: Python has a number of built-in functions that are always available. A complete list can be found at https://docs.python.org/3/library/functions.html. <span style="color:blue;">We have already used some of them! Which ones?</span> input: another very useful function is input. It lets the user type in a value, for example:
idade = input('Digite sua idade:') print(idade) nome = input('Digite seu nome:') print(nome) print(type(idade)) print(type(nome))
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
Note that both variables are strings. Therefore we need to convert the age to an integer.
idade = int(input("Digite sua idade:")) print(type(idade))
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
open: the open function opens a file for reading and writing: open(filename, mode). Modes: * r - opens the file for reading. * w - opens the file for writing. * a - opens the file for writing, appending data at the end of the file. * + - the file can be read and written at the same time.
import os
os.remove("arquivo.txt")

arq = open("arquivo.txt", "w")
for i in range(1, 5):
    arq.write('{}. Escrevendo em arquivo\n'.format(i))
arq.close()
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
Methods: read() - returns a single string with the whole content of the file. readlines() - the whole content of the file is stored in a list, where each line of the file becomes one element of the list.
f = open("arquivo.txt", "r") print(f, '\n') texto = f.read() print(texto) f.close() f = open("arquivo.txt", "r") texto = f.readlines() print(texto) f.close() #help(f.readlines)
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
To remove the \n we can use the read method, which produces a single string, and then apply the splitlines method.
f = open("arquivo.txt", "r") texto = f.read().splitlines() print(texto) f.close()
Python/2016-07-22/aula2-parte1-funcoes.ipynb
rubensfernando/mba-analytics-big-data
mit
Wavelet reconstruction We can reconstruct the sequence by $$ \hat y = W \hat \beta. $$ The objective is a likelihood term + an L1 penalty term, $$ \frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|. $$ The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity.
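For an orthonormal $W$ (as with a discrete wavelet basis, $W^\top W = I$), the problem decouples coordinate-wise and has a closed-form soft-thresholding solution (added here for reference; presumably the origin of the soft-thresholded coefficients tse_soft used in the code below):

$$ \hat\beta_i = \operatorname{sign}(z_i)\,\max(|z_i| - \lambda,\, 0), \qquad z = W^\top y. $$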
plt.plot(tse_soft[:,4])
high_idx = np.where(np.abs(tse_soft[:,4]) > .0001)[0]
print(high_idx)
fig, axs = plt.subplots(len(high_idx) + 1, 1)
for i, idx in enumerate(high_idx):
    axs[i].plot(W[:,idx])
plt.plot(tse_den['FTSE'], c='r')
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Non-orthogonal design

The objective is a likelihood term + an L1 penalty term,
$$ \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|, $$
which does not have a closed form for $X$ that is non-orthogonal.
- it is convex
- it is non-smooth (recall $|x|$)
- it has a tuning parameter $\lambda$

Compare to best subset selection (NP-hard):
$$ \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 \quad \textrm{for} \quad \| \beta \|_0 = |{\rm supp}(\beta)| < s. $$

Image of Lasso solution <img src="lasso_soln.PNG" width=100%>

Solving the Lasso

The lasso can be written in regularized form,
$$ \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|, $$
or in constrained form,
$$ \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2, \quad \textrm{s.t.} \sum_{i=1}^T |\beta_i| \le C. $$
For every $\lambda$ there is a $C$ such that the regularized form and the constrained form have the same argmin; this correspondence is data dependent.

Exercise 5.1. Solving the Lasso
A quadratic program (QP) is any convex optimization of the form
$$ \min \beta^\top Q \beta + \beta^\top a \quad \textrm{ s.t. } A\beta \le c $$
where $Q$ is positive semi-definite. Show that the lasso in constrained form is a QP. (Hint: write $\beta = \beta_+ - \beta_-$ where $\beta_{+,j} = \beta_{j} \mathbb 1\{ \beta_j > 0\}$ and $\beta_{-,j} = - \beta_{j} \mathbb 1\{ \beta_j < 0\}$.)

Solution to 5.1
The objective is certainly quadratic...
$$ \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 = \frac 12 \beta^\top (X^\top X) \beta - \beta^\top (X^\top y) + C $$
and we know that $X^\top X$ is PSD because $a^\top X^\top X a = \| X a\|^2 \ge 0$. What about $\| \beta \|_1$?

Solving the lasso
For a single $\lambda$ (or $C$ in constrained form) one can solve the lasso with many specialized methods:
- quadratic program solver
- proximal gradient
- alternating direction method of multipliers

but $\lambda$ is a tuning parameter. Options:
1. Construct a grid of $\lambda$ and solve each lasso
2. Solve for all $\lambda$ values - path algorithm

Active sets and why lasso works better
Let $\hat \beta_\lambda$ be the $\hat \beta$ at tuning parameter $\lambda$. Define $\mathcal A_\lambda = {\rm supp}(\hat \beta_\lambda)$, the non-zero elements of $\hat \beta_\lambda$.
1. For large $\lambda \rightarrow \infty$, $|\mathcal A_\lambda| = 0$
2. For small $\lambda = 0$, $|\mathcal A_\lambda| = p$ (when the OLS solution has full support)

Forward greedy selection only adds elements to the active set; it does not remove elements.

Exercise 5.2.1 Verify 1 and 2 above.

Lasso Path
- Start at $\lambda = +\infty, \hat \beta = 0$.
- Decrease $\lambda$ until $\hat \beta_{j_1} \ne 0$, $\mathcal A \gets \{j_1\}$. (Hitting event)
- Continue decreasing $\lambda$, updating $\mathcal A$ with hitting and leaving events.

$x_{j_1}$ is the predictor variable most correlated with $y$. Hitting events are when an element is added to $\mathcal A$; leaving events are when an element is removed from $\mathcal A$. $\hat \beta_{\lambda,j}$ is piecewise linear and continuous as a function of $\lambda$; the knots are at "hitting" and "leaving" events. (Figure from sklearn.org.)

Least Angle Regression (LAR)
- Standardize the predictors and start with residual $r = y - \bar y$, $\hat \beta = 0$.
- Find the $x_j$ most correlated with $r$.
- Move $\beta_j$ in the direction of $x_j^\top r$ until the residual is more correlated with another $x_k$.
- Move $\beta_j,\beta_k$ in the direction of their joint OLS coefficients of $r$ on $(x_j,x_k)$ until some other competitor $x_l$ has as much correlation with the current residual.
- Continue until all predictors have been entered.
Exercise 5.2.2 How do we know that LAR does not give us the Lasso solution?

Lasso modification (4.5): if a non-zero coefficient drops to 0, then remove it from the active set and recompute the restricted OLS direction. (From ESL.)
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection, linear_model
%matplotlib inline

## Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python
## which is based on the book by James et al. Intro to Statistical Learning.

df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()

## Simulate a dataset for lasso
n = 100
p = 1000
X = np.random.randn(n, p)
X = preprocessing.scale(X)

## Subselect true active set
sprob = 0.02
Sbool = np.random.rand(p) < sprob
s = np.sum(Sbool)
print("Number of non-zero's: {}".format(s))

## Construct beta and y
mu = 100.
beta = np.zeros(p)
beta[Sbool] = mu * np.random.randn(s)
eps = np.random.randn(n)
y = X.dot(beta) + eps
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Exercise 5.3 Run the lasso using linear_model.lars_path with the lasso modification (see docstring with ?linear_model.lars_path) Plot the lasso coefficients that are learned as a function of lambda. You should have a plot with the x-axis being lambda and the y-axis being the coefficient value, with $p=1000$ lines plotted. Highlight the $s$ coefficients that are truly non-zero by plotting them in red.
?linear_model.lars_path

## Answer to exercise 5.3
## Run lars with lasso mod, find active set
larper = linear_model.lars_path(X, y, method="lasso")
S = set(np.where(Sbool)[0])

def plot_it():
    for j in S:
        _ = plt.plot(larper[0], larper[2][j,:], 'r')
    for j in set(range(p)) - S:
        _ = plt.plot(larper[0], larper[2][j,:], 'k', linewidth=.75)
    _ = plt.title('Lasso path for simulated data')
    _ = plt.xlabel('lambda')
    _ = plt.ylabel('Coef')

plot_it()

## Hitters dataset
df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()

dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())

y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Exercise 5.4 You should cross-validate to select the lambda just like any other tuning parameter. Sklearn gives you the option of using their fast cross-validation script via linear_model.LassoCV, see the documentation. You can create a leave-one-out cross validator with model_selection.LeaveOneOut then pass this to LassoCV with the cv argument. Do this, and see what the returned fit and selected lambda are.
## Answer to 5.4
## Fit the lasso and cross-validate, increased max_iter to achieve convergence
loo = model_selection.LeaveOneOut()
looiter = loo.split(X)
hitlasso = linear_model.LassoCV(cv=looiter, max_iter=2000)
hitlasso.fit(X, y)

print("The selected lambda value is {:.2f}".format(hitlasso.alpha_))
hitlasso.coef_
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
We can also compare this to the selected model from forward stagewise regression: [-0.21830515, 0.38154135, 0. , 0. , 0. , 0.16139123, 0. , 0. , 0. , 0. , 0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. , 0. , 0. , -0.19429699, 0. ] This is not exactly the same model with differences in the inclusion or exclusion of AtBat, HmRun, Runs, RBI, Years, CHmRun, Errors, League_N, Division_W, NewLeague_N
bforw = [-0.21830515, 0.38154135, 0. , 0. , 0. , 0.16139123, 0. , 0. , 0. , 0. , 0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. , 0. , 0. , -0.19429699, 0. ] print(", ".join(X.columns[(hitlasso.coef_ != 0.) != (bforw != 0.)]))
lectures/lecture5/lecture5.ipynb
jsharpna/DavisSML
mit
Data The data are artificial but simulate what the revenue of a neighbourhood shop might look like: very strong Saturdays, a dull week, a busy Christmas, a flat summer.
from ensae_teaching_cs.data import generate_sells import pandas df = pandas.DataFrame(generate_sells()) df.head()
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
First plots The series has two seasonalities, weekly and monthly.
import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 2, figsize=(14, 4)) df.iloc[-30:].set_index('date').plot(ax=ax[0]) df.set_index('date').plot(ax=ax[1]) ax[0].set_title("chiffre d'affaire sur le dernier mois") ax[1].set_title("chiffre d'affaire sur deux ans");
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
It has a slight trend; we can compute a trend of order 1, 2, ...
from statsmodels.tsa.tsatools import detrend notrend = detrend(df.value, order=1) df["notrend"] = notrend df["trend"] = df['value'] - notrend ax = df.plot(x="date", y=["value", "trend"], figsize=(14,4)) ax.set_title('tendance');
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Autocorrelations...
from statsmodels.tsa.stattools import acf cor = acf(df.value) cor fig, ax = plt.subplots(1, 1, figsize=(14,2)) ax.plot(cor) ax.set_title("Autocorrélogramme");
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
The first seasonality shows up at lags 7, 14, 21... The partial autocorrelations confirm it, pointing to 7 days.
from statsmodels.tsa.stattools import pacf from statsmodels.graphics.tsaplots import plot_pacf plot_pacf(df.value, lags=50);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Since nothing happens on Sundays, it is better to remove them. Keeping the zeros would rule out multiplicative models.
df["weekday"] = df.date.dt.weekday df.head() df_nosunday = df[df.weekday != 6] df_nosunday.head(n=10) fig, ax = plt.subplots(1, 1, figsize=(14,2)) cor = acf(df_nosunday.value) ax.plot(cor) ax.set_title("Autocorrélogramme"); plot_pacf(df_nosunday.value, lags=50);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We decompose the series into trend + seasonality. The summers and Christmas stand out.
from statsmodels.tsa.seasonal import seasonal_decompose res = seasonal_decompose(df_nosunday.value, freq=7) res.plot(); plt.plot(res.seasonal[-30:]) plt.title("Saisonnalité"); cor = acf(res.trend[5:-5]); plt.plot(cor);
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We now look for the seasonality of the series once its weekly component has been removed. The monthly seasonality shows up again.
res_year = seasonal_decompose(res.trend[5:-5], freq=25)
res_year.plot();
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Stationarity test The KPSS test checks whether a series is stationary.
from statsmodels.tsa.stattools import kpss

kpss(res.trend[5:-5])
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Since the output is not always easy to interpret, we simulate a Gaussian random variable, hence one without any trend.
from numpy.random import randn

bruit = randn(1000)
kpss(bruit)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
And then a series with a strong trend.
from numpy.random import randn
from numpy import arange

bruit = randn(1000) * 100 + arange(1000) / 10
kpss(bruit)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
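Since the raw tuple is terse, here is a minimal helper sketch (assuming `kpss`, `randn` and `arange` imported in the cells above; statsmodels returns the statistic, the p-value, the number of lags and a dictionary of critical values) that compares the statistic to a chosen critical value:

def interpret_kpss(series, level="5%"):
    # kpss returns (statistic, p_value, n_lags, critical_values)
    stat, p_value, n_lags, crit = kpss(series)
    print("statistic = %.3f, p-value = %.3f, %s critical value = %.3f"
          % (stat, p_value, level, crit[level]))
    # The null hypothesis is stationarity: a statistic above the critical
    # value suggests a trend.
    return stat > crit[level]

interpret_kpss(randn(1000))                              # typically False: no trend
interpret_kpss(randn(1000) * 100 + arange(1000) / 10)    # typically True: strong trend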
A large value indicates a trend, and this series clearly has one. Prediction AR, ARMA and ARIMA models focus on a one-dimensional series. In machine learning, we have the series plus plenty of other information. We build a matrix of lagged series.
from statsmodels.tsa.tsatools import lagmat

lag = 8
X = lagmat(df_nosunday["value"], lag)
lagged = df_nosunday.copy()
for c in range(1, lag + 1):
    lagged["lag%d" % c] = X[:, c - 1]
lagged.tail()
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
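To make explicit what `lagmat` just computed, a minimal sketch (assuming `df_nosunday` from above) of the same construction with pandas `shift`: each `lagk` column holds the value observed k rows earlier, the first rows having no history (NaN here, zeros in the lagmat output):

import pandas

check = pandas.DataFrame({"date": df_nosunday["date"].values,
                          "value": df_nosunday["value"].values})
for k in range(1, 9):
    # value observed k rows earlier
    check["lag%d" % k] = check["value"].shift(k)
check.head(10)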
We add (or overwrite) the day of the week, which we use as an additional feature.
lagged["weekday"] = lagged.date.dt.weekday X = lagged.drop(["date", "value", "notrend", "trend"], axis=1) Y = lagged["value"] X.shape, Y.shape from numpy import corrcoef corrcoef(X)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Strange to see so many large values; it means the trend is too strong for the correlations to be meaningful, and it would be better to start over with the differenced series $\Delta Y_t = Y_t - Y_{t-1}$ (a quick sketch of that follows below). Anyway, let's move on...
X.columns
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
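The differencing suggested above can be sketched as follows (assuming `df_nosunday` and `lag` from the previous cells): build the lag features on $\Delta Y_t$ and look at correlations between columns rather than between rows:

import pandas
from numpy import corrcoef

# Differenced series: Delta Y_t = Y_t - Y_{t-1}
diff = df_nosunday["value"].diff().dropna()

# Lag features rebuilt on the differenced series
lagged_diff = pandas.DataFrame({"value": diff})
for k in range(1, lag + 1):
    lagged_diff["lag%d" % k] = diff.shift(k)
lagged_diff = lagged_diff.dropna()

# Correlations between columns (rowvar=False) instead of between observations
corrcoef(lagged_diff.values, rowvar=False).round(2)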
A linear regression, because linear models are always good baselines, and since we know the simulated model, we will not do much better anyway.
from sklearn.linear_model import LinearRegression

clr = LinearRegression()
clr.fit(X, Y)
from sklearn.metrics import r2_score
r2_score(Y, clr.predict(X))
clr.coef_
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
The seasonality shows up again: $Y_t$ and $Y_{t-6}$ move together.
for i in range(1, X.shape[1]):
    print("X(t-%d)" % (i), r2_score(Y, X.iloc[:, i]))
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Previously (last year, in fact), I built the train and test sets like this:
n = X.shape[0]
X_train = X.iloc[:n * 2 // 3]
X_test = X.iloc[n * 2 // 3:]
Y_train = Y[:n * 2 // 3]
Y_test = Y[n * 2 // 3:]
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Then scikit-learn came along with TimeSeriesSplit.
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for train_index, test_index in tscv.split(lagged):
    data_train, data_test = lagged.iloc[train_index, :], lagged.iloc[test_index, :]
    print("TRAIN:", data_train.shape, "TEST:", data_test.shape)
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
And we fit a random forest...
import warnings
from sklearn.ensemble import RandomForestRegressor

clr = RandomForestRegressor()

def train_test(clr, train_index, test_index):
    data_train = lagged.iloc[train_index, :]
    data_test = lagged.iloc[test_index, :]
    clr.fit(data_train.drop(["value", "date", "notrend", "trend"], axis=1),
            data_train.value)
    r2 = r2_score(data_test.value,
                  clr.predict(data_test.drop(["value", "date", "notrend", "trend"],
                                             axis=1).values))
    return r2

warnings.simplefilter("ignore")
last_test_index = None
for train_index, test_index in tscv.split(lagged):
    r2 = train_test(clr, train_index, test_index)
    if last_test_index is not None:
        r2_prime = train_test(clr, last_test_index, test_index)
        print(r2, r2_prime)
    else:
        print(r2)
    last_test_index = test_index
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
Two years cut into 5 folds, i.e. one split roughly every 5 months, means a fold sometimes includes Christmas and sometimes the summer, so the performance will be very sensitive to that.
from sklearn.metrics import r2_score

r2 = r2_score(data_test.value,
              clr.predict(data_test.drop(["value", "date", "notrend", "trend"],
                                         axis=1).values))
r2
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
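To illustrate that sensitivity, a minimal sketch (assuming `clr`, `data_test` and `r2_score` from the cells above) computing the $r^2$ month by month on the last test fold; months containing Christmas or the summer should behave differently:

# r2 per calendar month on the last test fold (illustration only)
preds = clr.predict(data_test.drop(["value", "date", "notrend", "trend"],
                                   axis=1).values)
by_month = data_test.assign(pred=preds).groupby(data_test.date.dt.to_period("M"))
for month, grp in by_month:
    if grp.shape[0] > 2:   # r2 is meaningless on one or two points
        print(month, "r2 = {:.3f}".format(r2_score(grp.value, grp.pred)))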
We compare this $r^2$ with the $r^2$ obtained by using $Y_{t-1}$, $Y_{t-2}$, ..., $Y_{t-d}$ directly as the prediction.
for i in range(1, 9):
    print(i, ":", r2_score(data_test.value, data_test["lag%d" % i]))
lagged[:5]
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
In fact, the day of the week is a categorical variable, so we create one column per day.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8']

ct = ColumnTransformer(
    [('pass', "passthrough", cols),
     ("dummies", OneHotEncoder(), ["weekday"])])
pred = ct.fit(lagged).transform(lagged[:5])
pred
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
We put everything into a pipeline because it is nicer, and more convenient too.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA, TruncatedSVD

cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8']

model = make_pipeline(
    make_pipeline(
        ColumnTransformer(
            [('pass', "passthrough", cols),
             ("dummies", make_pipeline(OneHotEncoder(),
                                       TruncatedSVD(n_components=2)), ["weekday"])]),
        LinearRegression()))
model.fit(lagged, lagged["value"])
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
It is easier to see this visually.
from mlinsights.plotting import pipeline2dot
dot = pipeline2dot(model, lagged)
from jyquickhelper import RenderJsDot
RenderJsDot(dot)
r2_score(lagged['value'], model.predict(lagged))
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
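The $r^2$ above is computed in-sample, hence optimistic. A minimal sketch (assuming `model`, `lagged`, `tscv` and `r2_score` from the previous cells) of the same score estimated out of sample with the TimeSeriesSplit defined earlier:

# Out-of-sample r2 of the pipeline on each expanding-window fold
for train_index, test_index in tscv.split(lagged):
    data_train = lagged.iloc[train_index, :]
    data_test = lagged.iloc[test_index, :]
    model.fit(data_train, data_train["value"])
    print("r2 = {:.3f}".format(
        r2_score(data_test["value"], model.predict(data_test))))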
Templating Completely off topic, but useful.
from jinja2 import Template

template = Template('Hello {{ name }}!')
template.render(name='John Doe')

template = Template("""
{{ name }}
{{ "-" * len(name) }}
Possède :
{% for i in range(len(meubles)) %}
- {{meubles[i]}}{% endfor %}
""")
meubles = ['table', "tabouret"]
print(template.render(name='John Doe Doe', len=len, meubles=meubles))
_doc/notebooks/td2a_ml/seasonal_timeseries.ipynb
sdpython/ensae_teaching_cs
mit
As an example, we set $\alpha = 0.2$ (closer to ridge regression) and give double weight to the latter half of the observations. To keep the display short, we set nlambda to 20; in practice, however, 100 values of $\lambda$ (the default) or more are recommended. In most cases this comes at little extra cost because of the warm starts used in the algorithm, and for nonlinear models it leads to better convergence properties.
# call glmnet
fit = glmnet(x = x.copy(), y = y.copy(), family = 'gaussian',
             weights = wts,
             alpha = 0.2, nlambda = 20)
docs/glmnet_vignette.ipynb
bbalasub1/glmnet_python
gpl-3.0
We can then print the glmnet object.
glmnetPrint(fit)
docs/glmnet_vignette.ipynb
bbalasub1/glmnet_python
gpl-3.0
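As a possible follow-up, not shown in the vignette text above, a sketch of extracting coefficients at chosen values of $\lambda$ with `glmnetCoef`; its import is assumed to be in scope like that of `glmnetPrint`, and the values 0.5 and 0.1 are arbitrary illustrations:

import scipy

# Hypothetical usage sketch: s is assumed to accept an array of lambda values;
# exact = False interpolates along the fitted lambda path rather than refitting.
coefs = glmnetCoef(fit, s = scipy.float64([0.5, 0.1]), exact = False)
print(coefs)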