Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.
# Run OpenMC
openmc.run()
Results Visualization. Now that we have run the simulation, let's look at the fission rate and flux tallies we recorded.
# Load the last statepoint file and keff value
sp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')

# Get the OpenMC pin power tally data
mesh_tally = sp.get_tally(name='mesh tally')
fission_rates = mesh_tally.get_values(scores=['fission'])

# Reshape array to 2D for plotting
fission_rates.shape = mesh.dimension

# Normalize to the average pin power
fission_rates /= np.mean(fission_rates)

# Force zeros to be NaNs so their values are not included when matplotlib
# calculates the color scale
fission_rates[fission_rates == 0.] = np.nan

# Plot the pin powers and the fluxes
plt.figure()
plt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower')
plt.colorbar()
plt.title('Pin Powers')
plt.show()
Sensitivity map of SSP projections. This example shows the sources whose forward field is similar to the first SSP vector used to correct for ECG artifacts.
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)

import matplotlib.pyplot as plt

from mne import read_forward_solution, read_proj, sensitivity_map
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()

subjects_dir = data_path + '/subjects'
fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ecg_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'

fwd = read_forward_solution(fname)
projs = read_proj(ecg_fname)
# take only one projection per channel type
projs = projs[::2]

# Compute sensitivity map
ssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')
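The computed map is a source estimate, so it can be shown on the cortical surface. A minimal sketch of one way to do that (an assumption, not part of the original example; SourceEstimate.plot needs a working 3D backend such as PySurfer, and the clim values here are illustrative):

# Sketch (assumption): render the sensitivity map on the 'sample' subject's brain
ssp_ecg_map.plot(subject='sample', subjects_dir=subjects_dir,
                 clim=dict(kind='percent', lims=(20, 50, 100)))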
Some sort of mapping between neural activity and a state in the world: my location, head tilt, an image, a remembered location. Intuitively, we call this "representation". In neuroscience, people talk about the 'neural code'. To formalize this notion, the NEF uses information theory (or coding theory).

Representation formalism
- Value being represented: $x$
- Neural activity: $a$
- Neuron index: $i$

Encoding and decoding
- Have to define both to define a code
- Lossless code (e.g. Morse code): encoding $a = f(x)$; decoding $x = f^{-1}(a)$
- Lossy code: encoding $a = f(x)$; decoding $\hat{x} = g(a) \approx x$

Distributed representation
- Not just one neuron per value of $x$
- Many different $a$ values for a single $x$
- Encoding: $a_i = f_i(x)$
- Decoding: $\hat{x} = g(a_0, a_1, a_2, a_3, ...)$

Example: binary representation. Encoding (nonlinear):
$$ a_i = \begin{cases} 1 &\mbox{if } x \bmod 2^{i} \ge 2^{i-1} \\ 0 &\mbox{otherwise} \end{cases} $$
Decoding (linear):
$$ \hat{x} = \sum_i a_i 2^{i-1} $$
Suppose $x = 13$. Encoding: $a_1 = 1$, $a_2 = 0$, $a_3 = 1$, $a_4 = 1$. Decoding: $\hat{x} = 1{\cdot}1 + 0{\cdot}2 + 1{\cdot}4 + 1{\cdot}8 = 13$.

Linear decoding
- Write the decoder as $\hat{x} = \sum_i a_i d_i$
- Linear decoding is nice and simple
- It works fine with nonlinear encoding (!)
- The NEF uses linear decoding, but what about the encoding?

Neuron encoding: $a_i = f_i(x)$. What do we know about neurons? <img src=files/lecture1/NeuronStructure.jpg> Firing rate goes up as total input current goes up: $a_i = G_i(J)$. What is $G_i$? That depends on how detailed a neuron model we want.
from IPython.display import YouTubeVideo
YouTubeVideo('hxdPdKbqm_I', width=720, height=400, loop=1, autoplay=0)
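To make the binary-representation example above concrete, here is a small sketch (not part of the lecture) that implements the nonlinear encoder and linear decoder and checks them on $x = 13$:

# Sketch: binary representation as a (nonlinear encode, linear decode) pair
def encode(x, n_bits=4):
    # a_i = 1 if x mod 2^i >= 2^(i-1), i.e. the i-th bit of x
    return [1 if x % 2**i >= 2**(i - 1) else 0 for i in range(1, n_bits + 1)]

def decode(a):
    # linear decoding: x_hat = sum_i a_i * 2^(i-1)
    return sum(a_i * 2**(i - 1) for i, a_i in enumerate(a, start=1))

a = encode(13)       # [1, 0, 1, 1]
print(a, decode(a))  # decode(a) == 13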
Rectified Linear Neuron
# Rectified linear neuron
%pylab inline
import numpy
import nengo

n = nengo.neurons.RectifiedLinear()

J = numpy.linspace(-1, 1, 100)
plot(J, n.rates(J, gain=10, bias=-5))
xlabel('J (current)')
ylabel('$a$ (Hz)');
Leaky integrate-and-fire neuron

$ a = {1 \over {\tau_{ref}-\tau_{RC}\ln(1-{1 \over J})}} $
# assume this has been run
# %pylab inline

# Leaky integrate-and-fire
import numpy
import nengo

n = nengo.neurons.LIFRate(tau_rc=0.02, tau_ref=0.002)  # n is a Nengo LIF neuron; these are the defaults

J = numpy.linspace(-1, 10, 100)
plot(J, n.rates(J, gain=1, bias=-3))
xlabel('J (current)')
ylabel('$a$ (Hz)');
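As a sanity check (a sketch, not part of the lecture), the curve above can be reproduced directly from the rate equation; n.rates(J, gain=1, bias=-3) feeds the current J*1 - 3 into the formula, and the rate is zero below the threshold current of 1:

# Sketch: evaluate a = 1 / (tau_ref - tau_rc * ln(1 - 1/J)) directly
import numpy

tau_rc, tau_ref = 0.02, 0.002
J_in = numpy.linspace(-1, 10, 100) * 1 + (-3)  # gain=1, bias=-3, as above

rates = numpy.zeros_like(J_in)
fires = J_in > 1  # below the threshold current the neuron never fires
rates[fires] = 1.0 / (tau_ref - tau_rc * numpy.log(1 - 1 / J_in[fires]))
# rates should now match n.rates(J, gain=1, bias=-3) from the cell above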
Response functions. These are called "response functions": how much the neural firing changes with a change in current. They are similar for many classes of cells (e.g. pyramidal cells, which make up most of cortex). This is the $G_i$ function in the NEF: it can be pretty much anything.

Tuning Curves. Neurons seem to be sensitive to particular values of $x$. How are neurons 'tuned' to a representation? Or: what's the mapping between $x$ and $a$? Recall 'place cells' and 'edge detectors'. Sometimes they are fairly straightforward: <img src=files/lecture2/tuning_curve_auditory.gif> But not often: <img src=files/lecture2/tuning_curve.jpg> <img src=files/lecture2/orientation_tuning.png> Is there a general form?

Tuning curves (cont.) The NEF suggests that there is: something generic and simple that covers all the above cases (and more). Let's start with the simpler case: <img src=files/lecture2/tuning_curve_auditory.gif> Note that the experimenters are graphing $a$ as a function of $x$; $x$ is much easier to measure than $J$. So there are two mappings of interest: $x \rightarrow J$ and $J \rightarrow a$ (the response function). Together these give the tuning curve. $x$ is the volume of the sound in this case. Any ideas?
# assume this has been run
# %pylab inline
import numpy
import nengo

n = nengo.neurons.LIFRate()  # n is a Nengo LIF neuron

x = numpy.linspace(-100, 0, 100)

plot(x, n.rates(x, gain=1, bias=50), 'b')    # J = x*1   + 50
plot(x, n.rates(x, gain=0.1, bias=10), 'r')  # J = x*0.1 + 10
plot(x, n.rates(x, gain=0.5, bias=5), 'g')   # J = x*0.5 + 5
plot(x, n.rates(x, gain=0.1, bias=4), 'c')   # J = x*0.1 + 4
xlabel('x')
ylabel('a');
For mapping #1, the NEF uses a linear map:
$$ J = \alpha x + J^{bias} $$
But what about type (c) in this graph? <img src=files/lecture2/tuning_curve.jpg> Easy enough:
$$ J = - \alpha x + J^{bias} $$
But what about type (b)? Or these ones? <img src=files/lecture2/orientation_tuning.png> There's usually some $x$ which gives a maximum firing rate, and thus a maximum $J$; the firing rate (and $J$) decrease as you get farther from the preferred $x$ value. So something like $J = \alpha [\mathrm{sim}(x, x_{pref})] + J^{bias}$. What sort of similarity measure? Let's think about $x$ for a moment: $x$ can be anything (scalar, vector, etc.). Does thinking of it as a vector help?

The Encoding Equation (i.e. Tuning Curves). Here is the general form we use for everything (it has both 'mappings' in it):
$$ a_i = G_i[\alpha_i x \cdot e_i + J_i^{bias}] $$
- $\alpha$ is a gain term (constrained to always be positive)
- $J^{bias}$ is a constant bias term
- $e$ is the encoder, or the preferred direction vector
- $G$ is the neuron model
- $i$ indexes the neuron

To simplify life, we always assume $e$ is of unit length; otherwise we could combine $\alpha$ and $e$. In the 1D case, $e$ is either +1 or -1. In higher dimensions, what happens?
# assume this has been run
# %pylab inline
import numpy
import nengo

n = nengo.neurons.LIFRate()

e = numpy.array([1.0, 1.0])
e = e / numpy.linalg.norm(e)

a = numpy.linspace(-1, 1, 50)
b = numpy.linspace(-1, 1, 50)
X, Y = numpy.meshgrid(a, b)

from mpl_toolkits.mplot3d.axes3d import Axes3D

fig = figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_surface(X, Y, n.rates((X*e[0] + Y*e[1]), gain=1, bias=1.5),
                    linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)
But that's not how people normally plot it. It might not make sense to sample every possible $x$; instead, they might use some subset. For example, what if we just plot the points around the unit circle?
import nengo
import numpy

n = nengo.neurons.LIFRate()

theta = numpy.linspace(0, 2*numpy.pi, 100)
x = numpy.array([numpy.cos(theta), numpy.sin(theta)])

plot(x[0], x[1])
axis('equal')

e = numpy.array([1.0, 1.0])
e = e / numpy.linalg.norm(e)
plot([0, e[0]], [0, e[1]], 'r')

gain = 1
bias = 2.5

figure()
plot(theta, n.rates(numpy.dot(x.T, e), gain=gain, bias=bias))
plot([numpy.arctan2(e[1], e[0])], 0, 'rv')
xlabel('angle')
ylabel('firing rate')
xlim(0, 2*numpy.pi);
That starts looking a lot more like the real data.

Notation
- Encoding: $a_i = G_i[\alpha_i x \cdot e_i + J^{bias}_i]$
- Decoding: $\hat{x} = \sum_i a_i d_i$
- The textbook uses $\phi$ for $d$ and $\tilde \phi$ for $e$; we're switching to $d$ (for decoder) and $e$ (for encoder)

Decoder. But where do we get $d_i$ from? $\hat{x}=\sum_i a_i d_i$. Find the optimal $d_i$. How? Math.

Solving for $d$: minimize the average error over all $x$, i.e.,
$$ E = \frac{1}{2}\int_{-1}^1 (x-\hat{x})^2 \, dx $$
Substitute for $\hat{x}$:
$$ E = \frac{1}{2}\int_{-1}^1 \left(x-\sum_i^N a_i d_i \right)^2 \, dx $$
Take the derivative with respect to $d_i$:
$$ \begin{align} {{\partial E} \over {\partial d_i}} &= {1 \over 2} \int_{-1}^1 2 \left[ x-\sum_j a_j d_j \right] (-a_i) \, dx \\ {{\partial E} \over {\partial d_i}} &= - \int_{-1}^1 a_i x \, dx + \int_{-1}^1 \sum_j a_j d_j a_i \, dx \end{align} $$
At the minimum (i.e. the smallest error), ${{\partial E} \over {\partial d_i}} = 0$:
$$ \begin{align} \int_{-1}^1 a_i x \, dx &= \int_{-1}^1 \sum_j a_j d_j a_i \, dx \\ \int_{-1}^1 a_i x \, dx &= \sum_j \left(\int_{-1}^1 a_i a_j \, dx\right)d_j \end{align} $$
That's a system of $N$ equations with $N$ unknowns. In fact, we can rewrite this in matrix form,
$$ \Upsilon = \Gamma d $$
where
$$ \begin{align} \Upsilon_i &= {1 \over 2} \int_{-1}^1 a_i x \, dx \\ \Gamma_{ij} &= {1 \over 2} \int_{-1}^1 a_i a_j \, dx \end{align} $$
Do we have to do the integral over all $x$? We can approximate the integral by sampling over $x$; $S$ is the number of $x$ values to use ($S$ for samples):
$$ \sum_x a_i x / S = \sum_j \left(\sum_x a_i a_j / S \right)d_j $$
so $\Upsilon = \Gamma d$ with
$$ \begin{align} \Upsilon_i &= \sum_x a_i x / S \\ \Gamma_{ij} &= \sum_x a_i a_j / S \end{align} $$
Notice that if $A$ is the matrix of activities (the firing rate of each neuron for each $x$ value), then $\Gamma = A^T A / S$ and $\Upsilon = A^T x / S$. So given $\Upsilon = \Gamma d$, we have
$$ d = \Gamma^{-1} \Upsilon $$
or, equivalently,
$$ d_i = \sum_j \Gamma^{-1}_{ij} \Upsilon_j $$
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform

N = 10

model = nengo.Network(label='Neurons')
with model:
    neurons = nengo.Ensemble(N, dimensions=1,
                             max_rates=Uniform(100, 200))
    # Defaults to LIF neurons, with random gains and biases for
    # neurons between 100-200Hz over -1, 1

    connection = nengo.Connection(neurons, neurons,  # This is just to generate the decoders
                                  solver=nengo.solvers.LstsqL2(reg=0))  # reg=0 means ignore noise

sim = nengo.Simulator(model)

d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)

plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)')

figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)

figure()
plot(x, xhat - x)
xlabel('$x$')
ylabel('$\hat{x}-x$')
xlim(-1, 1)

print('RMSE', np.sqrt(np.average((x - xhat)**2)))
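The same solve can also be written out directly in numpy, following the $\Gamma = A^T A / S$ and $\Upsilon = A^T x / S$ expressions above (a sketch reusing the x and A arrays from the previous cell):

# Sketch: solve for the decoders directly from the tuning curves
import numpy

S = len(x)                       # number of sample points
Gamma = numpy.dot(A.T, A) / S    # Gamma = A^T A / S
Upsilon = numpy.dot(A.T, x) / S  # Upsilon = A^T x / S

# use a pseudo-inverse, since Gamma can be ill-conditioned with reg=0
d_direct = numpy.dot(numpy.linalg.pinv(Gamma), Upsilon)
xhat_direct = numpy.dot(A, d_direct)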
What happens to the error with more neurons?

Noise. Neurons aren't perfect:
- Axonal jitter
- Neurotransmitter vesicle release failure (~80%)
- Amount of neurotransmitter per vesicle
- Thermal noise
- Ion channel noise (# of channels open and closed)
- Network effects
More information: http://icwww.epfl.ch/~gerstner/SPNM/node33.html

How do we include this noise as well? We could make the neuron model more complicated, but a simple approach is to add Gaussian random noise to $a_i$. Set the noise standard deviation $\sigma$ to 20% of the maximum firing rate; each $a_i$ value for each $x$ value gets a different noise value added to it. What effect does this have on decoding?
# Have to run the previous python cell first
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)

xhat = numpy.dot(A_noisy, d)

plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')

figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)

print('RMSE', np.sqrt(np.average((x - xhat)**2)))
What if we just increase the number of neurons? Will it help?

Taking noise into account: include the noise while solving for the decoders. Introduce a noise term $\eta$:
$$ \begin{align} \hat{x} &= \sum_i(a_i+\eta)d_i \\ E &= {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 \, dx \, d\eta \\ &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i(a_i+\eta)d_i\right)^2 \, dx \, d\eta \\ &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i - \sum_i \eta d_i \right)^2 \, dx \, d\eta \end{align} $$
Assume the noise is Gaussian, independent, mean zero, and has the same variance for each neuron: $\eta = \mathcal{N}(0, \sigma)$. All the noise cross-terms disappear (independence):
$$ \begin{align} E &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \, dx + \sum_{i,j} d_i d_j \langle \eta_i \eta_j \rangle_\eta \\ &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \, dx + \sum_{i} d_i d_i \langle \eta_i \eta_i \rangle_\eta \end{align} $$
Since the average of $\eta_i \eta_i$ is the noise variance (the mean is zero), $\sigma^2$, we get
$$ E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \, dx + \sigma^2 \sum_i d_i^2 $$
The practical result is that, when computing the decoder, we get
$$ \Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij} $$
where $\delta_{ij}$ is the Kronecker delta: http://en.wikipedia.org/wiki/Kronecker_delta. To simplify computing this using matrices, this can be written as $\Gamma = A^T A / S + \sigma^2 I$.
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform

N = 100

model = nengo.Network(label='Neurons')
with model:
    neurons = nengo.Ensemble(N, dimensions=1,
                             max_rates=Uniform(100, 200))
    # Defaults to LIF neurons, with random gains and biases for
    # neurons between 100-200Hz over -1, 1

    connection = nengo.Connection(neurons, neurons,  # This is just to generate the decoders
                                  solver=nengo.solvers.LstsqNoise(noise=0.2))  # Add noise ###NEW

sim = nengo.Simulator(model)

d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)

A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)

plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')

figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)

print('RMSE', np.sqrt(np.average((x - xhat)**2)))
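For comparison, the noise-regularized solve from the derivation above can be written out directly as well (a sketch; $\sigma$ is taken as 20% of the maximum rate, matching the cell above):

# Sketch: decoders with the noise term sigma^2 * I added to Gamma
import numpy

S = len(x)
sigma = 0.2 * numpy.max(A)

Gamma = numpy.dot(A.T, A) / S + sigma**2 * numpy.eye(A.shape[1])  # A^T A / S + sigma^2 I
Upsilon = numpy.dot(A.T, x) / S

d_reg = numpy.linalg.solve(Gamma, Upsilon)  # the regularization makes Gamma well-conditioned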
Number of neurons. What happens to the error with more neurons? Note that the error has two parts:
$$ E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \, dx + \sigma^2 \sum_i d_i^2 $$
The first is the error due to static distortion (i.e. the error introduced by the decoders themselves), which is present regardless of noise:
$$ E_{distortion} = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 dx $$
The second is the error due to noise:
$$ E_{noise} = \sigma^2 \sum_i d_i^2 $$
What do these look like as the number of neurons $N$ increases? <img src="files/lecture2/repn_noise.png">
- Noise error is proportional to $1/N$
- Distortion error is proportional to $1/N^2$
- Remember that this error $E$ is defined as $E = {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 dx$, so it's actually a squared error term
Also, once the number of neurons is greater than 100 or so, the error is dominated by the noise term ($1/N$).

Examples. Methodology for building models with the Neural Engineering Framework (outlined in Chapter 1):
1. System Description: describe the system of interest in terms of the neural data, architecture, computations, representations, etc. (e.g. response functions, tuning curves, etc.)
2. Design Specification: add additional performance constraints (e.g. bandwidth, noise, SNR, dynamic range, stability, etc.)
3. Implement the model: employ the NEF principles given the System Description and Design Specification

Example 1: Horizontal Eye Control (1D). From http://www.nature.com/nrn/journal/v3/n12/full/nrn986.html <img src="files/lecture2/horizontal_eye.jpg"> There are also neurons whose response goes the other way. All of the neurons are directly connected to the muscle controlling the horizontal direction of the eye, and that's the only thing that muscle does, so we're pretty sure this is what's being represented.

System Description: we've only done the first NEF principle, so that's all we'll worry about. What is being represented? $x$ is the horizontal position. Tuning curves: extremely linear (high $\tau_{RC}$, low $\tau_{ref}$); some have $e=1$, some have $e=-1$ (these are often called "on" and "off" neurons, respectively). Firing rates of up to 300Hz.

Design Specification: range of values for $x$: -60 degrees to +60 degrees. Normal levels of noise: $\sigma$ is 20% of the maximum firing rate (the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum firing rate).

Implementation: examine the tuning curves, then use principle 1.
# %pylab inline
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform

N = 40
tau_rc = .2
tau_ref = .001

lif_model = nengo.LIFRate(tau_rc=tau_rc, tau_ref=tau_ref)

model = nengo.Network(label='Neurons')
with model:
    neurons = nengo.Ensemble(N, dimensions=1,
                             max_rates=Uniform(250, 300),
                             neuron_type=lif_model)

sim = nengo.Simulator(model)

x, A = tuning_curves(neurons, sim)

plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)');
How good is the representation?
# Have to run the previous code cell first
noise = 0.2

with model:
    connection = nengo.Connection(neurons, neurons,  # This is just to generate the decoders
                                  solver=nengo.solvers.LstsqNoise(noise=noise))  # Add noise ###NEW

sim = nengo.Simulator(model)

d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)

A_noisy = A + numpy.random.normal(scale=noise*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)

print('RMSE with %d neurons is %g' % (N, np.sqrt(np.average((x - xhat)**2))))

figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1);
Possible questions: How many neurons do we need for a particular level of accuracy? What happens with different firing rates? What happens with different distributions of x-intercepts?

Example 2: Arm Movements (2D). Georgopoulos et al., 1982. "On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex." <img src="files/lecture2/armmovement1.jpg"> <img src="files/lecture2/armmovement2.png"> <img src="files/lecture2/armtuningcurve.png">

System Description: what is being represented? $x$ is the hand position. Note that this is different from what Georgopoulos talks about in this initial paper: the initial paper only looks at those 8 positions, so it only talks about the direction of movement (angle but not magnitude). More recent work in the same area shows the cells respond to both (Fu et al., 1993; Messier and Kalaska, 2000). Bell-shaped tuning curves. Encoders: randomly distributed around the unit circle. Firing rates of up to 60Hz.

Design Specification: range of values for $x$: anywhere within a unit circle (or perhaps some other radius). Normal levels of noise: $\sigma$ is 20% of the maximum firing rate (the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum).

Implementation: examine the tuning curves.
import numpy
import nengo

n = nengo.neurons.LIFRate()

theta = numpy.linspace(-numpy.pi, numpy.pi, 100)
x = numpy.array([numpy.sin(theta), numpy.cos(theta)])
e = numpy.array([1.0, 0])

plot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1, gain=0.2))  # bias 1->1.5
xlabel('angle')
ylabel('firing rate')
xlim(-180, 180)
show()
This notebook will demonstrate how to use skrf to do a two-tiered one-port calibration. We'll use data that was taken to characterize a waveguide-to-CPW probe. So, for this specific example the diagram above looks like:
SVG('oneport_tiered_calibration/images/probe.svg')
Some Data. The data available is in the folders 'tier1/' and 'tier2/'.
ls oneport_tiered_calibration/
(If you don't have the git repo for these examples, the data for this notebook can be found here.) In each folder you will find two sub-folders, called 'ideals/' and 'measured/'. These contain touchstone files of the calibration standards' ideal and measured responses, respectively.
ls oneport_tiered_calibration/tier1/
The first tier is at the waveguide interface and consists of the following set of standards: short, delay short, load, radiating open (literally an open waveguide).
ls oneport_tiered_calibration/tier1/measured/
Creating Calibrations. Tier 1: first, defining the calibration for Tier 1.
from skrf.calibration import OnePort
import skrf as rf

%matplotlib inline
from pylab import *
rf.stylely()

tier1_ideals = rf.read_all_networks('oneport_tiered_calibration/tier1/ideals/')
tier1_measured = rf.read_all_networks('oneport_tiered_calibration/tier1/measured/')

tier1 = OnePort(measured=tier1_measured,
                ideals=tier1_ideals,
                name='tier1',
                sloppy_input=True)
tier1
Because we saved corresponding ideal and measured standards with identical names, the Calibration will automatically align our standards upon initialization. (More info on creating Calibration objects can be found in the docs.) Similarly for the second tier. Tier 2:
tier2_ideals = rf.read_all_networks('oneport_tiered_calibration/tier2/ideals/')
tier2_measured = rf.read_all_networks('oneport_tiered_calibration/tier2/measured/')

tier2 = OnePort(measured=tier2_measured,
                ideals=tier2_ideals,
                name='tier2',
                sloppy_input=True)
tier2
Error Networks. Each one-port Calibration contains a two-port error network, which is determined from the calculated error coefficients. The error network for tier1 models the VNA, while the error network for tier2 represents the VNA and the DUT. These can be visualized through the attribute 'error_ntwk'. For tier 1,
tier1.error_ntwk.plot_s_db()
title('Tier 1 Error Network')
Similarly for tier 2,
tier2.error_ntwk.plot_s_db()
title('Tier 2 Error Network')
De-embedding the DUT. As previously stated, the error network for tier1 models the VNA, and the error network for tier2 represents the VNA+DUT. So to determine the DUT's response, we cascade the inverse S-parameters of the VNA with the VNA+DUT:
$$ DUT = VNA^{-1}\cdot (VNA \cdot DUT)$$
In skrf, this is done as follows:
dut = tier1.error_ntwk.inv ** tier2.error_ntwk
dut.name = 'probe'

dut.plot_s_db()
title('Probe S-parameters')
ylim(-60, 10)
You may want to save this to disk for future use: dut.write_touchstone()
ls probe*
In pandas
title = "bar chart2"

index = pd.date_range("8/24/2018", periods=6, freq="M")
df1 = pd.DataFrame(np.random.randn(6), index=index)
df2 = pd.DataFrame(np.random.rand(6), index=index)

dfvalue1 = [i[0] for i in df1.values]
dfvalue2 = [i[0] for i in df2.values]
_index = [i for i in df1.index.format()]

bar = pyecharts.Bar(title, "Profit and loss situation")
bar.add("profit", _index, dfvalue1)
bar.add("loss", _index, dfvalue2)
bar.height = 500
bar.width = 800
bar

from pyecharts import Bar, Line, Overlap

attr = ['A', 'B', 'C', 'D', 'E', 'F']
v1 = [10, 20, 30, 40, 50, 60]
v2 = [38, 28, 58, 48, 78, 68]

bar = Bar("Line Bar")
bar.add("bar", attr, v1)
line = Line()
line.add("line", attr, v2)

overlap = Overlap()
overlap.add(bar)
overlap.add(line)
overlap

from pyecharts import Pie

attr = ['A', 'B', 'C', 'D', 'E', 'F']
v1 = [10, 20, 30, 40, 50, 60]
v2 = [38, 28, 58, 48, 78, 68]

pie = Pie("pie chart", title_pos="center", width=600)
pie.add("A", attr, v1, center=[25, 50], is_random=True,
        radius=[30, 75], rosetype='radius')
pie.add("B", attr, v2, center=[75, 50], is_random=True,
        radius=[30, 75], rosetype='area',
        is_legend_show=False, is_label_show=True)
pie
Horizontal bar chart
bar = Bar("가로 그래프")  # title: "Horizontal bar chart"
bar.add("A", attr, v1)
bar.add("B", attr, v2, is_convert=True)
bar.width = 800
bar
Slider
import random

attr = ["{}th".format(i) for i in range(30)]
v1 = [random.randint(1, 30) for _ in range(30)]

bar = Bar("Bar - datazoom - slider")
bar.add("", attr, v1, is_label_show=True, is_datazoom_show=True)
# bar.render()
bar

days = ["{}th".format(i) for i in range(30)]
days_v1 = [random.randint(1, 30) for _ in range(30)]

bar = Bar("Bar - datazoom - xaxis/yaxis")
bar.add(
    "",
    days,
    days_v1,
    is_datazoom_show=True,
    datazoom_type="slider",
    datazoom_range=[10, 25],
    is_datazoom_extra_show=True,
    datazoom_extra_type="slider",
    datazoom_extra_range=[10, 25],
    is_toolbox_show=False,
)
# bar.render()
bar
3D
from pyecharts import Bar3D

bar3d = Bar3D("3D Graph", width=1200, height=600)

x_axis = [
    "12a", "1a", "2a", "3a", "4a", "5a", "6a", "7a", "8a", "9a", "10a", "11a",
    "12p", "1p", "2p", "3p", "4p", "5p", "6p", "7p", "8p", "9p", "10p", "11p"
]
y_axis = [
    "Saturday", "Friday", "Thursday", "Wednesday", "Tuesday", "Monday", "Sunday"
]
data = [
    [0, 0, 5], [0, 1, 1], [0, 2, 0], [0, 3, 0], [0, 4, 0], [0, 5, 0], [0, 6, 0], [0, 7, 0], [0, 8, 0], [0, 9, 0], [0, 10, 0], [0, 11, 2],
    [0, 12, 4], [0, 13, 1], [0, 14, 1], [0, 15, 3], [0, 16, 4], [0, 17, 6], [0, 18, 4], [0, 19, 4], [0, 20, 3], [0, 21, 3], [0, 22, 2], [0, 23, 5],
    [1, 0, 7], [1, 1, 0], [1, 2, 0], [1, 3, 0], [1, 4, 0], [1, 5, 0], [1, 6, 0], [1, 7, 0], [1, 8, 0], [1, 9, 0], [1, 10, 5], [1, 11, 2],
    [1, 12, 2], [1, 13, 6], [1, 14, 9], [1, 15, 11], [1, 16, 6], [1, 17, 7], [1, 18, 8], [1, 19, 12], [1, 20, 5], [1, 21, 5], [1, 22, 7], [1, 23, 2],
    [2, 0, 1], [2, 1, 1], [2, 2, 0], [2, 3, 0], [2, 4, 0], [2, 5, 0], [2, 6, 0], [2, 7, 0], [2, 8, 0], [2, 9, 0], [2, 10, 3], [2, 11, 2],
    [2, 12, 1], [2, 13, 9], [2, 14, 8], [2, 15, 10], [2, 16, 6], [2, 17, 5], [2, 18, 5], [2, 19, 5], [2, 20, 7], [2, 21, 4], [2, 22, 2], [2, 23, 4],
    [3, 0, 7], [3, 1, 3], [3, 2, 0], [3, 3, 0], [3, 4, 0], [3, 5, 0], [3, 6, 0], [3, 7, 0], [3, 8, 1], [3, 9, 0], [3, 10, 5], [3, 11, 4],
    [3, 12, 7], [3, 13, 14], [3, 14, 13], [3, 15, 12], [3, 16, 9], [3, 17, 5], [3, 18, 5], [3, 19, 10], [3, 20, 6], [3, 21, 4], [3, 22, 4], [3, 23, 1],
    [4, 0, 1], [4, 1, 3], [4, 2, 0], [4, 3, 0], [4, 4, 0], [4, 5, 1], [4, 6, 0], [4, 7, 0], [4, 8, 0], [4, 9, 2], [4, 10, 4], [4, 11, 4],
    [4, 12, 2], [4, 13, 4], [4, 14, 4], [4, 15, 14], [4, 16, 12], [4, 17, 1], [4, 18, 8], [4, 19, 5], [4, 20, 3], [4, 21, 7], [4, 22, 3], [4, 23, 0],
    [5, 0, 2], [5, 1, 1], [5, 2, 0], [5, 3, 3], [5, 4, 0], [5, 5, 0], [5, 6, 0], [5, 7, 0], [5, 8, 2], [5, 9, 0], [5, 10, 4], [5, 11, 1],
    [5, 12, 5], [5, 13, 10], [5, 14, 5], [5, 15, 7], [5, 16, 11], [5, 17, 6], [5, 18, 0], [5, 19, 5], [5, 20, 3], [5, 21, 4], [5, 22, 2], [5, 23, 0],
    [6, 0, 1], [6, 1, 0], [6, 2, 0], [6, 3, 0], [6, 4, 0], [6, 5, 0], [6, 6, 0], [6, 7, 0], [6, 8, 0], [6, 9, 0], [6, 10, 1], [6, 11, 0],
    [6, 12, 2], [6, 13, 1], [6, 14, 3], [6, 15, 4], [6, 16, 0], [6, 17, 0], [6, 18, 0], [6, 19, 0], [6, 20, 1], [6, 21, 2], [6, 22, 2], [6, 23, 6]
]
range_color = ['#313695', '#4575b4', '#74add1', '#abd9e9', '#e0f3f8', '#ffffbf',
               '#fee090', '#fdae61', '#f46d43', '#d73027', '#a50026']

bar3d.add(
    "",
    x_axis,
    y_axis,
    [[d[1], d[0], d[2]] for d in data],
    is_visualmap=True,
    visual_range=[0, 20],
    visual_range_color=range_color,
    grid3d_width=200,
    grid3d_depth=80,
)
bar3d.width = 700
bar3d.height = 500
bar3d
Boxplot
from pyecharts import Boxplot

boxplot = Boxplot("Box plot")

x_axis = ['expr1', 'expr2', 'expr3', 'expr4', 'expr5']
y_axis = [
    [850, 740, 900, 1070, 930, 850, 950, 980, 980, 880, 1000, 980, 930, 650, 760, 810, 1000, 1000, 960, 960],
    [960, 940, 960, 940, 880, 800, 850, 880, 900, 840, 830, 790, 810, 880, 880, 830, 800, 790, 760, 800],
    [880, 880, 880, 860, 720, 720, 620, 860, 970, 950, 880, 910, 850, 870, 840, 840, 850, 840, 840, 840],
    [890, 810, 810, 820, 800, 770, 760, 740, 750, 760, 910, 920, 890, 860, 880, 720, 840, 850, 850, 780],
    [890, 840, 780, 810, 760, 810, 790, 810, 820, 850, 870, 870, 810, 740, 810, 940, 950, 800, 810, 870]
]

_yaxis = boxplot.prepare_data(y_axis)  # compute min/Q1/median/Q3/max for each series
boxplot.add("boxplot", x_axis, _yaxis)
boxplot
Funnel
from pyecharts import Funnel

attr = ["A", "B", "C", "D", "E", "F"]
value = [20, 40, 60, 80, 100, 120]

funnel = Funnel("퍼널 그래프")  # title: "Funnel chart"
funnel.add(
    "퍼널",  # series name: "funnel"
    attr,
    value,
    is_label_show=True,
    label_pos="inside",
    label_text_color="#fff",
)
funnel.width = 700
funnel.height = 500
funnel
Gauge
from pyecharts import Gauge

gauge = Gauge("Gauge Graph")
gauge.add("이용률", "가운데", 66.66)  # series "utilization rate", label "center"
gauge
Import the human face dataset. In the code cell below, we import the dataset of human face images; the file paths are stored in a numpy array named human_files.
import random
random.seed(8675309)

# Load the shuffled filenames of the human face dataset
human_files = np.array(glob("/data/lfw/*/*"))
random.shuffle(human_files)

# Print the size of the dataset
print('There are %d total human images.' % len(human_files))
<a id='step1'></a> Step 1: Detect humans. We will use OpenCV's Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detection models, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the code cell below, we demonstrate how to use this detector to find human faces in a sample image.
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

# Extract the pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# Load a color (channel order BGR) image
img = cv2.imread(human_files[3])

# Convert the BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Find the faces in the image
faces = face_cascade.detectMultiScale(gray)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Get the bounding box for each detected face
for (x, y, w, h) in faces:
    # Draw the bounding box on the image
    cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)

# Convert the BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Display the image with the bounding boxes
plt.imshow(cv_rgb)
plt.show()
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade on the input grayscale image. In the code above, faces is a numpy array of detected faces, where each row corresponds to one detected face. Each row contains four values: the first two entries, x and y, are the horizontal and vertical coordinates of the top-left corner of the bounding box (note, referring to the image above, that the direction of the y axis differs from the usual convention); the last two entries, w and h, are the extents of the box along the x and y axes, i.e. its width and height. Writing a face detector: we can wrap this procedure in a function. The function takes the path to an image as input, returns True if a human face is detected in the image, and returns False otherwise. It is defined below.
# Returns "True" if a face is detected in the image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0
[Exercise] Assess the face detector. <a id='question1'></a> Question 1: In the code block below, use the face_detector function to compute: What percentage of the first 100 images in human_files have a detected human face? What percentage of the first 100 images in dog_files have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in human_files_short and dog_files_short.
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
## Do not modify the code above

## TODO: Test the performance of face_detector on
## the images in human_files_short and dog_files_short
human_files_short_detect = 0
dog_files_short_detect = 0

for i in range(100):
    if face_detector(human_files_short[i]):
        human_files_short_detect += 1
    if face_detector(dog_files_short[i]):
        dog_files_short_detect += 1

print("The percentage of detecting human faces in human files is:",
      human_files_short_detect / human_files_short.size)
print("The percentage of detecting human faces in dog files is:",
      dog_files_short_detect / dog_files_short.size)
<a id='question2'></a> Question 2: For this algorithm, the key to success is whether the user can supply face images with clear facial features. Do you think this is a reasonable expectation to pose on the user in practice? If not, can you think of a way to detect faces even when the facial features in an image are not clear? Answer: It is not quite reasonable; images come from different sources, so we cannot guarantee that the face is clear in every image. If the facial features are unclear, the images should be preprocessed beforehand. <a id='Selection1'></a> Optional: We suggest using OpenCV's face detector on the human images in your algorithm, but you are free to explore other approaches, especially deep-learning-based ones :). Use the code cell below to design and test your own face detection algorithm. If you decide to complete this optional task, report performance on each of the datasets.
## (Optional) TODO: Report the performance of another face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed
<a id='step2'></a> Step 2: Detect dogs. In this section, we use a pre-trained ResNet-50 model to detect dogs in images. The first line of code below downloads the ResNet-50 model architecture along with weights pre-trained on ImageNet. ImageNet is a very popular dataset, commonly used to benchmark algorithms for image classification and other computer vision tasks. It contains over 10 million URLs, each linking to an image of an object from one of 1000 categories. Given an input image, this ResNet-50 model returns a prediction of the object contained in the image.
from keras.applications.resnet50 import ResNet50

# Define the ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
Data preprocessing. When using TensorFlow as the backend, Keras CNNs require a 4D array (also called a 4D tensor) as input, with shape (nb_samples, rows, columns, channels), where nb_samples is the total number of images (or samples) and rows, columns, and channels correspond to the number of rows, columns, and channels of each image. The path_to_tensor function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for a Keras CNN. Since our input images are color images, they have three channels (channels is 3). The function first loads the image and resizes it to a 224×224 image; next, the image is converted to a 4D tensor. For any input image, the returned tensor will always have shape (1, 224, 224, 3). The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape (nb_samples, 224, 224, 3). Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. You can also think of nb_samples as the number of 3D tensors in the dataset (where each 3D tensor corresponds to a different image).
from keras.preprocessing import image
from tqdm import tqdm

def path_to_tensor(img_path):
    # Load the RGB image as a PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # Convert the PIL.Image.Image type to a 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # Convert the 3D tensor to a 4D tensor with shape (1, 224, 224, 3) and return it
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
Making predictions with ResNet-50. Getting the 4D tensors ready for ResNet-50, or any other pre-trained model in Keras, requires some additional processing: 1. First, the images' channel order is RGB, and we need to reorder the channels to BGR. 2. Second, all pre-trained models have an additional normalization step: the mean pixel [103.939, 116.779, 123.68] (expressed in RGB and computed from all ImageNet images) must be subtracted from every pixel of each image. The imported preprocess_input function implements both steps; if you are curious, you can check its code here. Now that we have a way to format our images, we can use the model to make predictions. This is done with the predict method, which returns a vector whose i-th entry is the model's predicted probability that the image belongs to the i-th ImageNet category. This is implemented in the ResNet50_predict_labels function below. By taking the argmax of the predicted vector (the index of the largest probability), we obtain an integer corresponding to the model's predicted object class, which we can identify with a dog breed through this dictionary.
from keras.applications.resnet50 import preprocess_input, decode_predictions

def ResNet50_predict_labels(img_path):
    # Return the index of the prediction vector for the image located at img_path
    img = preprocess_input(path_to_tensor(img_path))
    return np.argmax(ResNet50_model.predict(img))
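decode_predictions is imported above but not used in this cell. As a side note, it gives a quick sanity check by mapping the raw prediction vector to human-readable ImageNet labels (a sketch; the image path is just an example):

# Sketch: inspect the top-3 ImageNet labels for one image
img_path = human_files[3]  # example image; any valid path works
preds = ResNet50_model.predict(preprocess_input(path_to_tensor(img_path)))
print(decode_predictions(preds, top=3))  # list of (class_id, description, probability) tuples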
Completing the dog detector. While looking at the dictionary, you will notice that the categories corresponding to dogs appear as an uninterrupted sequence at indices 151-268. Thus, to check whether the pre-trained model predicts that an image contains a dog, we need only check whether the ResNet50_predict_labels function above returns a value between 151 and 268 (inclusive). We use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).
def dog_detector(img_path):
    prediction = ResNet50_predict_labels(img_path)
    return (prediction <= 268) & (prediction >= 151)
[Assignment] Assess the dog detector. <a id='question3'></a> Question 3: In the code block below, use the dog_detector function to compute: What percentage of the images in human_files_short have a detected dog? What percentage of the images in dog_files_short have a detected dog?
### TODO: Test the performance of dog_detector on human_files_short and dog_files_short
human_files_short_detect = 0
dog_files_short_detect = 0

for i in range(100):
    if dog_detector(human_files_short[i]):
        human_files_short_detect += 1
    if dog_detector(dog_files_short[i]):
        dog_files_short_detect += 1

print("The percentage of detecting dogs in human files is:",
      human_files_short_detect / human_files_short.size)
print("The percentage of detecting dogs in dog files is:",
      dog_files_short_detect / dog_files_short.size)
<a id='step3'></a> Step 3: Create a CNN to classify dog breeds (from scratch). Now that we have functions for detecting humans and dogs in images, we need a way to predict a breed from an image. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so you can't use transfer learning yet!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN with greatly improved accuracy. Be careful about adding too many (trainable) layers. More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time each epoch is likely to take; you can extrapolate this estimate to figure out how long your algorithm will need to train. It is worth noting that classifying images of dogs is an exceptionally challenging task. Even a human would have great difficulty distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany | Welsh Springer Spaniel
- | -
<img src="images/Brittany_02625.jpg" width="100"> | <img src="images/Welsh_springer_spaniel_08203.jpg" width="200">

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever | American Water Spaniel
- | -
<img src="images/Curly-coated_retriever_03896.jpg" width="200"> | <img src="images/American_water_spaniel_00648.jpg" width="200">

Likewise, labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador | Chocolate Labrador | Black Labrador
- | - | -
<img src="images/Labrador_retriever_06457.jpg" width="150"> | <img src="images/Labrador_retriever_06455.jpg" width="240"> | <img src="images/Labrador_retriever_06449.jpg" width="220">

We also mentioned that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. Remember that in deep learning, practice is far ahead of theory. Experiment with many different architectures, and trust your intuition. And, of course, have fun! Data preprocessing: we rescale the images by dividing every pixel in every image by 255.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# Pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32') / 255
valid_tensors = paths_to_tensor(valid_files).astype('float32') / 255
test_tensors = paths_to_tensor(test_files).astype('float32') / 255
[Exercise] Model architecture. Create a CNN to classify dog breeds. At the end of your code cell block, summarize the layers of your model by executing model.summary(). We have imported some of the Python modules you need, but feel free to import as many modules as you want. If you get stuck, here is a small hint: the model below can attain more than 1% test accuracy within 5 epochs and trains quickly on a CPU. <a id='question4'></a> Question 4: In the code block below, try to build an architecture for a convolutional network with Keras and answer the related questions. You may build your own network; in that case, describe each step of your architecture (which layers you used) and why you built it that way. You may also follow the architecture suggested by the steps in the figure above; in that case, explain why that architecture performs well on this problem. Answer: I chose to build the network following the figure above. Three convolutional layers can detect progressively higher-level features, which is what breed classification requires, and the pooling layers between the convolutional layers effectively reduce the complexity of the data, making training substantially more efficient.
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Flatten, Dense
from keras.models import Sequential

model = Sequential()

### TODO: Define your architecture
model.add(Conv2D(filters=16, kernel_size=2, input_shape=(224, 224, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv2D(filters=32, kernel_size=2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(Conv2D(filters=64, kernel_size=2, activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.2))
model.add(GlobalAveragePooling2D())
model.add(Dense(133, activation='softmax'))

model.summary()

## Compile the model
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
[Exercise] Train the model. <a id='question5'></a> Question 5: Train the model in the code cell below. Use model checkpointing to save the model that attains the lowest validation loss. Optional: you can also apply data augmentation to the training set to improve the model's performance (a sketch of one way to do this follows the training cell below).
from keras.callbacks import ModelCheckpoint

### TODO: Set the number of epochs to train the model
epochs = 5

### Do not modify the code below
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
                               verbose=1, save_best_only=True)

model.fit(train_tensors, train_targets,
          validation_data=(valid_tensors, valid_targets),
          epochs=epochs, batch_size=20, callbacks=[checkpointer], verbose=1)

## Load the model with the best validation loss
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
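A minimal sketch of the optional data augmentation mentioned above, using Keras' ImageDataGenerator (the generator settings are illustrative assumptions, not part of the original notebook):

# Sketch: train with on-the-fly augmentation instead of the plain fit() above
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=10,       # small random rotations
                             width_shift_range=0.1,   # random horizontal shifts
                             height_shift_range=0.1,  # random vertical shifts
                             horizontal_flip=True)    # random mirroring

model.fit_generator(datagen.flow(train_tensors, train_targets, batch_size=20),
                    steps_per_epoch=len(train_tensors) // 20,
                    epochs=epochs,
                    validation_data=(valid_tensors, valid_targets),
                    callbacks=[checkpointer], verbose=1)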
Test the model. Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
# Get the index of the predicted dog breed for each image in the test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0)))
                         for tensor in test_tensors]

# Report the test accuracy
test_accuracy = 100 * np.sum(np.array(dog_breed_predictions) ==
                             np.argmax(test_targets, axis=1)) / len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
<a id='step4'></a> Step 4: Use a CNN to classify dog breeds (using transfer learning). Using transfer learning can help us dramatically reduce training time without sacrificing accuracy. In the following steps, you can try using transfer learning to train your own CNN. Obtain the bottleneck features (feature vectors extracted from the images):
bottleneck_features = np.load('/data/bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
Model architecture. The model uses the pre-trained VGG-16 model as a fixed feature extractor, where the output of the last convolutional layer of VGG-16 is fed directly into our model. We only add a global average pooling layer and a fully connected layer, where the latter uses a softmax activation function and contains one node for each dog breed.
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))

VGG16_model.summary()

## Compile the model
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

## Train the model
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
                               verbose=1, save_best_only=True)

VGG16_model.fit(train_VGG16, train_targets,
                validation_data=(valid_VGG16, valid_targets),
                epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)

## Load the model with the best validation loss
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
Test the model. Now we can test how well this CNN identifies breed within the test dataset of dog images. We print the test accuracy below.
# Get the index of the predicted dog breed for each image in the test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0)))
                     for feature in test_VGG16]

# Report the test accuracy
test_accuracy = 100 * np.sum(np.array(VGG16_predictions) ==
                             np.argmax(test_targets, axis=1)) / len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Predict dog breeds with the model.
from extract_bottleneck_features import *

def VGG16_predict_breed(img_path):
    # Extract the bottleneck features
    bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
    # Obtain the predicted vector
    predicted_vector = VGG16_model.predict(bottleneck_feature)
    # Return the dog breed predicted by the model
    return dog_names[np.argmax(predicted_vector)]
<a id='step5'></a> Step 5: Create a CNN to classify dog breeds (using transfer learning). You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set. In Step 4, we used transfer learning to create a CNN based on VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for several of the networks currently available in Keras:
- VGG-19 bottleneck features
- ResNet-50 bottleneck features
- Inception bottleneck features
- Xception bottleneck features
The files are named Dog{network}Data.npz, where {network} can be one of VGG19, Resnet50, InceptionV3, or Xception. Pick one of the architectures above; the files are saved in the directory /data/bottleneck_features/.

[Exercise] Obtain bottleneck features. In the code block below, extract the bottleneck features corresponding to the train, validation, and test sets by running the following:
bottleneck_features = np.load('/data/bottleneck_features/Dog{network}Data.npz')
train_{network} = bottleneck_features['train']
valid_{network} = bottleneck_features['valid']
test_{network} = bottleneck_features['test']
### TODO: Obtain bottleneck features from another pre-trained CNN
bottleneck_features = np.load('/data/bottleneck_features/DogXceptionData.npz')
train_Xception = bottleneck_features['train']
valid_Xception = bottleneck_features['valid']
test_Xception = bottleneck_features['test']
[Exercise] Model architecture. Create a CNN to classify dog breeds. At the end of your code cell block, summarize the layers of your model by executing: &lt;your model's name&gt;.summary() <a id='question6'></a> Question 6: In the code block below, try to build the final network architecture with Keras, outline the steps you took to get to your final CNN architecture and the purpose of each step, and describe why you chose this architecture for transfer learning. Answer: Xception_model = Sequential() instantiates the model that sits on top of the pre-trained Xception features. Xception_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:])) adds a global average pooling layer to avoid overfitting. Xception_model.add(Dropout(0.2)) adds a Dropout layer to avoid overfitting. Xception_model.add(Dense(133, activation='softmax')) adds a 133-node fully connected layer that outputs the probability of each dog breed through a softmax activation. I chose this architecture because Xception has the following advantages: 1. Compared to traditional convolutional networks such as VGG, its complexity is lower and it requires fewer parameters. 2. It can be made deeper without suffering from vanishing gradients. 3. It is straightforward to optimize, and its classification accuracy benefits from the deeper network. 4. Xception leads in many image recognition benchmarks. Therefore, the Xception network can achieve better predictions than the earlier VGG network. Why does this architecture succeed in this classification task? These four architectures were all refined through extensive experimentation and are known to be very effective. Taking Inception as an example, it is a multi-scale feature extractor: it extracts features through several parallel paths at once and concatenates them, so it learns features at different scales, which works very well. Why were the earlier attempts (in Step 3) less successful? In Step 3, the network was architecturally very shallow and learned very few features; moreover, its training set was tiny, whereas the four networks above were trained on the huge and diverse ImageNet dataset, which this small dataset cannot match.
### TODO: Define your architecture
# Build on the Xception bottleneck features
Xception_model = Sequential()
# Add a global average pooling layer to avoid overfitting
Xception_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:]))
# Add a Dropout layer to avoid overfitting
Xception_model.add(Dropout(0.2))
# Add a 133-node fully connected layer with a softmax activation to output
# the probability of each dog breed
Xception_model.add(Dense(133, activation='softmax'))

Xception_model.summary()

### TODO: Compile the model
Xception_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
[Exercise] Train the model. <a id='question7'></a> Question 7: Train your model in the code cell below. Use model checkpointing to save the model that attains the lowest validation loss. You can also apply data augmentation to the training set to improve the model's performance, though this is not a required step.
### TODO: Train the model
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.Xception1.hdf5',
                               verbose=1, save_best_only=True)

history = Xception_model.fit(train_Xception, train_targets,
                             validation_data=(valid_Xception, valid_targets),
                             epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)

### TODO: Load the model weights with the best validation loss
Xception_model.load_weights('saved_models/weights.best.Xception1.hdf5')
[Exercise] Test the model. <a id='question8'></a> Question 8: Try out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
### TODO: Compute the classification accuracy on the test set
Xception_predictions = [np.argmax(Xception_model.predict(np.expand_dims(feature, axis=0)))
                        for feature in test_Xception]

# Report the test accuracy
test_accuracy = 100 * np.sum(np.array(Xception_predictions) ==
                             np.argmax(test_targets, axis=1)) / len(Xception_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
[Exercise] Predict dog breeds with the model. Implement a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan_hound, etc.) predicted by your model. Similar to the analogous function in Step 5, your function should have three steps: 1. Extract the bottleneck features for your chosen model. 2. Feed the bottleneck features into your model and return the predicted vector; note that taking the argmax of this vector gives the index of the dog breed. 3. Use the dog_names array defined in Step 0 of the notebook to return the corresponding breed name. The functions used to extract the bottleneck features can be found in extract_bottleneck_features.py, and they should have been imported in an earlier code cell. Based on your chosen CNN, the extract_{network} function returns the corresponding image features, where {network} is one of VGG19, Resnet50, InceptionV3, or Xception. <a id='question9'></a> Question 9:
### TODO: Write a function that takes the path to an image as input
### and returns the dog breed predicted by the model
def Xception_predict_breed(img_path):
    # Extract the bottleneck features
    bottleneck_feature = extract_Xception(path_to_tensor(img_path))
    # Obtain the predicted vector
    predicted_vector = Xception_model.predict(bottleneck_feature)
    # Return the dog breed predicted by the model
    return dog_names[np.argmax(predicted_vector)]
<a id='step6'></a> Step 6: Write your algorithm. Implement an algorithm that accepts an image path as input and determines whether the image contains a human, a dog, or neither. Then: if a dog is detected in the image, return the predicted breed; if a human is detected in the image, return the most resembling dog breed; if neither is detected in the image, provide an error message. You are welcome to write your own functions for detecting humans and dogs in images, and you are free to use the face_detector and dog_detector functions completed above. You are required to use your CNN from Step 5 to predict the dog breed. A sample output of the algorithm is provided below, but feel free to design your own model! <a id='question10'></a> Question 10: Complete your code in the code block below.
### TODO: Design your algorithm
### Feel free to use as many code cells as needed
from IPython.core.display import Image, display

def dog_breed_algorithm(img_path):
    if dog_detector(img_path) == 1:
        print("hello, dog!")
        display(Image(img_path, width=200, height=200))
        print("Your predicted breed is ... ")
        return print(Xception_predict_breed(img_path))
    elif face_detector(img_path) == 1:
        print("hello, human!")
        display(Image(img_path, width=200, height=200))
        print("You look like a ... ")
        return print(Xception_predict_breed(img_path))
    else:
        display(Image(img_path, width=200, height=200))
        return print("Could not identify a human or dog in the chosen image. Please try again.")
<a id='step7'></a> Step 7: Test your algorithm. In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think your cat is a dog? Upload method: click Jupyter in the upper left corner to return to the parent menu; you will see an Upload button in the upper right of the Jupyter Notebook. <a id='question11'></a> Question 11: Write code below to test your algorithm on at least 6 real-world images. You may use any photos, but please use at least two human images (with the consent of the people involved) and two dog images. Also answer the following questions: Is the output better than you expected :)? Or worse :(? Propose at least three ways to improve your model. Answer: 1. The results are better than I expected; the algorithm accurately identifies whether an image contains a dog or a human. 2. Possible improvements: (1) apply data augmentation to the training set to improve the model's performance; (2) optimize the network architecture; (3) enlarge the dataset.
## TODO: On your computer, run your algorithm from Step 6 on at least 6 images
## Feel free to use as many code cells as needed
for i in range(1, 7):
    filename = 'images/' + str(i) + '.jpg'
    print('filename = ' + filename)
    dog_breed_algorithm(filename)
    print('\n')
Forward pass: compute scores
scores = net.loss(X)
print('scores: ')
print(scores)
print()
print('correct scores:')
correct_scores = np.asarray([
    [-0.81233741, -1.27654624, -0.70335995],
    [-0.17129677, -1.18803311, -0.47310444],
    [-0.51590475, -1.01354314, -0.8504215 ],
    [-0.15419291, -0.48629638, -0.52901952],
    [-0.00618733, -0.12435261, -0.15226949]])
print(correct_scores)
print()

# The difference should be very small; we expect < 1e-7
print('Difference between your scores and correct scores:')
print(np.sum(np.abs(scores - correct_scores)))
Forward pass: compute loss
loss, _ = net.loss(X, y, reg=0.05)
correct_loss = 1.30378789133

# should be very small; we expect < 1e-12
print('Difference between your loss and correct loss:')
print(np.sum(np.abs(loss - correct_loss)))
Backward pass. Implement the rest of the function; this will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
from utils.gradient_check import eval_numerical_gradient

loss, grads = net.loss(X, y, reg=0.05)

# these should all be less than 1e-8 or so
for param_name in grads:
    f = lambda W: net.loss(X, y, reg=0.05)[0]
    param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
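rel_error is used above but defined in an earlier cell of the notebook; the usual definition in this style of assignment (an assumption here) is:

# Assumed helper: maximum relative error, guarded against division by zero
def rel_error(x, y):
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))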
Train the network Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
net = init_toy_model()
stats = net.train(X, y, X, y,
                  learning_rate=1e-1, reg=5e-6,
                  num_iters=100, verbose=False)

print('Final training loss: ', stats['loss_history'][-1])

# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Load the data Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
# Load the raw CIFAR-10 data
cifar10_dir = 'datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

# Split the data
num_training = 49000
num_validation = 1000
num_test = 1000

mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]

# Preprocessing: reshape the image data into rows
X_train = X_train.reshape(X_train.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)

# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image

print(X_train.shape, X_val.shape, X_test.shape)
Train a network To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)

# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
                  learning_rate=1e-4, learning_rate_decay=0.95,
                  reg=0.25, num_iters=1000, batch_size=200, verbose=True)

# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print('Validation accuracy: ', val_acc)
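To make the schedule concrete, the exponential decay amounts to the following (a sketch; learning_rate_decay=0.95 as in the call above):

# Sketch: after each epoch the learning rate is scaled by the decay factor,
# so at epoch t it equals learning_rate * decay**t
learning_rate, decay = 1e-4, 0.95
for epoch in range(5):
    print('epoch %d: lr = %.3e' % (epoch, learning_rate))
    learning_rate *= decay  # what net.train does once per epoch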
Debug the training With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good. One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization. Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')

plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.legend()
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()

from utils.vis_utils import visualize_grid

# Visualize the weights of the network
def show_net_weights(net):
    W1 = net.params['W1']
    W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
    plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
    plt.gca().axis('off')
    plt.show()

show_net_weights(net)
Tune your hyperparameters. What's wrong? Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy. Tuning: tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value. Approximate results: you should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set. Experiment: your goal in this exercise is to get as good a result on CIFAR-10 as you can, with a fully-connected Neural Network. For every 1% above 52% on the test set we will award you one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, adding features to the solver, etc.).
input_size = 32 * 32 * 3
num_classes = 10

hidden_layer_size = [50]
learning_rates = [3e-4, 9e-4, 1e-3, 3e-3]
regularization_strengths = [7e-1, 8e-1, 9e-1, 1]

results = {}
best_model = None
best_val = -1

for hidden_size in hidden_layer_size:
    for lr in learning_rates:
        for reg in regularization_strengths:
            model = TwoLayerNet(input_size, hidden_size, num_classes, std=1e-3)
            stats = model.train(X_train, y_train, X_val, y_val,
                                learning_rate=lr, learning_rate_decay=0.95,
                                reg=reg, num_iters=5000, batch_size=200, verbose=True)
            train_acc = (model.predict(X_train) == y_train).mean()
            val_acc = (model.predict(X_val) == y_val).mean()
            print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f'
                  % (hidden_size, lr, reg, train_acc, val_acc))
            results[(hidden_size, lr, reg)] = (train_acc, val_acc)
            if val_acc > best_val:
                best_val = val_acc
                best_model = model

print()
print()
print('best val_acc: %f' % best_val)

# print all results, grouped by learning rate
old_lr = -1
for hidden_size, lr, reg in sorted(results):
    if old_lr != lr:
        old_lr = lr
        print()
    train_acc, val_acc = results[(hidden_size, lr, reg)]
    print('hidden_layer_size: %d, lr: %e, reg: %e, train_acc: %f, val_acc: %f'
          % (hidden_size, lr, reg, train_acc, val_acc))

# visualize the weights of the best network
show_net_weights(best_model)
test/two_layer_net.ipynb
zklgame/CatEyeNets
mit
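The grid search above walks a hand-picked list of values; random search over log-uniform ranges is a common alternative that often finds good learning rates and regularization strengths with fewer trials. The sketch below illustrates the idea, reusing the TwoLayerNet class and the data arrays already defined in this notebook; the trial budget and sampling ranges are arbitrary choices for illustration, not values from the original assignment.

import numpy as np

# Random hyperparameter search: sample learning rate and regularization
# strength log-uniformly instead of walking a fixed grid.
num_trials = 20  # arbitrary search budget

for _ in range(num_trials):
    lr = 10 ** np.random.uniform(-4.0, -2.5)   # learning rate in [1e-4, ~3e-3]
    reg = 10 ** np.random.uniform(-1.0, 0.5)   # reg strength in [0.1, ~3]

    model = TwoLayerNet(input_size, 50, num_classes, std=1e-3)
    model.train(X_train, y_train, X_val, y_val,
                learning_rate=lr, learning_rate_decay=0.95,
                reg=reg, num_iters=1000, batch_size=200, verbose=False)

    val_acc = (model.predict(X_val) == y_val).mean()
    print('lr: %e, reg: %e, val_acc: %f' % (lr, reg, val_acc))
    if val_acc > best_val:
        best_val, best_model = val_acc, model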
Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%. We will give you an extra bonus point for every 1% of accuracy above 52%.
test_acc = (best_model.predict(X_test) == y_test).mean() print('Test accuracy: ', test_acc)
test/two_layer_net.ipynb
zklgame/CatEyeNets
mit
model_mnist_cnn.py
#! /usr/bin/env python

import tensorflow as tf


class mnistCNN(object):
    """
    A simple fully connected network for MNIST classification.
    """
    def __init__(self, dense=500):

        # Placeholders for input and output
        self.input_x = tf.placeholder(tf.float32, [None, 784], name="input_x")
        self.input_y = tf.placeholder(tf.float32, [None, 10], name="input_y")

        # Hidden layer with ReLU activation
        self.dense_1 = self.dense_layer(self.input_x, input_dim=784, output_dim=dense)

        # Output layer: raw logits with no ReLU, since the softmax cross-entropy
        # op expects unbounded logits (a ReLU here would clip negative scores)
        self.dense_2 = self.dense_layer(self.dense_1, input_dim=dense, output_dim=10,
                                        activation=None)
        self.predictions = tf.argmax(self.dense_2, 1, name="predictions")

        # Loss function
        self.loss = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=self.input_y,
                                                    logits=self.dense_2))

        # Accuracy
        correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
        self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")

    def dense_layer(self, x, input_dim=10, output_dim=10, activation=tf.nn.relu, name='dense'):
        '''
        Dense layer function
        Inputs:
          x: Input tensor
          input_dim: Dimension of the input tensor.
          output_dim: Dimension of the output tensor.
          activation: Nonlinearity to apply, or None for raw logits.
          name: Layer name
        '''
        W = tf.Variable(tf.truncated_normal([input_dim, output_dim], stddev=0.1), name='W_'+name)
        b = tf.Variable(tf.constant(0.1, shape=[output_dim]), name='b_'+name)
        preactivation = tf.matmul(x, W) + b
        return activation(preactivation) if activation is not None else preactivation
tensorflow_old/02-template_class/template_class.ipynb
sueiras/training
gpl-3.0
train.py
#! /usr/bin/env python

from __future__ import print_function

import tensorflow as tf

from data_utils import get_data, batch_generator
from model_mnist_cnn import mnistCNN

# Parameters
# ==================================================

# Data loading params
tf.flags.DEFINE_string("data_directory", '/tmp/MNIST_data', "Data dir (default /tmp/MNIST_data)")

# Model Hyperparameters
tf.flags.DEFINE_integer("dense_size", 500, "dense_size (default 500)")

# Training parameters
tf.flags.DEFINE_float("learning_rate", 0.001, "learning rate (default: 0.001)")
tf.flags.DEFINE_integer("batch_size", 256, "Batch Size (default: 256)")
tf.flags.DEFINE_integer("num_epochs", 20, "Number of training epochs (default: 20)")

# Misc Parameters
tf.flags.DEFINE_boolean("log_device_placement", False, "Log placement of ops on devices")

FLAGS = tf.flags.FLAGS
FLAGS._parse_flags()
print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
    print("{}={}".format(attr.upper(), value))
print("")

# Data Preparation
# ==================================================

# Access to the data
mnist_data = get_data(data_dir=FLAGS.data_directory)

# Training
# ==================================================

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333, allow_growth=True)

with tf.Graph().as_default():
    session_conf = tf.ConfigProto(
        gpu_options=gpu_options,
        log_device_placement=FLAGS.log_device_placement)
    sess = tf.Session(config=session_conf)
    with sess.as_default():

        # Create model
        cnn = mnistCNN(dense=FLAGS.dense_size)

        # Trainer
        train_op = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(cnn.loss)

        # Saver
        saver = tf.train.Saver(max_to_keep=1)

        # Initialize all variables
        sess.run(tf.global_variables_initializer())

        # Train process
        for epoch in range(FLAGS.num_epochs):
            for n_batch in range(int(55000/FLAGS.batch_size)):
                batch = batch_generator(mnist_data, batch_size=FLAGS.batch_size, type='train')
                _, ce = sess.run([train_op, cnn.loss],
                                 feed_dict={cnn.input_x: batch[0], cnn.input_y: batch[1]})
            print(epoch, ce)

        model_file = saver.save(sess, '/tmp/mnist_model')
        print('Model saved in', model_file)
tensorflow_old/02-template_class/template_class.ipynb
sueiras/training
gpl-3.0
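The data_utils module imported by train.py is not shown in this document. Below is a minimal sketch of what get_data and batch_generator could look like, assuming the old TF 1.x MNIST helper; the implementation is inferred from how train.py calls these functions, not taken from the original repository.

#! /usr/bin/env python
# data_utils.py (sketch)

from tensorflow.examples.tutorials.mnist import input_data


def get_data(data_dir='/tmp/MNIST_data'):
    # Downloads MNIST on first use and returns train/validation/test splits
    return input_data.read_data_sets(data_dir, one_hot=True)


def batch_generator(mnist_data, batch_size=256, type='train'):
    # Returns one (images, labels) batch from the requested split
    split = getattr(mnist_data, type)  # 'train', 'validation' or 'test'
    return split.next_batch(batch_size)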
Once you have seen the mean image of your dataset, how does it relate to your own expectations of the dataset? Did you expect something different? Was there something more "regular" or "predictable" about your dataset that the mean image did or did not reveal? If your mean image looks a lot like something recognizable, it's a good sign that there is a lot of predictability in your dataset. If your mean image looks like nothing at all, a gray blob where not much seems to stand out, then it's pretty likely that there isn't very much in common between your images. Neither is a bad scenario. Though if there is some predictability in your mean image, e.g. something recognizable, it is more likely that there are representations worth exploring with deeper networks capable of representing them. However, we're only using 100 images so it's a very small dataset to begin with.
<a name="part-three---compute-the-standard-deviation"></a>
Part Three - Compute the Standard Deviation
<a name="instructions-2"></a>
Instructions
Now use tensorflow to calculate the standard deviation and upload the standard deviation image averaged across color channels as a "jet" heatmap of the 100 images. This will be a little more involved as there is no operation in tensorflow to do this for you. However, you can do this by calculating the mean image of your dataset as a 4-D array. To do this, you could write e.g. mean_img_4d = tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True) to give you a 1 x H x W x C dimension array calculated on the N x H x W x C imgs variable. The reduction_indices parameter is saying to calculate the mean over the 0th dimension, meaning for every possible H, W, C, or for every pixel, you will have a mean composed over the N possible values it could have had, or what that pixel was for every possible image. This way, you can write imgs - mean_img_4d to give you a N x H x W x C dimension variable, with mean_img_4d subtracted from every image in your imgs array. If you calculate the square root of the mean of the squared differences of this resulting operation, you have your standard deviation!
In summary, you'll need to write something like: subtraction = imgs - tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True), then reduce this operation using tf.sqrt(tf.reduce_mean(subtraction * subtraction, reduction_indices=0)) to get your standard deviation. Then include this image in your zip file as <b>std.png</b>
<a name="code-2"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# Create a tensorflow operation to give you the standard deviation

# First compute the difference of every image with a
# 4 dimensional mean image shaped 1 x H x W x C
mean_img_4d = tf.reduce_mean(imgs, reduction_indices=0, keep_dims=True)

subtraction = imgs - mean_img_4d

# Now compute the standard deviation by calculating the
# square root of the mean of squared differences
std_img_op = tf.sqrt(tf.reduce_mean(subtraction * subtraction, reduction_indices=0))

# Now calculate the standard deviation using your session
std_img = sess.run(std_img_op)

# Then plot the resulting standard deviation image:
# Make sure the std image is the right size!
assert(std_img.shape == (100, 100) or std_img.shape == (100, 100, 3))
plt.figure(figsize=(10, 10))
std_img_show = std_img / np.max(std_img)
plt.imshow(std_img_show)
plt.imsave(arr=std_img_show, fname='std.png')
session-1/.ipynb_checkpoints/session-1-checkpoint.ipynb
dariox2/CADL
apache-2.0
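As a quick sanity check on the TensorFlow result, the same per-pixel standard deviation can be computed directly in NumPy (np.std averages the squared deviations before taking the square root, which is why reduce_mean is used above); the two arrays should agree up to floating-point error:

import numpy as np

# Per-pixel standard deviation over the batch dimension, computed in NumPy
std_img_np = np.std(imgs, axis=0)

# Should match the TensorFlow computation up to float rounding
print(np.allclose(std_img, std_img_np, atol=1e-4))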
What we've just done is build a "hand-crafted" feature detector: the Gabor kernel. This kernel is built to respond to a particular orientation, horizontal edges, and a particular scale. It also responds equally to the R, G, and B color channels, as that is how we have told the convolve operation to work: use the same kernel for every input color channel. When we work with deep networks, we'll see how we can learn the convolution kernels for every color channel, and learn many more of them, on the order of 100s per color channel. That is really where the power of deep networks will start to become obvious. For now, we've seen just how difficult it is to get at any higher order features of the dataset. We've really only picked out some edges!
<a name="part-six---sort-the-dataset"></a>
Part Six - Sort the Dataset
<a name="instructions-5"></a>
Instructions
Using tensorflow, we'll attempt to organize your dataset. We'll try sorting based on a summary value of each convolved image's output. To do this, we could calculate either the sum value (tf.reduce_sum) or the mean value (tf.reduce_mean) of each image in your dataset and then use those values, e.g. stored inside a variable values, to sort your images using something like tf.nn.top_k and sorted_imgs = np.array([imgs[idx_i] for idx_i in idxs]) prior to creating the montage image, m = montage(sorted_imgs, "sorted.png"), and then include this image in your zip file as <b>sorted.png</b>
<a name="code-5"></a>
Code
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# Create a set of operations using tensorflow which could
# provide you for instance the sum or mean value of every
# image in your dataset:

# First flatten our convolved images so instead of many 3d images,
# we have many 1d vectors.
# This should convert our 4d representation of N x H x W x C to a
# 2d representation of N x (H*W*C).
# The convolved output here is single-channel, so 100 x 100 x 1 = 10000.
flattened = tf.reshape(convolved, [100, 10000])
assert(flattened.get_shape().as_list() == [100, 10000])

# Now calculate some statistics about each of our images
values = tf.reduce_sum(flattened, reduction_indices=1)

# Then create another operation which sorts those values
# and then calculate the result:
idxs_op = tf.nn.top_k(values, k=100)[1]
idxs = sess.run(idxs_op)

# Then finally use the sorted indices to sort your images:
sorted_imgs = np.array([imgs[idx_i] for idx_i in idxs])

# Then plot the resulting sorted dataset montage:
# Make sure we have a 100 x 100 x 100 x 3 dimension array
assert(sorted_imgs.shape == (100, 100, 100, 3))
plt.figure(figsize=(10, 10))
plt.imshow(utils.montage(sorted_imgs, 'sorted.png'))
session-1/.ipynb_checkpoints/session-1-checkpoint.ipynb
dariox2/CADL
apache-2.0
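One detail worth knowing about tf.nn.top_k: it returns indices in descending order of value, so the montage above runs from the strongest filter response to the weakest. To sort in ascending order instead, negate the values first, as in this small sketch:

# tf.nn.top_k sorts descending; negating the values flips the order
ascending_idxs = sess.run(tf.nn.top_k(-values, k=100)[1])
sorted_imgs_asc = np.array([imgs[idx_i] for idx_i in ascending_idxs])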
What does your sorting reveal? Could you imagine the same sorting over many more images revealing the thing your dataset sought to represent? It is likely that the representations you wanted to find are hidden within "higher layers", i.e., "deeper features" of the image, and that these "low level" features, essentially edges, are not very good at describing the really interesting aspects of your dataset. In later sessions, we'll see how we can combine the outputs of many more convolution kernels that have been assembled in a way that accentuates something very particular about each image, and build a sorting that is much more intelligent than this one!
<a name="assignment-submission"></a>
Assignment Submission
Now that you've completed all 6 parts, we'll create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
<pre>
session-1/
session-1.ipynb
dataset.png
mean.png
std.png
normalized.png
kernel.png
convolved.png
sorted.png
libs/
utils.py
</pre>
You'll then submit this zip file for your first assignment on Kadenze for "Assignment 1: Datasets/Computing with Tensorflow"!
If you have any questions, remember to reach out on the forums and connect with your peers or with me.
<b>To get assessed, you'll need to be a premium student, which is free for a month!</b> If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Then remember to complete the remaining parts of Assignment 1 on Kadenze!:
* Comment on 1 student's open-ended arrangement (Part 6) in the course gallery titled "Creating a Dataset/ Computing with Tensorflow". Think about what images they've used in their dataset and how the arrangement reflects what could be represented by that data.
* Finally make a forum post in the forum for this assignment "Creating a Dataset/ Computing with Tensorflow".
- Including a link to an artist making use of machine learning to organize data or finding representations within large datasets
- Tell a little about their work (min 20 words).
- Comment on at least 2 other students' forum posts (min 20 words)
Make sure your notebook is named "session-1" or else replace it with the correct name in the list of files below:
utils.build_submission('session-1.zip', ('dataset.png', 'mean.png', 'std.png', 'normalized.png', 'kernel.png', 'convolved.png', 'sorted.png', 'session-1.ipynb'))
session-1/.ipynb_checkpoints/session-1-checkpoint.ipynb
dariox2/CADL
apache-2.0
Placement of ticks and custom tick labels We can explicitly determine where we want the axis ticks with set_xticks and set_yticks, which both take a list of values for where on the axis the ticks are to be placed. We can also use the set_xticklabels and set_yticklabels methods to provide a list of custom text labels for each tick location:
fig, ax = plt.subplots(figsize=(10, 4)) ax.plot(x, x**2, x, x**3, lw=2) ax.set_xticks([1, 2, 3, 4, 5]) ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18) yticks = [0, 50, 100, 150] ax.set_yticks(yticks) ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details.
Scientific notation
With large numbers on axes, it is often better to use scientific notation:
fig, ax = plt.subplots(1, 1) ax.plot(x, x**2, x, np.exp(x)) ax.set_title("scientific notation") ax.set_yticks([0, 50, 100, 150]) from matplotlib import ticker formatter = ticker.ScalarFormatter(useMathText=True) formatter.set_scientific(True) formatter.set_powerlimits((-1,1)) ax.yaxis.set_major_formatter(formatter)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
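As one concrete example of the automatic tick-placement policies mentioned above, the ticker module also provides locator classes. The sketch below uses MultipleLocator to place major ticks every 1.0 units and minor ticks every 0.25; the spacings are arbitrary values chosen for illustration:

from matplotlib import ticker

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)

# major ticks every 1.0, minor ticks every 0.25
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.0))
ax.xaxis.set_minor_locator(ticker.MultipleLocator(0.25))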
Axis number and axis label spacing
# distance between x and y axis and the numbers on the axes matplotlib.rcParams['xtick.major.pad'] = 5 matplotlib.rcParams['ytick.major.pad'] = 5 fig, ax = plt.subplots(1, 1) ax.plot(x, x**2, x, np.exp(x)) ax.set_yticks([0, 50, 100, 150]) ax.set_title("label and axis spacing") # padding between axis label and axis numbers ax.xaxis.labelpad = 5 ax.yaxis.labelpad = 5 ax.set_xlabel("x") ax.set_ylabel("y"); # restore defaults matplotlib.rcParams['xtick.major.pad'] = 3 matplotlib.rcParams['ytick.major.pad'] = 3
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Axis position adjustments Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using subplots_adjust:
fig, ax = plt.subplots(1, 1) ax.plot(x, x**2, x, np.exp(x)) ax.set_yticks([0, 50, 100, 150]) ax.set_title("title") ax.set_xlabel("x") ax.set_ylabel("y") fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Axis grid With the grid method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the plot function:
fig, axes = plt.subplots(1, 2, figsize=(10,3)) # default grid appearance axes[0].plot(x, x**2, x, x**3, lw=2) axes[0].grid(True) # custom grid appearance axes[1].plot(x, x**2, x, x**3, lw=2) axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Axis spines We can also change the properties of axis spines:
fig, ax = plt.subplots(figsize=(6,2)) ax.spines['bottom'].set_color('blue') ax.spines['top'].set_color('blue') ax.spines['left'].set_color('red') ax.spines['left'].set_linewidth(2) # turn off axis spine to the right ax.spines['right'].set_color("none") ax.yaxis.tick_left() # only ticks on the left side
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Twin axes Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the twinx and twiny functions:
fig, ax1 = plt.subplots() ax1.plot(x, x**2, lw=2, color="blue") ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue") for label in ax1.get_yticklabels(): label.set_color("blue") ax2 = ax1.twinx() ax2.plot(x, x**3, lw=2, color="red") ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red") for label in ax2.get_yticklabels(): label.set_color("red")
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Axes where x and y are zero
fig, ax = plt.subplots() ax.spines['right'].set_color('none') ax.spines['top'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0 ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0 xx = np.linspace(-0.75, 1., 100) ax.plot(xx, xx**3);
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Other 2D plot styles
In addition to the regular plot method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
n = np.array([0,1,2,3,4,5]) fig, axes = plt.subplots(1, 4, figsize=(12,3)) axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx))) axes[0].set_title("scatter") axes[1].step(n, n**2, lw=2) axes[1].set_title("step") axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5) axes[2].set_title("bar") axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5); axes[3].set_title("fill_between");
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Text annotation Annotating text in matplotlib figures can be done using the text function. It supports LaTeX formatting just like axis label texts and titles:
fig, ax = plt.subplots() ax.plot(xx, xx**2, xx, xx**3) ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue") ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Figures with multiple subplots and insets Axes can be added to a matplotlib Figure canvas manually using fig.add_axes or using a sub-figure layout manager such as subplots, subplot2grid, or gridspec: subplots
fig, ax = plt.subplots(2, 3) fig.tight_layout()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
subplot2grid
fig = plt.figure() ax1 = plt.subplot2grid((3,3), (0,0), colspan=3) ax2 = plt.subplot2grid((3,3), (1,0), colspan=2) ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2) ax4 = plt.subplot2grid((3,3), (2,0)) ax5 = plt.subplot2grid((3,3), (2,1)) fig.tight_layout()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
gridspec
import matplotlib.gridspec as gridspec fig = plt.figure() gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1]) for g in gs: ax = fig.add_subplot(g) fig.tight_layout()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
add_axes Manually adding axes with add_axes is useful for adding insets to figures:
fig, ax = plt.subplots() ax.plot(xx, xx**2, xx, xx**3) fig.tight_layout() # inset inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height inset_ax.plot(xx, xx**2, xx, xx**3) inset_ax.set_title('zoom near origin') # set axis range inset_ax.set_xlim(-.2, .2) inset_ax.set_ylim(-.005, .01) # set axis tick locations inset_ax.set_yticks([0, 0.005, 0.01]) inset_ax.set_xticks([-0.1,0,.1]);
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Colormap and contour figures Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
alpha = 0.7 phi_ext = 2 * np.pi * 0.5 def flux_qubit_potential(phi_m, phi_p): return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p) phi_m = np.linspace(0, 2*np.pi, 100) phi_p = np.linspace(0, 2*np.pi, 100) X,Y = np.meshgrid(phi_p, phi_m) Z = flux_qubit_potential(X, Y).T
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
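Since defining custom colormaps was mentioned above as relatively straightforward, here is a minimal sketch using LinearSegmentedColormap.from_list; the three anchor colors are arbitrary choices:

from matplotlib.colors import LinearSegmentedColormap

# A simple three-color custom colormap, interpolated linearly
my_cmap = LinearSegmentedColormap.from_list('my_cmap', ['navy', 'white', 'darkred'])

fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=my_cmap)
cb = fig.colorbar(p, ax=ax)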
pcolor
fig, ax = plt.subplots() p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max()) cb = fig.colorbar(p, ax=ax)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
imshow
fig, ax = plt.subplots() im = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1]) im.set_interpolation('bilinear') cb = fig.colorbar(im, ax=ax)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
contour
fig, ax = plt.subplots() cnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
3D figures To use 3D graphics in matplotlib, we first need to create an instance of the Axes3D class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a projection='3d' keyword argument to the add_axes or add_subplot methods.
from mpl_toolkits.mplot3d.axes3d import Axes3D
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Surface plots
fig = plt.figure(figsize=(14,6)) # `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot ax = fig.add_subplot(1, 2, 1, projection='3d') p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0) # surface_plot with color grading and color bar ax = fig.add_subplot(1, 2, 2, projection='3d') p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False) cb = fig.colorbar(p, shrink=0.5)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Wire-frame plot
fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(1, 1, 1, projection='3d') p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
Contour plots with projections
fig = plt.figure(figsize=(8,6)) ax = fig.add_subplot(1,1,1, projection='3d') ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25) cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm) cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm) cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm) ax.set_xlim3d(-np.pi, 2*np.pi); ax.set_ylim3d(0, 3*np.pi); ax.set_zlim3d(-np.pi, 2*np.pi);
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04b-Pandas Visualization/03 - Advanced Matplotlib Concepts.ipynb
arcyfelix/Courses
apache-2.0
The SDK uses the Python logging module to tell you what it's doing; if desired you can control what sort of output you see by uncommenting one of the lines below:
import logging logger = logging.getLogger("bwapi") #(Default) All logging messages enabled #logger.setLevel(logging.DEBUG) #Does not report URL's of API requests, but all other messages enabled #logger.setLevel(logging.INFO) #Report only errors and warnings #logger.setLevel(logging.WARN) #Report only errors #logger.setLevel(logging.ERROR) #Disable logging #logger.setLevel(logging.CRITICAL)
DEMO.ipynb
anthonybu/api_sdk
mit
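If you would rather keep a record of what the SDK reports, the same logger can also be routed to a file with the standard logging module; this sketch uses only standard-library calls, and the filename is an arbitrary choice:

# Also write bwapi log messages to a file
handler = logging.FileHandler('bwapi.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)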
Project
When you use the API for the first time you have to authenticate with Brandwatch. This will get you an access token. The access token is stored in a credentials file (tokens.txt in this example). Once you've authenticated, your access token will be read from that file so you won't need to enter your password again.
You can authenticate from the command line using the provided console script bwapi-authenticate:
$ bwapi-authenticate
Please enter your Brandwatch credentials below
Username: example@example
Password:
Authenticating user: example@example
Writing access token for user: example@example
Success! Access token: 00000000-0000-0000-0000-000000000000
Alternatively, you can authenticate directly:
BWUser(username="[email protected]", password="YOUR_PASSWORD", token_path="tokens.txt")
DEMO.ipynb
anthonybu/api_sdk
mit
Now you have authenticated you can load your project:
# Fill in your own Brandwatch account (username) and project names below
YOUR_ACCOUNT = "your_account"
YOUR_PROJECT = "your_project"

project = BWProject(username=YOUR_ACCOUNT, project=YOUR_PROJECT)
DEMO.ipynb
anthonybu/api_sdk
mit
Before we really begin, please note that you can get documentation for any class or function by viewing the help documentation
help(BWProject)
DEMO.ipynb
anthonybu/api_sdk
mit
Queries Now we create some objects which can manipulate queries and groups in our project:
queries = BWQueries(project)
DEMO.ipynb
anthonybu/api_sdk
mit
Let's check what queries already exist in the account
queries.names
DEMO.ipynb
anthonybu/api_sdk
mit
We can also upload queries directly via the API by handing the "name", "searchTerms" and "backfillDate" to the upload funcion. If you don't pass a backfillDate, then the query will not backfill. The BWQueries class inserts default values for the "languages", "type", "industry", and "samplePercent" parameters, but we can override the defaults by including them as keyword arguments if we want. Upload accepts two boolean keyword arguments - "create_only" and "modify_only" (both defaulting to False) - which specifies what API verbs the function is allowed to use; for instance, if we set "create_only" to True then the function will post a new query if it can and otherwise it will do nothing. Note: this is true of all upload functions in this package.
queries.upload(name = "Brandwatch Engagement", includedTerms = "at_mentions:Brandwatch", backfill_date = "2015-09-01")
DEMO.ipynb
anthonybu/api_sdk
mit