As you can see, CSR is faster, and for more unstructured sparsity patterns the gain will be even larger. On the other hand, the CSR format makes it difficult to add new elements. How do we solve linear systems? There are two families of methods: direct and iterative solvers. Direct solvers use sparse Gaussian elimination, i.e. they eliminate variables while trying to keep the factors as sparse as possible. Note, however, that the inverse of a sparse matrix is typically not sparse (it corresponds to some integral operator, so it has block low-rank structure; details will come later in this course):
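Because inserting elements into an existing CSR matrix is expensive, a common pattern is to assemble the matrix in a format with cheap insertion (LIL or COO) and convert to CSR only once assembly is finished. A minimal sketch of this pattern (assuming only numpy and scipy; not part of the original lecture code):

```python
import numpy as np
import scipy.sparse as sp

n = 5
# Assemble in LIL format, which supports cheap element-by-element insertion
A = sp.lil_matrix((n, n))
for i in range(n):
    A[i, i] = -2.0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < n - 1:
        A[i, i + 1] = 1.0

# Convert to CSR once assembly is done; CSR is efficient for matvecs and solves
A_csr = A.tocsr()
print(A_csr.nnz, A_csr.format)
```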
import numpy as np
import scipy as sp
import scipy.sparse
import matplotlib.pyplot as plt

N = n = 100
ex = np.ones(n)
a = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
a = a.todense()
b = np.array(np.linalg.inv(a))
fig, axes = plt.subplots(1, 2)
axes[0].spy(a)
axes[1].spy(b, markersize=2)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
It looks bad: the inverse is completely dense.
N = n = 5
ex = np.ones(n)
A = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
A = A.todense()
B = np.array(np.linalg.inv(A))
print(B)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
But occasionally L and U factors can be sparse.
import scipy.linalg

p, l, u = scipy.linalg.lu(a)
fig, axes = plt.subplots(1, 2)
axes[0].spy(l)
axes[1].spy(u)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
In 1D the factors L and U are bidiagonal. In 2D the factors L and U look less optimistic, but are still acceptable.
from scipy.sparse import csc_matrix
import scipy.sparse.linalg

n = 3
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
T = scipy.sparse.linalg.splu(A)
fig, axes = plt.subplots(1, 2)
axes[0].spy(A, markersize=1)
axes[1].spy(T.L, marker='.', markersize=0.4)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Sparse matrices and graph ordering. The number of non-zeros in the LU decomposition has a deep connection to graph theory: a sparse matrix defines a graph in which there is an edge between vertices $i$ and $j$ if $a_{ij} \ne 0$.
import networkx as nx

n = 13
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
nx.draw(G, pos=nx.spring_layout(G), node_size=10)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Strategies for elimination. The reordering that minimizes the fill-in is important, and we can use graph theory to find one:
* Minimum degree ordering: order by the degree of the vertex.
* Cuthill–McKee algorithm (and reverse Cuthill–McKee): order for a small bandwidth.
* Nested dissection: split the graph into two parts with a minimal number of vertices on the separator.
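To see why the ordering matters, here is a minimal sketch (not part of the original lecture code; it assumes A is the sparse 2D Laplacian in CSC format assembled in the cell above) comparing the fill-in of the sparse LU factors under the natural ordering and under SuperLU's default fill-reducing column ordering:

```python
import scipy.sparse.linalg as spla

# Natural ordering: no column permutation
lu_natural = spla.splu(A, permc_spec='NATURAL')
# COLAMD ordering, chosen to reduce fill-in (SuperLU's default)
lu_colamd = spla.splu(A, permc_spec='COLAMD')

print('nnz(A)           :', A.nnz)
print('fill-in, natural :', lu_natural.L.nnz + lu_natural.U.nnz)
print('fill-in, COLAMD  :', lu_colamd.L.nnz + lu_colamd.U.nnz)
```

The reverse Cuthill–McKee reordering is demonstrated next.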
import networkx as nx
from networkx.utils import reverse_cuthill_mckee_ordering

n = 13
ex = np.ones(n)
lp1 = sp.sparse.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n, 'csr')
e = sp.sparse.eye(n)
A = sp.sparse.kron(lp1, e) + sp.sparse.kron(e, lp1)
A = csc_matrix(A)
G = nx.Graph(A)
rcm = list(reverse_cuthill_mckee_ordering(G))
A1 = A[rcm, :][:, rcm]
plt.spy(A1, marker='.', markersize=3)
#p, L, U = scipy.linalg.lu(A1.todense())
#plt.spy(L, marker='.', markersize=0.8)
#nx.draw(G, pos=nx.spring_layout(G), node_size=10)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Florida sparse matrix collection. The Florida sparse matrix collection contains all sorts of matrices from different applications, and it is also a convenient source of test matrices. Let's have a look.
from IPython.display import HTML
HTML('<iframe src=http://yifanhu.net/GALLERY/GRAPHS/search.html width=700 height=450></iframe>')
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Testing a matrix. Let us check one of the sparse matrices from the collection (and its LU factors).
fname = 'crystm02.mat'
!wget http://www.cise.ufl.edu/research/sparse/mat/Boeing/$fname

from scipy.io import loadmat
import scipy.sparse
import scipy.sparse.linalg

q = loadmat(fname)
mat = q['Problem']['A'][0, 0]
T = scipy.sparse.linalg.splu(mat)  # compute its LU

%matplotlib inline
import matplotlib.pyplot as plt
plt.spy(T.L, markersize=0.1)
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Iterative solvers. The main disadvantage of factorization methods is their computational complexity. A more efficient solution of linear systems can often be obtained by iterative methods. This requires a high convergence rate of the iterative process and a low arithmetic cost per iteration. Modern iterative methods are mainly based on the idea of iterating over a Krylov subspace:

$$ \mathcal{K}_i = \mathrm{span}\{b,~Ab,~A^2b,~\ldots,~A^{i-1}b\}, \quad i = 1, 2, \ldots $$

$$ x_i = \arg\min\{ \|b - Ax\|_{\text{some norm}} : x \in \mathcal{K}_i \} $$

In fact, to apply an iterative solver to a system with matrix $A$, all you need to know is
* how to multiply the matrix by a vector,
* how to apply the preconditioner.

Preconditioners. If $A$ is ill-conditioned, iterative methods need many iterations. You can reduce the number of iterations if you find a matrix $B$ (called a preconditioner) such that $AB$ or $BA$ has a smaller condition number:

$$ Ax = y \Rightarrow BAx = By, $$
$$ ABz = y, \quad x = Bz. $$

To be a good preconditioner, the matrix $B$ must be somehow close to the inverse of $A$:

$$ B \approx A^{-1}. $$

Note that $B = A^{-1}$ is a perfect preconditioner and gives convergence in one iteration, but building it requires as many operations as the direct solution. Building a preconditioner is therefore a compromise between the time spent constructing it and the time spent in iterations.

There are two basic strategies for building a preconditioner:
* Use information about the elements of the matrix $A$.
* Use additional information about the problem.

The first strategy uses only the elements of $A$ (for sparse matrices, only the non-zero elements). A good example is incomplete matrix factorization: the main idea is to avoid full factorization by dropping some elements during the factorization. The drop rules define the type of incomplete factorization and hence the type of preconditioner. Standard ILU preconditioners are ILU(0), ILU(k), ILUT and ILU2.

The second strategy uses additional information about where the matrix came from, for example multigrid and domain decomposition methods (see the next lecture for multigrid).
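As a minimal illustration of these ideas (a sketch, not part of the original lecture code), SciPy's Krylov solvers only need a matrix-vector product and, optionally, a preconditioner. Here an incomplete LU factorization (spilu) is wrapped as a LinearOperator and passed to GMRES for a 2D Laplacian built the same way as above:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble a 2D Laplacian (same construction as in the cells above)
n = 32
ex = np.ones(n)
lp1 = sp.spdiags(np.vstack((ex, -2*ex, ex)), [-1, 0, 1], n, n)
e = sp.eye(n)
A = (sp.kron(lp1, e) + sp.kron(e, lp1)).tocsc()
b = np.ones(A.shape[0])

# ILU-based preconditioner: B approximates A^{-1} via an incomplete factorization
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)

x_noprec, info_noprec = spla.gmres(A, b)       # no preconditioner
x_prec, info_prec = spla.gmres(A, b, M=M)      # with the ILU preconditioner
print(info_noprec, info_prec, np.linalg.norm(A.dot(x_prec) - b))
```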
from IPython.core.display import HTML

def css_styling():
    styles = open("./styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
lecture-7.ipynb
oseledets/fastpde
cc0-1.0
Test Frame Nodes
%%Table nodes NODEID,X,Y,Z A,0,0,5000 B,0,4000,5000 C,8000,4000,5000 D,8000,0,5000 @sl.extend(Frame2D) class Frame2D: COLUMNS_nodes = ('NODEID','X','Y') def install_nodes(self): node_table = self.get_table('nodes') for ix,r in node_table.data.iterrows(): if r.NODEID in self.nodes: raise Exception('Multiply defined node: {}'.format(r.NODEID)) n = Node(r.NODEID,r.X,r.Y) self.nodes[n.id] = n self.rawdata.nodes = node_table def get_node(self,id): try: return self.nodes[id] except KeyError: raise Exception('Node not defined: {}'.format(id)) ##test: f = Frame2D() ##test: f.install_nodes() ##test: f.nodes ##test: f.get_node('C')
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Supports
%%Table supports NODEID,C0,C1,C2 A,FX,FY,MZ D,FX,FY def isnan(x): if x is None: return True try: return np.isnan(x) except TypeError: return False @sl.extend(Frame2D) class Frame2D: COLUMNS_supports = ('NODEID','C0','C1','C2') def install_supports(self): table = self.get_table('supports') for ix,row in table.data.iterrows(): node = self.get_node(row.NODEID) for c in [row.C0,row.C1,row.C2]: if not isnan(c): node.add_constraint(c) self.rawdata.supports = table ##test: f.install_supports() vars(f.get_node('D'))
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Members
%%Table members MEMBERID,NODEJ,NODEK AB,A,B BC,B,C DC,D,C @sl.extend(Frame2D) class Frame2D: COLUMNS_members = ('MEMBERID','NODEJ','NODEK') def install_members(self): table = self.get_table('members') for ix,m in table.data.iterrows(): if m.MEMBERID in self.members: raise Exception('Multiply defined member: {}'.format(m.MEMBERID)) memb = Member(m.MEMBERID,self.get_node(m.NODEJ),self.get_node(m.NODEK)) self.members[memb.id] = memb self.rawdata.members = table def get_member(self,id): try: return self.members[id] except KeyError: raise Exception('Member not defined: {}'.format(id)) ##test: f.install_members() f.members ##test: m = f.get_member('BC') m.id, m.L, m.dcx, m.dcy
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Releases
%%Table releases MEMBERID,RELEASE AB,MZK @sl.extend(Frame2D) class Frame2D: COLUMNS_releases = ('MEMBERID','RELEASE') def install_releases(self): table = self.get_table('releases',optional=True) for ix,r in table.data.iterrows(): memb = self.get_member(r.MEMBERID) memb.add_release(r.RELEASE) self.rawdata.releases = table ##test: f.install_releases() ##test: vars(f.get_member('AB'))
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Properties If the SST module is loadable, member properties may be specified by giving steel shape designations (such as 'W310x97') in the member properties data. If the module is not available, you may still give $A$ and $I_x$ directly (it only tries to lookup the properties if these two are not provided).
try: from sst import SST __SST = SST() get_section = __SST.section except ImportError: def get_section(dsg,fields): raise ValueError('Cannot lookup property SIZE because SST is not available. SIZE = {}'.format(dsg)) ##return [1.] * len(fields.split(',')) # in case you want to do it that way %%Table properties MEMBERID,SIZE,IX,A BC,W460x106,, AB,W310x97,, DC,, @sl.extend(Frame2D) class Frame2D: COLUMNS_properties = ('MEMBERID','SIZE','IX','A') def install_properties(self): table = self.get_table('properties') table = self.fill_properties(table) for ix,row in table.data.iterrows(): memb = self.get_member(row.MEMBERID) memb.size = row.SIZE memb.Ix = row.IX memb.A = row.A self.rawdata.properties = table def fill_properties(self,table): data = table.data for ix,row in data.iterrows(): if type(row.SIZE) in [type(''),type(u'')]: if isnan(row.IX) or isnan(row.A): Ix,A = get_section(row.SIZE,'Ix,A') if isnan(row.IX): data.loc[ix,'IX'] = Ix if isnan(row.A): data.loc[ix,'A'] = A elif isnan(row.SIZE): data.loc[ix,'SIZE'] = '' table.data = data.fillna(method='ffill') return table ##test: f.install_properties() ##test: vars(f.get_member('DC'))
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Node Loads
%%Table node_loads LOAD,NODEID,DIRN,F Wind,B,FX,-200000. @sl.extend(Frame2D) class Frame2D: COLUMNS_node_loads = ('LOAD','NODEID','DIRN','F') def install_node_loads(self): table = self.get_table('node_loads') dirns = ['FX','FY','FZ'] for ix,row in table.data.iterrows(): n = self.get_node(row.NODEID) if row.DIRN not in dirns: raise ValueError("Invalid node load direction: {} for load {}, node {}; must be one of '{}'" .format(row.DIRN, row.LOAD, row.NODEID, ', '.join(dirns))) l = makeNodeLoad({row.DIRN:row.F}) self.nodeloads.append(row.LOAD,n,l) self.rawdata.node_loads = table ##test: f.install_node_loads() ##test: for o,l,fact in f.nodeloads.iterloads('Wind'): print(o,l,fact,l*fact)
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Member Loads
%%Table member_loads LOAD,MEMBERID,TYPE,W1,W2,A,B,C Live,BC,UDL,-50,,,, Live,BC,PL,-200000,,5000 @sl.extend(Frame2D) class Frame2D: COLUMNS_member_loads = ('LOAD','MEMBERID','TYPE','W1','W2','A','B','C') def install_member_loads(self): table = self.get_table('member_loads') for ix,row in table.data.iterrows(): m = self.get_member(row.MEMBERID) l = makeMemberLoad(m.L,row) self.memberloads.append(row.LOAD,m,l) self.rawdata.member_loads = table ##test: f.install_member_loads() ##test: for o,l,fact in f.memberloads.iterloads('Live'): print(o.id,l,fact,l.fefs()*fact)
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Load Combinations
%%Table load_combinations COMBO,LOAD,FACTOR One,Live,1.5 One,Wind,1.75 @sl.extend(Frame2D) class Frame2D: COLUMNS_load_combinations = ('COMBO','LOAD','FACTOR') def install_load_combinations(self): table = self.get_table('load_combinations') for ix,row in table.data.iterrows(): self.loadcombinations.append(row.COMBO,row.LOAD,row.FACTOR) self.rawdata.load_combinations = table ##test: f.install_load_combinations() ##test: for o,l,fact in f.loadcombinations.iterloads('One',f.nodeloads): print(o.id,l,fact) for o,l,fact in f.loadcombinations.iterloads('One',f.memberloads): print(o.id,l,fact,l.fefs()*fact)
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Load Iterators
@sl.extend(Frame2D) class Frame2D: def iter_nodeloads(self,comboname): for o,l,f in self.loadcombinations.iterloads(comboname,self.nodeloads): yield o,l,f def iter_memberloads(self,comboname): for o,l,f in self.loadcombinations.iterloads(comboname,self.memberloads): yield o,l,f ##test: for o,l,fact in f.iter_nodeloads('One'): print(o.id,l,fact) for o,l,fact in f.iter_memberloads('One'): print(o.id,l,fact)
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Support Constraints
%%Table supports NODEID,C0,C1,C2 A,FX,FY,MZ D,FX,FY
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
Accumulated Cell Data
##test: Table.CELLDATA
Devel/Old/v04-old/Milestones/Frame2D-v04-Milestone2.ipynb
nholtz/structural-analysis
cc0-1.0
We call the simulator
from robots.simuladores import simulador

%matplotlib widget
ts, xs = simulador(puerto_zmq="5551", f=f, x0=[0, 0, 0, 0], dt=0.02)
Practicas/practica2/numerico.ipynb
robblack007/clase-dinamica-robot
mit
Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function:
def hat(x, a, b):
    return -a*x**2 + b*x**4

assert hat(0.0, 1.0, 1.0) == 0.0
assert hat(0.0, 1.0, 1.0) == 0.0
assert hat(1.0, 10.0, 1.0) == -9.0
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
a = 5.0
b = 1.0
x = np.linspace(-3, 3, 1000)
plt.plot(x, hat(x, a, b))

assert True # leave this to grade the plot
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beautiful and effective.
min1 = opt.minimize(hat, x0=-1.7, args=(a, b))
min2 = opt.minimize(hat, x0=1.7, args=(a, b))
print(min1, min2)
print('Our minima are at x=-1.58113883 and x=1.58113883')

plt.figure(figsize=(7, 5))
plt.plot(x, hat(x, a, b), color='b', label='hat potential')
plt.box(False)
plt.title('Hat Potential')
plt.scatter(x=-1.58113883, y=hat(x=-1.58113883, a=5, b=1), color='r', label='min1')
plt.scatter(x=1.58113883, y=hat(x=1.58113883, a=5, b=1), color='r', label='min2')
plt.legend()

assert True # leave this for grading the plot
assignments/assignment11/OptimizationEx01.ipynb
phungkh/phys202-2015-work
mit
First, we just compute the Python EVZ and display a sample. The "scores()" method returns a list of centrality scores in order of the vertices. Thus, what you see below are the (normalized, see the respective argument) centrality scores for G.nodes()[0], G.nodes()[1], ...
evzSciPy = networkit.centrality.SciPyEVZ(G, normalized=True)
evzSciPy.run()
evzSciPy.scores()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
We now take a look at the 10 most central vertices according to the four heuristics. Here, the centrality algorithms offer the ranking() method that returns a list of (vertex, centrality) ordered by centrality.
evzSciPy.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Compute the EVZ using the C++ backend and also display the 10 most important vertices, just as above. This should hopefully look similar... Please note: The normalization argument may not be passed as a named argument to the C++-backed centrality measures. This is due to some limitation in the C++ wrapping code.
evz = networkit.centrality.EigenvectorCentrality(G, True)
evz.run()
evz.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Now, let's take a look at PageRank. First, compute the PageRank using the C++ backend and display the 10 most important vertices. The second argument to the algorithm is the damping factor, i.e. the probability that the random walk follows an edge at each step; with the remaining probability it teleports to some other vertex.
pageRank = networkit.centrality.PageRank(G, 0.95, True)
pageRank.run()
pageRank.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Same in Python...
SciPyPageRank = networkit.centrality.SciPyPageRank(G, 0.95, normalized=True)
SciPyPageRank.run()
SciPyPageRank.ranking()[:10]
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
If everything went well, these should look similar, too. Finally, we take a look at the relative differences between the computed centralities for the vertices:
differences = [(max(x[0], x[1]) / min(x[0], x[1])) - 1 for x in zip(evz.scores(), evzSciPy.scores())]
print("Average relative difference: {}".format(sum(differences) / len(differences)))
print("Maximum relative difference: {}".format(max(differences)))
Doc/uploads/docs/SpectralCentrality.ipynb
fmaschler/networkit
mit
Loading Data. For this example notebook, we'll be using the elevators UCI dataset used in the paper. Running the next cell downloads a copy of the dataset that has already been scaled and normalized appropriately. For this notebook, we'll simply be splitting the data, using the first 80% of the data as training and the last 20% as testing. Note: Running the next cell will attempt to download a small dataset file (elevators.mat) one directory above the current directory.
import urllib.request import os from scipy.io import loadmat from math import floor # this is for running the notebook in our testing framework smoke_test = ('CI' in os.environ) if not smoke_test and not os.path.isfile('../elevators.mat'): print('Downloading \'elevators\' UCI dataset...') urllib.request.urlretrieve('https://drive.google.com/uc?export=download&id=1jhWL3YUHvXIaftia4qeAyDwVxo6j1alk', '../elevators.mat') if smoke_test: # this is for running the notebook in our testing framework X, y = torch.randn(100, 3), torch.randn(100) else: data = torch.Tensor(loadmat('../elevators.mat')['data']) X = data[:, :-1] X = X - X.min(0)[0] X = 2 * (X / X.max(0)[0]) - 1 y = data[:, -1] train_n = int(floor(0.8 * len(X))) train_x = X[:train_n, :].contiguous() train_y = y[:train_n].contiguous() test_x = X[train_n:, :].contiguous() test_y = y[train_n:].contiguous() if torch.cuda.is_available(): train_x, train_y, test_x, test_y = train_x.cuda(), train_y.cuda(), test_x.cuda(), test_y.cuda()
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
LOVE can be used with any type of GP model, including exact GPs, multitask models and scalable approximations. Here we demonstrate LOVE in conjunction with KISS-GP, which has the amazing property of producing constant time variances. The KISS-GP + LOVE GP Model. We now define the GP model. For more details on the use of GP models, see our simpler examples. This model uses a GridInterpolationKernel (SKI) with a Deep RBF base kernel. The forward method passes the input data x through the neural network feature extractor, scales the resulting features to be between 0 and 1, and then calls the kernel. The Deep RBF kernel (DKL) uses a neural network as an initial feature extractor. In this case, we use a fully connected network with the architecture d -> 1000 -> 500 -> 50 -> 2, as described in the original DKL paper. All of the code below uses standard PyTorch implementations of neural network layers.
class LargeFeatureExtractor(torch.nn.Sequential): def __init__(self, input_dim): super(LargeFeatureExtractor, self).__init__() self.add_module('linear1', torch.nn.Linear(input_dim, 1000)) self.add_module('relu1', torch.nn.ReLU()) self.add_module('linear2', torch.nn.Linear(1000, 500)) self.add_module('relu2', torch.nn.ReLU()) self.add_module('linear3', torch.nn.Linear(500, 50)) self.add_module('relu3', torch.nn.ReLU()) self.add_module('linear4', torch.nn.Linear(50, 2)) class GPRegressionModel(gpytorch.models.ExactGP): def __init__(self, train_x, train_y, likelihood): super(GPRegressionModel, self).__init__(train_x, train_y, likelihood) self.mean_module = gpytorch.means.ConstantMean() self.covar_module = gpytorch.kernels.GridInterpolationKernel( gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()), grid_size=100, num_dims=2, ) # Also add the deep net self.feature_extractor = LargeFeatureExtractor(input_dim=train_x.size(-1)) def forward(self, x): # We're first putting our data through a deep net (feature extractor) # We're also scaling the features so that they're nice values projected_x = self.feature_extractor(x) projected_x = projected_x - projected_x.min(0)[0] projected_x = 2 * (projected_x / projected_x.max(0)[0]) - 1 # The rest of this looks like what we've seen mean_x = self.mean_module(projected_x) covar_x = self.covar_module(projected_x) return gpytorch.distributions.MultivariateNormal(mean_x, covar_x) likelihood = gpytorch.likelihoods.GaussianLikelihood() model = GPRegressionModel(train_x, train_y, likelihood) if torch.cuda.is_available(): model = model.cuda() likelihood = likelihood.cuda()
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Training the model The cell below trains the GP model, finding optimal hyperparameters using Type-II MLE. We run 20 iterations of training using the Adam optimizer built in to PyTorch. With a decent GPU, this should only take a few seconds.
training_iterations = 1 if smoke_test else 20 # Find optimal model hyperparameters model.train() likelihood.train() # Use the adam optimizer optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters # "Loss" for GPs - the marginal log likelihood mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model) def train(): iterator = tqdm.notebook.tqdm(range(training_iterations)) for i in iterator: optimizer.zero_grad() output = model(train_x) loss = -mll(output, train_y) loss.backward() iterator.set_postfix(loss=loss.item()) optimizer.step() %time train()
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Computing predictive variances (KISS-GP or Exact GPs). Using standard computations (without LOVE). The next cell gets the predictive covariance for the test set (and also technically gets the predictive mean, stored in preds.mean) using the standard SKI testing code, with no acceleration or precomputation. Note: Full predictive covariance matrices (and the computations needed to get them) can be quite memory intensive. Depending on the memory available on your GPU, you may need to reduce the size of the test set for the code below to run. If you run out of memory, try replacing test_x below with something like test_x[:1000] to use the first 1000 test points only, and then restart the notebook.
import time # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(): start_time = time.time() preds = likelihood(model(test_x)) exact_covar = preds.covariance_matrix exact_covar_time = time.time() - start_time print(f"Time to compute exact mean + covariances: {exact_covar_time:.2f}s")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Using LOVE. Next we compute predictive covariances (and the predictive means) for LOVE, but starting from scratch. That is, we don't yet have access to the precomputed cache discussed in the paper. This should still be faster than the full covariance computation code above. To use LOVE, use the context manager with gpytorch.settings.fast_pred_var(). You can also set some of the LOVE settings with context managers as well. For example, gpytorch.settings.max_root_decomposition_size(100) affects the accuracy of the LOVE solves (larger is more accurate, but slower). In this simple example, we allow a rank 100 root decomposition, although a much smaller rank (20-40) should not affect the timing results substantially.
# Clear the cache from the previous computations model.train() likelihood.train() # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(100): start_time = time.time() preds = model(test_x) fast_time_no_cache = time.time() - start_time
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
The above cell additionally computed the caches required to get fast predictions. From this point onwards, unless we put the model back in training mode, predictions should be extremely fast. The cell below re-runs the above code, but takes full advantage of both the mean cache and the LOVE cache for variances.
with torch.no_grad(), gpytorch.settings.fast_pred_var(): start_time = time.time() preds = likelihood(model(test_x)) fast_covar = preds.covariance_matrix fast_time_with_cache = time.time() - start_time print('Time to compute mean + covariances (no cache) {:.2f}s'.format(fast_time_no_cache)) print('Time to compute mean + variances (cache): {:.2f}s'.format(fast_time_with_cache))
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Compute Error between Exact and Fast Variances Finally, we compute the mean absolute error between the fast variances computed by LOVE (stored in fast_covar), and the exact variances computed previously. Note that these tests were run with a root decomposition of rank 10, which is about the minimum you would realistically ever run with. Despite this, the fast variance estimates are quite good. If more accuracy was needed, increasing max_root_decomposition_size would provide even better estimates.
mae = ((exact_covar - fast_covar).abs() / exact_covar.abs()).mean() print(f"MAE between exact covar matrix and fast covar matrix: {mae:.6f}")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Computing posterior samples (KISS-GP only). With KISS-GP models, LOVE can also be used to draw fast posterior samples. (The same does not apply to exact GP models.) Drawing samples the standard way (without LOVE). We now draw samples from the posterior distribution. Without LOVE, we accomplish this by performing a Cholesky decomposition of the posterior covariance matrix. This can be slow for large covariance matrices.
import time num_samples = 20 if smoke_test else 20000 # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(): start_time = time.time() exact_samples = model(test_x).rsample(torch.Size([num_samples])) exact_sample_time = time.time() - start_time print(f"Time to compute exact samples: {exact_sample_time:.2f}s")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Using LOVE Next we compute posterior samples (and the predictive means) using LOVE. This requires the additional context manager with gpytorch.settings.fast_pred_samples():. Note that we also need the with gpytorch.settings.fast_pred_var(): flag turned on. Both context managers respond to the gpytorch.settings.max_root_decomposition_size(100) setting.
# Clear the cache from the previous computations model.train() likelihood.train() # Set into eval mode model.eval() likelihood.eval() with torch.no_grad(), gpytorch.settings.fast_pred_var(), gpytorch.settings.max_root_decomposition_size(200): # NEW FLAG FOR SAMPLING with gpytorch.settings.fast_pred_samples(): start_time = time.time() _ = model(test_x).rsample(torch.Size([num_samples])) fast_sample_time_no_cache = time.time() - start_time # Repeat the timing now that the cache is computed with torch.no_grad(), gpytorch.settings.fast_pred_var(): with gpytorch.settings.fast_pred_samples(): start_time = time.time() love_samples = model(test_x).rsample(torch.Size([num_samples])) fast_sample_time_cache = time.time() - start_time print('Time to compute LOVE samples (no cache) {:.2f}s'.format(fast_sample_time_no_cache)) print('Time to compute LOVE samples (cache) {:.2f}s'.format(fast_sample_time_cache))
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Compute the empirical covariance matrices Let's see how well LOVE samples and exact samples recover the true covariance matrix.
# Compute exact posterior covar with torch.no_grad(): start_time = time.time() posterior = model(test_x) mean, covar = posterior.mean, posterior.covariance_matrix exact_empirical_covar = ((exact_samples - mean).t() @ (exact_samples - mean)) / num_samples love_empirical_covar = ((love_samples - mean).t() @ (love_samples - mean)) / num_samples exact_empirical_error = ((exact_empirical_covar - covar).abs()).mean() love_empirical_error = ((love_empirical_covar - covar).abs()).mean() print(f"Empirical covariance MAE (Exact samples): {exact_empirical_error}") print(f"Empirical covariance MAE (LOVE samples): {love_empirical_error}")
examples/02_Scalable_Exact_GPs/Simple_GP_Regression_With_LOVE_Fast_Variances_and_Sampling.ipynb
jrg365/gpytorch
mit
Package up a log-posterior function.
def lnPost(params, x, y):
    # This is written for clarity rather than numerical efficiency. Feel free to tweak it.
    a = params[0]
    b = params[1]
    lnp = 0.0
    # Using informative priors to achieve faster convergence is cheating in this exercise!
    # But this is where you would add them.
    lnp += -0.5*np.sum((a+b*x - y)**2)
    return lnp
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Convenience functions encoding the exact posterior:
class ExactPosterior: def __init__(self, x, y, a0, b0): X = np.matrix(np.vstack([np.ones(len(x)), x]).T) Y = np.matrix(y).T self.invcov = X.T * X self.covariance = np.linalg.inv(self.invcov) self.mean = self.covariance * X.T * Y self.a_array = np.arange(0.0, 6.0, 0.02) self.b_array = np.arange(0.0, 3.25, 0.02) self.P_of_a = np.array([self.marg_a(a) for a in self.a_array]) self.P_of_b = np.array([self.marg_b(b) for b in self.b_array]) self.P_of_ab = np.array([[self.lnpost(a,b) for a in self.a_array] for b in self.b_array]) self.P_of_ab = np.exp(self.P_of_ab) self.renorm = 1.0/np.sum(self.P_of_ab) self.P_of_ab = self.P_of_ab * self.renorm self.levels = scipy.stats.chi2.cdf(np.arange(1,4)**2, 1) # confidence levels corresponding to contours below self.contourLevels = self.renorm*np.exp(self.lnpost(a0,b0)-0.5*scipy.stats.chi2.ppf(self.levels, 2)) def lnpost(self, a, b): # the 2D posterior z = self.mean - np.matrix([[a],[b]]) return -0.5 * (z.T * self.invcov * z)[0,0] def marg_a(self, a): # marginal posterior of a return scipy.stats.norm.pdf(a, self.mean[0,0], np.sqrt(self.covariance[0,0])) def marg_b(self, b): # marginal posterior of b return scipy.stats.norm.pdf(b, self.mean[1,0], np.sqrt(self.covariance[1,1])) exact = ExactPosterior(x, y, a, b)
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Demo some plots of the exact posterior distribution
plt.plot(exact.a_array, exact.P_of_a);
plt.plot(exact.b_array, exact.P_of_b);
plt.contour(exact.a_array, exact.b_array, exact.P_of_ab, colors='blue', levels=exact.contourLevels);
plt.plot(a, b, 'o', color='red');
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Ok, you're almost ready to go! A decidedly minimal stub of a Metropolis loop appears below; of course, you don't need to stick exactly with this layout. Once again, after running a chain, be sure to
* visually inspect traces of each parameter to see whether they appear converged;
* compare the marginal and joint posterior distributions to the exact solution to check whether they've converged to the correct distribution.

Normally, you should always use quantitative tests of convergence in addition to visual inspection, as you saw on Tuesday. For this class (only), let's save some time by relying only on visual impressions and comparison to the exact posterior (see the snippets farther down). If you think you have a sampler that works well, use it to run some more chains from different starting points and compare them both visually and using the numerical convergence criteria covered in class; a minimal sketch of one such criterion appears after the stub. Once you have a working sampler, the question is: how can we make it converge faster? Experiment! We'll compare notes in a bit.
Nsamples = 501**2
samples = np.zeros((Nsamples, 2))

# put any more global definitions here

def proposal(a_try, b_try, temperature):
    a = a_try + temperature*np.random.randn(1)
    b = b_try + temperature*np.random.randn(1)
    return a, b

def we_accept_this_proposal(lnp_try, lnp_current):
    return np.exp(lnp_try - lnp_current) > np.random.uniform()

temperature = 0.1
a_current, b_current = proposal(0, 0, temperature)
lnp_current = lnPost([a_current, b_current], x, y)

for i in range(Nsamples):
    a_try, b_try = proposal(a_current, b_current, temperature)  # propose new parameter value(s)
    lnp_try = lnPost([a_try, b_try], x, y)  # calculate posterior density for the proposal
    if we_accept_this_proposal(lnp_try, lnp_current):
        lnp_current = lnp_try
        a_current, b_current = (a_try, b_try)
    else:
        pass
    samples[i, 0] = a_current
    samples[i, 1] = b_current

plt.rcParams['figure.figsize'] = (12.0, 3.0)
plt.plot(samples[:,0])
plt.plot(samples[:,1]);

plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.plot(samples[:,0], samples[:,1]);

plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,0], 20, normed=True, color='cyan');
plt.plot(exact.a_array, exact.P_of_a, color='red');

plt.rcParams['figure.figsize'] = (5.0, 5.0)
plt.hist(samples[:,1], 20, normed=True, color='cyan');
plt.plot(exact.b_array, exact.P_of_b, color='red');

# If you know how to easily overlay the 2D sample and theoretical confidence regions, by all means do so.
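As a complement to visual inspection, here is a minimal sketch (not part of the original exercise) of the Gelman-Rubin $\hat{R}$ statistic for a single parameter, computed from several independent chains stored as rows of a 2D array; values close to 1 suggest convergence. The chain arrays in the usage comment are hypothetical.

```python
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n), i.e. m chains of length n for one parameter."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                    # within-chain variance
    B = n * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_hat / W)

# Hypothetical usage with two chains of 'a' samples from separate runs:
# Rhat_a = gelman_rubin(np.vstack([samples_run1[:, 0], samples_run2[:, 0]]))
```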
notes/InferenceSandbox.ipynb
seniosh/StatisticalMethods
gpl-2.0
Guess Who! The game of Guess Who consists of guessing the character your opponent has selected before he/she guesses yours. The dynamics of the game are:
* Each player picks a character at random.
* Taking turns, each player asks yes/no questions and tries to guess the opponent's character.
* Valid questions are based on the characters' appearance and should be easy to answer.
* Example of a valid question: Does the character have black hair?
* Example of an invalid question: Does the character look like an ex-convict?

Next, we load the board with the characters.
Image('data/guess_who_board.jpg', width=700)
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Loading the data. To load the data we will use pandas' read_csv function. Pandas has a wide range of functions for loading data; more information can be found in the API documentation.
df = pd.read_csv('data/guess_who.csv', index_col='observacion')
df.head()
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
How many characters do we have with each characteristic?
# Separate the variable types
categorical_var = 'color de cabello'
binary_vars = list(set(df.keys()) - set([categorical_var, 'NOMBRE']))

# For the boolean variables, compute the sum
df[binary_vars].sum()

# For the categorical variable, look at the frequency of each category
df[categorical_var].value_counts()

labels = df['NOMBRE']
del df['NOMBRE']
df.head()
labels
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Encoding of categorical variables
from sklearn.feature_extraction import DictVectorizer

vectorizer = DictVectorizer(sparse=False)
ab = vectorizer.fit_transform(df.to_dict('records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head()
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Training a decision tree
from sklearn.tree import DecisionTreeClassifier

classifier = DecisionTreeClassifier(criterion='entropy', splitter='random', random_state=42)
classifier.fit(dft, labels)
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Obtaining the weight of each feature
classifier.feature_importances_

feat = pd.DataFrame(index=dft.keys(), data=classifier.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
feat.plot(kind='bar', rot=85, figsize=(10, 4))
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Bonus: visualizing the tree (requires graphviz: conda install graphviz)
from sklearn.tree import export_graphviz

dotfile = open('guess_who_tree.dot', 'w')
export_graphviz(
    classifier,
    out_file=dotfile,
    filled=True,
    feature_names=dft.columns,
    class_names=list(labels),
    rotate=True,
    max_depth=1,
    rounded=True,
)
dotfile.close()

!dot -Tpng guess_who_tree.dot -o guess_who_tree.png
Image('guess_who_tree.png', width=1000)
ml_miguel/Crackeando el guess who.ipynb
PyDataMallorca/WS_Introduction_to_data_science
gpl-3.0
Warm-up exercises Exercise: Suppose that goal scoring in hockey is well modeled by a Poisson process, and that the long-run goal-scoring rate of the Boston Bruins against the Vancouver Canucks is 2.9 goals per game. In their next game, what is the probability that the Bruins score exactly 3 goals? Plot the PMF of k, the number of goals they score in a game.
# Solution goes here

# Solution goes here

# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: Assuming again that the goal scoring rate is 2.9, what is the probability of scoring a total of 9 goals in three games? Answer this question two ways: Compute the distribution of goals scored in one game and then add it to itself twice to find the distribution of goals scored in 3 games. Use the Poisson PMF with parameter $\lambda t$, where $\lambda$ is the rate in goals per game and $t$ is the duration in games.
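One way to sanity-check the two approaches, using scipy.stats directly rather than the thinkbayes2 helpers the exercise has in mind (a sketch, with the rate taken from the exercise statement):

```python
import numpy as np
from scipy.stats import poisson

lam = 2.9
# Way 1: convolve the one-game PMF with itself twice to get the 3-game distribution
ks = np.arange(0, 60)
one_game = poisson.pmf(ks, lam)
three_games = np.convolve(np.convolve(one_game, one_game), one_game)
print(three_games[9])

# Way 2: a Poisson PMF with parameter lam * t, where t = 3 games
print(poisson.pmf(9, 3 * lam))
```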
# Solution goes here

# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: Suppose that the long-run goal-scoring rate of the Canucks against the Bruins is 2.6 goals per game. Plot the distribution of t, the time until the Canucks score their first goal. In their next game, what is the probability that the Canucks score during the first period (that is, the first third of the game)? Hint: thinkbayes2 provides MakeExponentialPmf and EvalExponentialCdf.
# Solution goes here

# Solution goes here

# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: Assuming again that the goal scoring rate is 2.8, what is the probability that the Canucks get shut out (that is, don't score for an entire game)? Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution.
# Solution goes here

# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
The Boston Bruins problem The Hockey suite contains hypotheses about the goal scoring rate for one team against the other. The prior is Gaussian, with mean and variance based on previous games in the league. The Likelihood function takes as data the number of goals scored in a game.
from thinkbayes2 import MakeNormalPmf from thinkbayes2 import EvalPoissonPmf class Hockey(Suite): """Represents hypotheses about the scoring rate for a team.""" def __init__(self, label=None): """Initializes the Hockey object. label: string """ mu = 2.8 sigma = 0.3 pmf = MakeNormalPmf(mu, sigma, num_sigmas=4, n=101) Suite.__init__(self, pmf, label=label) def Likelihood(self, data, hypo): """Computes the likelihood of the data under the hypothesis. Evaluates the Poisson PMF for lambda and k. hypo: goal scoring rate in goals per game data: goals scored in one game """ lam = hypo k = data like = EvalPoissonPmf(k, lam) return like
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we can initialize a suite for each team:
suite1 = Hockey('bruins')
suite2 = Hockey('canucks')
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the priors look like:
thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game', ylabel='Probability')
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
And we can update each suite with the scores from the first 4 games.
suite1.UpdateSet([0, 2, 8, 4])
suite2.UpdateSet([1, 3, 1, 0])

thinkplot.PrePlot(num=2)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
thinkplot.Config(xlabel='Goals per game', ylabel='Probability')

suite1.Mean(), suite2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons:
from thinkbayes2 import MakeMixture from thinkbayes2 import MakePoissonPmf def MakeGoalPmf(suite, high=10): """Makes the distribution of goals scored, given distribution of lam. suite: distribution of goal-scoring rate high: upper bound returns: Pmf of goals per game """ metapmf = Pmf() for lam, prob in suite.Items(): pmf = MakePoissonPmf(lam, high) metapmf.Set(pmf, prob) mix = MakeMixture(metapmf, label=suite.label) return mix
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the results look like.
goal_dist1 = MakeGoalPmf(suite1)
goal_dist2 = MakeGoalPmf(suite2)

thinkplot.PrePlot(num=2)
thinkplot.Pmf(goal_dist1)
thinkplot.Pmf(goal_dist2)
thinkplot.Config(xlabel='Goals', ylabel='Probability', xlim=[-0.7, 11.5])

goal_dist1.Mean(), goal_dist2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Now we can compute the probability that the Bruins win, lose, or tie in regulation time.
diff = goal_dist1 - goal_dist2
p_win = diff.ProbGreater(0)
p_loss = diff.ProbLess(0)
p_tie = diff.Prob(0)

print('Prob win, loss, tie:', p_win, p_loss, p_tie)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential, so the predictive distribution is a mixture of exponentials.
from thinkbayes2 import MakeExponentialPmf def MakeGoalTimePmf(suite): """Makes the distribution of time til first goal. suite: distribution of goal-scoring rate returns: Pmf of goals per game """ metapmf = Pmf() for lam, prob in suite.Items(): pmf = MakeExponentialPmf(lam, high=2.5, n=1001) metapmf.Set(pmf, prob) mix = MakeMixture(metapmf, label=suite.label) return mix
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Here's what the predictive distributions for t look like.
time_dist1 = MakeGoalTimePmf(suite1)
time_dist2 = MakeGoalTimePmf(suite2)

thinkplot.PrePlot(num=2)
thinkplot.Pmf(time_dist1)
thinkplot.Pmf(time_dist2)
thinkplot.Config(xlabel='Games until goal', ylabel='Probability')

time_dist1.Mean(), time_dist2.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t:
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2

print('p_win_in_overtime', p_win_in_overtime)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Finally, we can compute the overall chance that the Bruins win, either in regulation or overtime.
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercises Exercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it has on the results.
# Solution goes here
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Exercise: In the final match of the 2014 FIFA World Cup, Germany defeated Argentina 1-0. What is the probability that Germany had the better team? What is the probability that Germany would win a rematch? For a prior distribution on the goal-scoring rate for each team, use a gamma distribution with parameter 1.3.
from thinkbayes2 import MakeGammaPmf

xs = np.linspace(0, 8, 101)
pmf = MakeGammaPmf(xs, 1.3)
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Goals per game')
pmf.Mean()
code/.ipynb_checkpoints/chap07mine-checkpoint.ipynb
NathanYee/ThinkBayes2
gpl-2.0
Constrained problem First we set up an objective function (the townsend function) and a constraint function. We further assume both functions are black-box. We also define the optimization domain (2 continuous parameters).
# Objective & constraint def townsend(X): return -(np.cos((X[:,0]-0.1)*X[:,1])**2 + X[:,0] * np.sin(3*X[:,0]+X[:,1]))[:,None] def constraint(X): return -(-np.cos(1.5*X[:,0]+np.pi)*np.cos(1.5*X[:,1])+np.sin(1.5*X[:,0]+np.pi)*np.sin(1.5*X[:,1]))[:,None] # Setup input domain domain = gpflowopt.domain.ContinuousParameter('x1', -2.25, 2.5) + \ gpflowopt.domain.ContinuousParameter('x2', -2.5, 1.75) # Plot def plotfx(): X = gpflowopt.design.FactorialDesign(101, domain).generate() Zo = townsend(X) Zc = constraint(X) mask = Zc>=0 Zc[mask] = np.nan Zc[np.logical_not(mask)] = 1 Z = Zo * Zc shape = (101, 101) f, axes = plt.subplots(1, 1, figsize=(7, 5)) axes.contourf(X[:,0].reshape(shape), X[:,1].reshape(shape), Z.reshape(shape)) axes.set_xlabel('x1') axes.set_ylabel('x2') axes.set_xlim([domain.lower[0], domain.upper[0]]) axes.set_ylim([domain.lower[1], domain.upper[1]]) return axes plotfx();
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Modeling and joint acquisition function. We proceed by assigning the objective and constraint function a GP prior. Both functions are evaluated on a space-filling set of points (here, a Latin Hypercube design). Two GPR models are created. The EI is based on the model of the objective function (townsend), whereas PoF is based on the model of the constraint function. We then define the joint criterion as the product of the EI and PoF.
# Initial evaluations design = gpflowopt.design.LatinHyperCube(11, domain) X = design.generate() Yo = townsend(X) Yc = constraint(X) # Models objective_model = gpflow.gpr.GPR(X, Yo, gpflow.kernels.Matern52(2, ARD=True)) objective_model.likelihood.variance = 0.01 constraint_model = gpflow.gpr.GPR(np.copy(X), Yc, gpflow.kernels.Matern52(2, ARD=True)) constraint_model.kern.lengthscales.transform = gpflow.transforms.Log1pe(1e-3) constraint_model.likelihood.variance = 0.01 constraint_model.likelihood.variance.prior = gpflow.priors.Gamma(1./4.,1.0) # Setup ei = gpflowopt.acquisition.ExpectedImprovement(objective_model) pof = gpflowopt.acquisition.ProbabilityOfFeasibility(constraint_model) joint = ei * pof
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Initial belief We can now inspect our belief about the optimization problem by plotting the models, the EI, PoF and joint mappings. Both models clearly are not very accurate yet. More specifically, the constraint model does not correctly capture the feasibility yet.
def plot(): Xeval = gpflowopt.design.FactorialDesign(101, domain).generate() Yevala,_ = joint.operands[0].models[0].predict_f(Xeval) Yevalb,_ = joint.operands[1].models[0].predict_f(Xeval) Yevalc = np.maximum(ei.evaluate(Xeval), 0) Yevald = pof.evaluate(Xeval) Yevale = np.maximum(joint.evaluate(Xeval), 0) shape = (101, 101) plots = [('Objective model', Yevala), ('Constraint model', Yevalb), ('EI', Yevalc), ('PoF', Yevald), ('EI * PoF', Yevale)] plt.figure(figsize=(10,10)) for i, plot in enumerate(plots): if i == 4: ax = plt.subplot2grid((3, 4), (2, 1), colspan=2) else: ax = plt.subplot2grid((3, 2), (int(i/2), i % 2)) ax.contourf(Xeval[:,0].reshape(shape), Xeval[:,1].reshape(shape), plot[1].reshape(shape)) ax.scatter(joint.data[0][:,0], joint.data[0][:,1], c='w') ax.set_title(plot[0]) ax.set_xlabel('x1') ax.set_ylabel('x2') ax.set_xlim([domain.lower[0], domain.upper[0]]) ax.set_ylim([domain.lower[1], domain.upper[1]]) plt.tight_layout() # Plot representing the model belief, and the belief mapped to EI and PoF plot() print(constraint_model)
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Running Bayesian Optimizer Running the Bayesian optimization is the next step. For this, we must set up an appropriate strategy to optimize the joint acquisition function. Sometimes this can be a bit challenging as often large non-varying areas may occur. A typical strategy is to apply a Monte Carlo optimization step first, then optimize the point with the best value (several variations exist). This approach is followed here. We then run the Bayesian Optimization and allow it to select up to 50 additional decisions. The joint acquisition function assures the feasibility (w.r.t the constraint) is taken into account while selecting decisions for optimality.
# First setup the optimization strategy for the acquisition function # Combining MC step followed by L-BFGS-B acquisition_opt = gpflowopt.optim.StagedOptimizer([gpflowopt.optim.MCOptimizer(domain, 200), gpflowopt.optim.SciPyOptimizer(domain)]) # Then run the BayesianOptimizer for 50 iterations optimizer = gpflowopt.BayesianOptimizer(domain, joint, optimizer=acquisition_opt, verbose=True) result = optimizer.optimize([townsend, constraint], n_iter=50) print(result)
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Results If we now plot the belief, we clearly see the constraint model has improved significantly. More specifically, its PoF mapping is an accurate representation of the true constraint function. By multiplying the EI by the PoF, the search is restricted to the feasible regions.
# Plotting belief again print(constraint_model) plot()
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
If we inspect the sampling distribution, we can see that the amount of samples in the infeasible regions is limited. The optimization has focussed on the feasible areas. In addition, it has been active mostly in two optimal regions.
# Plot function, overlayed by the constraint. Also plot the samples axes = plotfx() valid = joint.feasible_data_index() axes.scatter(joint.data[0][valid,0], joint.data[0][valid,1], label='feasible data', c='w') axes.scatter(joint.data[0][np.logical_not(valid),0], joint.data[0][np.logical_not(valid),1], label='data', c='r'); axes.legend()
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
Finally, the evolution of the best value over the number of iterations clearly shows a very good solution is already found after only a few evaluations.
f, axes = plt.subplots(1, 1, figsize=(7, 5)) f = joint.data[1][:,0] f[joint.data[1][:,1] > 0] = np.inf axes.plot(np.arange(0, joint.data[0].shape[0]), np.minimum.accumulate(f)) axes.set_ylabel('fmin') axes.set_xlabel('Number of evaluated points');
doc/source/notebooks/constrained_bo.ipynb
GPflow/GPflowOpt
apache-2.0
The following data was generated using code that can be found on GitHub https://github.com/mtchem/Twitter-Politics/blob/master/data_wrangle/Data_Wrangle.ipynb
# load federal document data from a pickle file
fed_reg_data = r'data/fed_reg_data.pickle'
fed_data = pd.read_pickle(fed_reg_data)

# load twitter data from a pickle file
twitter_file_path = r'data/twitter_01_20_17_to_3-2-18.pickle'
twitter_data = pd.read_pickle(twitter_file_path)

len(fed_data)
EDA.ipynb
mtchem/Twitter-Politics
mit
In order to explore the twitter and executive document data I will look at the following:
* Determine the most used hashtags
* Determine who President Trump tweeted at (@) the most
* Create a word frequency plot for the most used words in the twitter data and the presidential documents
* Find words that both data sets have in common, and determine those words' document frequency (what percentage of documents those words appear in)
# imports import nltk nltk.download('stopwords') from nltk.corpus import stopwords import itertools from collections import Counter import matplotlib.pyplot as plt import seaborn as sns plt.style.use('ggplot')
EDA.ipynb
mtchem/Twitter-Politics
mit
Plot the most used hashtags and @ tags
# find the most used hashtags hashtag_freq = Counter(list(itertools.chain(*(twitter_data.hash_tags)))) hashtag_top20 = hashtag_freq.most_common(20) # find the most used @ tags at_tag_freq = Counter(list(itertools.chain(*(twitter_data['@_tags'])))) at_tags_top20 = at_tag_freq.most_common(20) print(hashtag_top20) # frequency plot for the most used hashtags df = pd.DataFrame(hashtag_top20, columns=['Hashtag', 'frequency']) df.plot(kind='bar', x='Hashtag',legend=None,fontsize = 15, figsize = (15,5)) plt.ylabel('Frequency',fontsize = 18) plt.xlabel('Hashtag', fontsize=18) plt.title('Most Common Hashtags', fontsize = 15) plt.show() # frequency plot for the most used @ tags df = pd.DataFrame(at_tags_top20, columns=['@ Tag', 'frequency']) df.plot(kind='bar', x='@ Tag',legend=None, figsize = (15,5)) plt.ylabel('Frequency',fontsize = 18) plt.xlabel('@ Tags', fontsize=18) plt.title('Most Common @ Tags', fontsize = 15) plt.show()
EDA.ipynb
mtchem/Twitter-Politics
mit
Top used words for the twitter data and the federal document data. First, define a list of words that carry no meaning, such as 'a', 'the', and punctuation.
# use nltk's list of stopwords
stop_words = set(stopwords.words('english'))
# add punctuation to stopwords
stop_words.update(['.', ',', 'get', 'going', 'one', 'amp', 'like', '"', '...', "''",
                   "'", "n't", '?', '!', ':', ';', '#', '@', '(', ')', 'https', '``', "'s", 'rt'])
EDA.ipynb
mtchem/Twitter-Politics
mit
Make a list of the hashtags and @ entities used in the twitter data
# combine the hashtags and @ tags, flatten the list of lists, keep the unique items
stop_twitter = set(list(itertools.chain(*(twitter_data.hash_tags + twitter_data['@_tags']))))
EDA.ipynb
mtchem/Twitter-Politics
mit
The federal document data also has some words that need to be removed. The words Federal Register and the date are at the top of every page, so they should be removed. Also, words like 'shall', 'order', and 'act' are used quite a bit but don't convey much meaning, so I'm going to remove those words as well.
stop_fed_docs = ['united', 'states', '1','2','3','4','5','6','7','8','9','10', '11','12', '13','14','15','16','17','18','19','20','21','22','23','24','25','26', '27','28','29','30','31','2016', '2015','2014','federal','shall', '4790', 'national', '2017', 'order','president', 'presidential', 'sep', 'register','po','verdate', 'jkt','00000','frm','fmt','sfmt','vol', 'section','donald','act','america', 'executive','secretary', 'law', 'proclamation','81','day','including', 'code', '4705','authority', 'agencies', '241001','americans','238001','year', 'amp','government','agency','hereby', 'people','public','person','state','american','two','nation', '82', 'sec', 'laws', 'policy','set','fr','appropriate','doc','new','filed','u.s.c', 'department','ii','also','office','country','within','memorandum', 'director', 'us', 'sunday','monday', 'tuesday','wednesday','thursday', 'friday', 'saturday','title','upon','constitution','support', 'vested', 'part', 'month', 'subheading', 'foreign','general','january', 'february', 'march', 'april','may','june','july','august', 'september', 'october', 'november', 'december', 'council','provide','consistent','pursuant', 'thereof','00001','documents','11:15', 'area','management', 'following','house','white','week','therefore','amended', 'continue', 'chapter','must','years', '00002', 'use','make','date','one', 'many','12', 'commission','provisions', 'every','u.s.','functions', 'made','hand','necessary', 'witness','time','otherwise', 'proclaim', 'follows','thousand','efforts','jan', 'trump','j.', 'applicable', '4717','whereof','hereunto', 'subject', 'report', '3—', '3295–f7–p']
EDA.ipynb
mtchem/Twitter-Politics
mit
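Many of the entries above are plain numerals. A more compact option (a sketch of an alternative approach, not the one used in this notebook) is to drop purely numeric tokens programmatically and keep the hand-curated list for actual words:

# hypothetical helper: filter out tokens that consist only of digits
def drop_numeric_tokens(token_lst):
    return [word for word in token_lst if not word.isdigit()]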
Create functions that remove the stop words for each of the datasets
def remove_from_fed_data(token_lst): # remove stopwords and one letter words filtered_lst = [word for word in token_lst if word.lower() not in stop_fed_docs and len(word) > 1 and word.lower() not in stop_words] return filtered_lst def remove_from_twitter_data(token_lst): # remove stopwords and one letter words filtered_lst = [word for word in token_lst if word.lower() not in stop_words and len(word) > 1 and word.lower() not in stop_twitter] return filtered_lst
EDA.ipynb
mtchem/Twitter-Politics
mit
Remove all of the stop words from the tokenized twitter and document data
# apply the remove_stopwords function to all of the tokenized twitter text twitter_words = twitter_data.text_tokenized.apply(lambda x: remove_from_twitter_data(x)) # apply the remove_stopwords function to all of the tokenized document text document_words = fed_data.token_text.apply(lambda x: remove_from_fed_data(x)) # flatten each the word lists into one list all_twitter_words = list(itertools.chain(*twitter_words)) all_document_words =list(itertools.chain(*document_words))
EDA.ipynb
mtchem/Twitter-Politics
mit
Count how many times each word is used for both datasets
# create a dictionary using the Counter method, where the key is a word and the value is the number of times it was used
twitter_freq = Counter(all_twitter_words)
doc_freq = Counter(all_document_words)
# determine the top 30 words used in each dataset
top_30_tweet = twitter_freq.most_common(30)
top_30_fed = doc_freq.most_common(30)
EDA.ipynb
mtchem/Twitter-Politics
mit
Plot the most used words for the twitter data and the federal document data
# frequency plot for the most used Federal Data df = pd.DataFrame(top_30_fed, columns=['Federal Data', 'frequency']) df.plot(kind='bar', x='Federal Data',legend=None, figsize = (15,5)) plt.ylabel('Frequency',fontsize = 18) plt.xlabel('Words', fontsize=18) plt.title('Most Used Words that Occured in the Federal Data', fontsize = 15) plt.show() # frequency plot for the most used words in the twitter data df = pd.DataFrame(top_30_tweet, columns=['Twitter Data', 'frequency']) df.plot(kind='bar', x='Twitter Data',legend=None, figsize = (15,5)) plt.ylabel('Frequency',fontsize = 18) plt.xlabel('Words', fontsize=18) plt.title('Most Used Words that Occured in the Twitter Data', fontsize = 15) plt.show()
EDA.ipynb
mtchem/Twitter-Politics
mit
Determine all of the words that are used in both datasets
# find the unique words in each dataset joint_words = list((set(all_document_words)).intersection(all_twitter_words))
EDA.ipynb
mtchem/Twitter-Politics
mit
Create a dictionary with the unique joint words as keys
# make array of zeros values = np.zeros(len(joint_words)) # create dictionary joint_words_dict = dict(zip(joint_words, values))
EDA.ipynb
mtchem/Twitter-Politics
mit
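The same dictionary can be built in one step with dict.fromkeys; an equivalent one-liner, shown only for reference:

# every joint word starts with a count of zero
joint_words_dict_alt = dict.fromkeys(joint_words, 0)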
Create dictionaries for both datasets with document frequency for each joint word
# create a dictionary with a word as key, and a value = number of documents that contain the word for Twitter twitter_document_freq = joint_words_dict.copy() for word in joint_words: for lst in twitter_data.text_tokenized: if word in lst: twitter_document_freq[word]= twitter_document_freq[word] + 1 # create a dictionary with a word as key, and a value = number of documents that contain the word for Fed Data fed_document_freq = joint_words_dict.copy() for word in joint_words: for lst in fed_data.token_text: if word in lst: fed_document_freq[word]= fed_document_freq[word] + 1
EDA.ipynb
mtchem/Twitter-Politics
mit
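The nested loops above re-scan every token list for every joint word, which scales as O(words × documents × tokens). Converting each document's tokens to a set once makes the membership test O(1); a sketch of the same counts under that approach:

# pre-compute one set of tokens per document, then count membership per word
twitter_token_sets = [set(lst) for lst in twitter_data.text_tokenized]
fed_token_sets = [set(lst) for lst in fed_data.token_text]

twitter_document_freq_fast = {word: sum(word in s for s in twitter_token_sets) for word in joint_words}
fed_document_freq_fast = {word: sum(word in s for s in fed_token_sets) for word in joint_words}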
Create dataframe with the word and the document percentage for each data set
df = pd.DataFrame([fed_document_freq, twitter_document_freq]).T
df.columns = ['Fed', 'Tweet']
# express document frequency as a percentage of the number of documents in each corpus
df['% Fed'] = (df.Fed/len(fed_data))*100
df['% Tweet'] = (df.Tweet/len(twitter_data))*100

top_joint_fed = df[['% Fed','% Tweet']].sort_values(by='% Fed', ascending=False)[0:50]
top_joint_tweet = df[['% Fed','% Tweet']].sort_values(by='% Tweet', ascending=False)[0:50]

top_joint_fed.plot.bar(figsize=(14,5))
plt.show()

top_joint_tweet.plot.bar(figsize=(14,5))
plt.show()

df['diff %'] = df['% Fed'] - df['% Tweet']
top_same = df[df['diff %'] == 0].sort_values(by='% Fed', ascending=False)[0:50]
top_same[['% Fed', '% Tweet']].plot.bar(figsize=(14,5))
plt.show()
EDA.ipynb
mtchem/Twitter-Politics
mit
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images belonging to one of the following classes: * airplane 0 * automobile 1 * bird 2 * cat 3 * deer 4 * dog 5 * frog 6 * horse 7 * ship 8 * truck 9 Total 10 classes (Aras changed above/this section a bit) Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function ## image data shape = [t, i,j,k], t= num_img_per_batch (basically the list of images), i,j,k=height,width, and depth/channel return x/255 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
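The x/255 approach above assumes 8-bit pixel values in [0, 255]. A range-agnostic alternative (shown only as an optional sketch, not required by the tests) is min-max scaling per batch:

def normalize_minmax(x):
    # rescale whatever range the batch has to [0, 1]; assumes the batch is not constant-valued
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min())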
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
# import helper ## I did this because sklearn.preprocessing was defined in there from sklearn import preprocessing ## from sklearn lib import preprocessing lib/sublib/functionality/class def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function ## This was in the helper.py which belongs to the generic helper functions # def display_image_predictions(features, labels, predictions): # n_classes = 10 # label_names = _load_label_names() # label_binarizer = LabelBinarizer() # label_binarizer.fit(range(n_classes)) # label_ids = label_binarizer.inverse_transform(np.array(labels)) label_binarizer = preprocessing.LabelBinarizer() ## instantiate and initialized the one-hot encoder from class to one-hot n_class = 10 ## total num_classes label_binarizer.fit(range(n_class)) ## fit the one-vec to the range of number of classes, 10 in this case (dataset) return label_binarizer.transform(x) ## transform the class labels to one-hot vec """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
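For reference, the same encoding can be produced without scikit-learn; a minimal NumPy sketch, assuming integer labels in the range 0-9:

def one_hot_encode_np(x, n_class=10):
    # row i of the identity matrix is the one-hot vector for label i
    return np.eye(n_class)[np.asarray(x)]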
Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
Implementation of CNN with backprop in NumPy
def get_im2col_indices(x_shape, field_height, field_width, padding=1, stride=1): # First figure out what the size of the output should be N, C, H, W = x_shape assert (H + 2 * padding - field_height) % stride == 0 assert (W + 2 * padding - field_height) % stride == 0 out_height = int((H + 2 * padding - field_height) / stride + 1) out_width = int((W + 2 * padding - field_width) / stride + 1) i0 = np.repeat(np.arange(field_height), field_width) i0 = np.tile(i0, C) i1 = stride * np.repeat(np.arange(out_height), out_width) j0 = np.tile(np.arange(field_width), field_height * C) j1 = stride * np.tile(np.arange(out_width), out_height) i = i0.reshape(-1, 1) + i1.reshape(1, -1) j = j0.reshape(-1, 1) + j1.reshape(1, -1) k = np.repeat(np.arange(C), field_height * field_width).reshape(-1, 1) return (k.astype(int), i.astype(int), j.astype(int)) def im2col_indices(x, field_height, field_width, padding=1, stride=1): """ An implementation of im2col based on some fancy indexing """ # Zero-pad the input p = padding x_padded = np.pad(x, ((0, 0), (0, 0), (p, p), (p, p)), mode='constant') k, i, j = get_im2col_indices(x.shape, field_height, field_width, padding, stride) cols = x_padded[:, k, i, j] C = x.shape[1] cols = cols.transpose(1, 2, 0).reshape(field_height * field_width * C, -1) return cols def col2im_indices(cols, x_shape, field_height=3, field_width=3, padding=1, stride=1): """ An implementation of col2im based on fancy indexing and np.add.at """ N, C, H, W = x_shape H_padded, W_padded = H + 2 * padding, W + 2 * padding x_padded = np.zeros((N, C, H_padded, W_padded), dtype=cols.dtype) k, i, j = get_im2col_indices(x_shape, field_height, field_width, padding, stride) cols_reshaped = cols.reshape(C * field_height * field_width, -1, N) cols_reshaped = cols_reshaped.transpose(2, 0, 1) np.add.at(x_padded, (slice(None), k, i, j), cols_reshaped) if padding == 0: return x_padded return x_padded[:, :, padding:-padding, padding:-padding] def conv_forward(X, W, b, stride=1, padding=1): cache = W, b, stride, padding n_filters, d_filter, h_filter, w_filter = W.shape n_x, d_x, h_x, w_x = X.shape h_out = (h_x - h_filter + 2 * padding) / stride + 1 w_out = (w_x - w_filter + 2 * padding) / stride + 1 if not h_out.is_integer() or not w_out.is_integer(): raise Exception('Invalid output dimension!') h_out, w_out = int(h_out), int(w_out) X_col = im2col_indices(X, h_filter, w_filter, padding=padding, stride=stride) W_col = W.reshape(n_filters, -1) out = W_col @ X_col + b out = out.reshape(n_filters, h_out, w_out, n_x) out = out.transpose(3, 0, 1, 2) cache = (X, W, b, stride, padding, X_col) return out, cache def conv_backward(dout, cache): X, W, b, stride, padding, X_col = cache n_filter, d_filter, h_filter, w_filter = W.shape db = np.sum(dout, axis=(0, 2, 3)) db = db.reshape(n_filter, -1) dout_reshaped = dout.transpose(1, 2, 3, 0).reshape(n_filter, -1) dW = dout_reshaped @ X_col.T dW = dW.reshape(W.shape) W_reshape = W.reshape(n_filter, -1) dX_col = W_reshape.T @ dout_reshaped dX = col2im_indices(dX_col, X.shape, h_filter, w_filter, padding=padding, stride=stride) return dX, dW, db # Now it is time to calculate the error using cross entropy def cross_entropy(y_pred, y_train): m = y_pred.shape[0] prob = softmax(y_pred) log_like = -np.log(prob[range(m), y_train]) data_loss = np.sum(log_like) / m # reg_loss = regularization(model, reg_type='l2', lam=lam) return data_loss # + reg_loss def dcross_entropy(y_pred, y_train): m = y_pred.shape[0] grad_y = softmax(y_pred) grad_y[range(m), y_train] -= 1. 
grad_y /= m return grad_y # Softmax and sidmoid are equally based on Bayesian NBC/ Naiive Bayesian Classifer as a probability-based classifier def softmax(X): eX = np.exp((X.T - np.max(X, axis=1)).T) return (eX.T / eX.sum(axis=1)).T def dsoftmax(X, sX): # derivative of the softmax which is the same as sigmoid as softmax is sigmoid and bayesian function for probabilistic classfication # X is the input to the softmax and sX is the sX=softmax(X) grad = np.zeros(shape=(len(sX[0]), len(X[0]))) # Start filling up the gradient for i in range(len(sX[0])): # mat_1xn, n=num_claess, 10 in this case for j in range(len(X[0])): if i==j: grad[i, j] = (sX[0, i] * (1-sX[0, i])) else: grad[i, j] = (-sX[0, i]* sX[0, j]) # return the gradient as the derivative of softmax/bwd softmax layer return grad def sigmoid(X): return 1. / (1 + np.exp(-X)) def dsigmoid(X): return sigmoid(X) * (1-sigmoid(X)) def squared_loss(y_pred, y_train): m = y_pred.shape[0] data_loss = (0.5/m) * np.sum(y_pred - y_train)**2 # This is now convex error surface x^2 return data_loss #+ reg_loss def dsquared_loss(y_pred, y_train): m = y_pred.shape[0] grad_y = (y_pred - y_train)/m # f(x)-y is the convex surface for descending/minimizing return grad_y from sklearn.utils import shuffle as sklearn_shuffle def get_minibatch(X, y, minibatch_size, shuffle=True): minibatches = [] if shuffle: X, y = sklearn_shuffle(X, y) for i in range(0, X.shape[0], minibatch_size): X_mini = X[i:i + minibatch_size] y_mini = y[i:i + minibatch_size] minibatches.append((X_mini, y_mini)) return minibatches
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
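A quick shape sanity check on the layers defined above, using small hypothetical sizes (the *_chk names are illustrative only):

# tiny random batch: 2 images, 3 channels, 8x8 pixels (NCHW)
X_chk = np.random.randn(2, 3, 8, 8)
W_chk = 0.1 * np.random.randn(4, 3, 3, 3)   # 4 filters of size 3x3 over 3 channels
b_chk = np.zeros((4, 1))

out_chk, cache_chk = conv_forward(X_chk, W_chk, b_chk, stride=1, padding=1)
print(out_chk.shape)   # expect (2, 4, 8, 8): stride 1 with padding 1 preserves H and W

dX_chk, dW_chk, db_chk = conv_backward(np.ones_like(out_chk), cache_chk)
print(dX_chk.shape, dW_chk.shape, db_chk.shape)   # should match X_chk, W_chk, b_chk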
This is where the CNN implementation in NumPy starts!
# Displaying an image using matplotlib # importing the library/package import matplotlib.pyplot as plot # Using plot with imshow to show the image (N=5000, H=32, W=32, C=3) plot.imshow(valid_features[0, :, :, :]) # # Training cycle # for epoch in range(num_): # # Loop over all batches # n_batches = 5 # for batch_i in range(1, n_batches + 1): # for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): # train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) # print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') # print_stats(sess, batch_features, batch_labels, cost, accuracy) # # input and output dataset X=valid_features.transpose(0, 3, 1, 2) # NCHW == mat_txn Y=valid_labels #NH= num_classes=10 = mat_txn #for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): # train_features, train_labels = helper.load_preprocess_training_batch(batch_id=, batch_size=) # Initilizting the parameters # Convolutional layer # Suppose we have 20 of 3x3 filter: 20x1x3x3. W_col will be 20x9 matrix # Let this be 3x3 convolution with stride = 1 and padding = 1 h_filter=3 w_filter=3 c_filter=3 padding=1 stride=1 num_filters = 20 w1 = np.random.normal(loc=0.0, scale=1.0, size=(num_filters, c_filter, h_filter, w_filter))# NCHW 20x9 x 9x500 = 20x500 w1 = w1/(c_filter* h_filter* w_filter) # taking average from them or average running for initialization. b1 = np.zeros(shape=(num_filters, 1), dtype=float) # FC layer to the output layer -- This is really hard to have a final size for the FC to the output layer # num_classes = y[0, 1] # txn w2 = np.random.normal(loc=0.0, scale=1.0, size=Y[0:1].shape) # This will be resized though b2 = np.zeros(shape=Y[0:1].shape) # number of output nodes/units/neurons are equal to the number of classes # Initializing hyper parameters num_epochs = 200 ## minibatch_size = 512 # This will eventually used for stochstic or random minibatch from the whole batch batch_size = X.shape[0]//1 #NCHW, N= number of samples or t error_list = [] # to display the plot or plot the error curve/ learning rate # Training loops for epochs and updating params for epoch in range(num_epochs): # start=0, stop=num_epochs, step=1 # Initializing/reseting the gradients dw1 = np.zeros(shape=w1.shape) db1 = np.zeros(shape=b1.shape) dw2 = np.zeros(shape=w2.shape) db2 = np.zeros(shape=b2.shape) err = 0 # # Shuffling the entire batch for a minibatch # # Stochastic part for randomizing/shuffling through the dataset in every single epoch # minibatches = get_minibatch(X=X, y=Y, minibatch_size=batch_size, shuffle=True) # X_mini, Y_mini = minibatches[0] # The loop for learning the gradients for t in range(batch_size): # start=0, stop=mini_batch_size/batch_size, step=1 # input and output each sample in the batch/minibatch for updating the gradients/d_params/delta_params x= X[t:t+1] # mat_nxcxhxw y= Y[t:t+1] # mat_txm # print("inputs:", x.shape, y.shape) # Forward pass # start with the convolution layer forward h1_in, h1_cache = conv_forward(X=x, W=w1, b=b1, stride=1, padding=1) h1_out = h1_in * 1 # activation func. 
= LU #h1_out = np.maximum(h1_in, 0) # ReLU for avoiding the very high ERROR in classification # print("Convolution layer:", h1_out) # Connect the flattened layer to the output layer/visible layer FC layer h1_fc = h1_out.reshape(1, -1) # initializing w2 knowing the size/given the size of fc layer if t==0: w2 = (1/h1_fc.shape[1]) * np.resize(a=w2, new_shape=(h1_fc.shape[1], y.shape[1])) # mat_hxm # initialization out = h1_fc @ w2 + b2 y_prob = softmax(X=out) # can also be sigmoid/logistic function/Bayesina/ NBC # print("Output layer: ", out, y_prob, y) # Mean Square Error: Calculate the error one by one sample from the batch -- Euclidean distance err += 0.5 * (1/ batch_size) * np.sum((y_prob - y)**2) # convex surface ax2+b dy = (1/ batch_size) * (y_prob - y) # convex surface this way # ignoring the constant coefficies # print("error:", dy, err) # # Mean Cross Entropy Error: np.log is np.log(exp(x))=x equals to ln in math # err += (1/batch_size) * -(np.sum(y* np.log(y_prob))) # dy = (1/batch_size) * -(y/ y_prob) # y_prop= 0-1, log(y_prob)==-inf-0 # # print("Error:", dy, err) # Backward pass # output layer gradient dout = dy @ dsoftmax(X=out, sX=y_prob).T if t==0: dw2 = np.resize(a=dw2, new_shape=w2.shape) dw2 += h1_fc.T @ dout # mat_hx1 @ mat_1xm = mat_hxm db2 += dout # mat_1xm dh1_fc = dout @ w2.T # mat_1xm @ mat_mxh # convolution layer back dh1_out = dh1_fc.reshape(h1_out.shape) # dh1[h1_out<=0] = 0 #drelu dh1 = dh1_out * 1 # derivative of the LU in bwd pass/prop dX_conv, dW_conv, db_conv = conv_backward(cache=h1_cache, dout=dh1) dw1 += dW_conv db1 += db_conv # Updating the params in the model/cnn in ech epoch w1 -= dw1 b1 -= db1 w2 -= dw2 b2 -= db2 # displaying the total error and accuracy print("Epoch:", epoch, "Error:", err) error_list.append(err) # Ploting the error list for the learning rate plot.plot(error_list) error_list_MCE = error_list plot.plot(error_list_MCE) error_list_MSE = error_list plot.plot(error_list_MSE)
udacity-dl/CNN/cnn_bp-learning-curves.ipynb
arasdar/DL
unlicense
The Correlation Function The 2-point correlation function $\xi(\theta)$ is defined as "the probability of finding two galaxies separated by an angular distance $\theta$ with respect to that expected for a random distribution" (Peebles 1980), and is an excellent summary statistic for quantifying the clustering of galaxies. The simplest possible estimator for this excess probability is just $\hat{\xi}(\theta) = \frac{DD - RR}{RR}$, where $DD(\theta) = N_{\rm pairs}(\theta) / \left[ N_D(N_D-1)/2 \right]$. Here, $N_D$ is the total number of galaxies in the dataset, and $N_{\rm pairs}(\theta)$ is the number of galaxy pairs with separation lying in a bin centered on $\theta$. $RR(\theta)$ is the same quantity computed in a "random catalog," covering the same field of view but with uniformly randomly distributed positions. We'll use Mike Jarvis' TreeCorr code (Jarvis et al 2004) to compute this correlation function estimator efficiently. You can read more about better estimators starting from the TreeCorr wiki.
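The estimator itself is just arithmetic on per-bin pair counts. As a rough sketch (the arrays n_data_pairs, n_random_pairs and the catalog sizes N_D, N_R are hypothetical placeholders, not variables defined in this notebook):

# hypothetical per-bin pair-count histograms and catalog sizes
DD = n_data_pairs / (N_D * (N_D - 1) / 2.0)    # normalized data-data pair counts
RR = n_random_pairs / (N_R * (N_R - 1) / 2.0)  # normalized random-random pair counts
xi_hat = DD / RR - 1.0                         # same as (DD - RR) / RR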
# !pip install --upgrade TreeCorr
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
Random Catalogs First we'll need a random catalog. Let's make it the same size as the data one.
random = pd.DataFrame({'ra' : ramin + (ramax-ramin)*np.random.rand(Ngals), 'dec' : decmin + (decmax-decmin)*np.random.rand(Ngals)}) print len(random), type(random)
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
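One aside, not part of the original analysis: drawing Dec. uniformly in degrees slightly over-weights high declinations, because sky area goes as cos(Dec.). For wide fields, a uniform-on-the-sphere random catalog can be drawn by sampling sin(Dec.) uniformly; a sketch reusing the field limits from above:

# sample sin(dec) uniformly so that points are uniform on the celestial sphere
sin_dec = np.random.uniform(np.sin(np.radians(decmin)), np.sin(np.radians(decmax)), Ngals)
random_sphere = pd.DataFrame({'ra' : ramin + (ramax - ramin)*np.random.rand(Ngals),
                              'dec': np.degrees(np.arcsin(sin_dec))})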
Now let's plot both catalogs, and compare.
fig, ax = plt.subplots(nrows=1, ncols=2) fig.set_size_inches(15, 6) plt.subplots_adjust(wspace=0.2) random.plot(kind='scatter', x='ra', y='dec', ax=ax[0], title='Random') ax[0].set_xlabel('RA / deg') ax[0].set_ylabel('Dec. / deg') data.plot(kind='scatter', x='ra', y='dec', ax=ax[1], title='Data') ax[1].set_xlabel('RA / deg') ax[1].set_ylabel('Dec. / deg')
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
Estimating $\xi(\theta)$
import treecorr random_cat = treecorr.Catalog(ra=random['ra'], dec=random['dec'], ra_units='deg', dec_units='deg') data_cat = treecorr.Catalog(ra=data['ra'], dec=data['dec'], ra_units='deg', dec_units='deg') # Set up some correlation function estimator objects: sep_units='arcmin' min_sep=0.5 max_sep=10.0 N = 7 bin_size = np.log10(1.0*max_sep/min_sep)/(1.0*N) dd = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size) rr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep, sep_units=sep_units, bin_slop=0.05/bin_size) # Process the data: dd.process(data_cat) rr.process(random_cat) # Combine into a correlation function and its variance: xi, varxi = dd.calculateXi(rr) plt.figure(figsize=(15,8)) plt.rc('xtick', labelsize=16) plt.rc('ytick', labelsize=16) plt.errorbar(np.exp(dd.logr),xi,np.sqrt(varxi),c='blue',linewidth=2) # plt.xscale('log') plt.xlabel('$\\theta / {\\rm arcmin}$',fontsize=20) plt.ylabel('$\\xi(\\theta)$',fontsize=20) plt.ylim([-0.1,0.2]) plt.grid(True)
examples/SDSScatalog/CorrFunc.ipynb
hdesmond/StatisticalMethods
gpl-2.0
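The plot above uses the simple (DD - RR)/RR estimator. TreeCorr can also form the lower-variance Landy-Szalay estimator if a data-random cross-correlation is supplied; a sketch is below (note that the exact calculateXi signature varies between TreeCorr versions, so keyword arguments may be required):

# data-random cross pair counts for the Landy-Szalay estimator
dr = treecorr.NNCorrelation(bin_size=bin_size, min_sep=min_sep, max_sep=max_sep,
                            sep_units=sep_units, bin_slop=0.05/bin_size)
dr.process(data_cat, random_cat)
xi_ls, varxi_ls = dd.calculateXi(rr, dr)   # (DD - 2DR + RR) / RR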
Custom training with tf.distribute.Strategy <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/distribute/custom_training"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/custom_training.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/distribute/custom_training.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/distribute/custom_training.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a> </td> </table> This tutorial demonstrates how to use tf.distribute.Strategy with custom training loops. We will train a simple CNN model on the Fashion-MNIST dataset. The Fashion-MNIST dataset contains 60,000 training images of size 28 x 28 and 10,000 test images of size 28 x 28. We use a custom training loop to train the model because it gives us flexibility and finer control during training, and it also makes it easier to debug the model and the training loop.
# Import TensorFlow import tensorflow as tf # Helper libraries import numpy as np import os print(tf.__version__)
site/zh-cn/tutorials/distribute/custom_training.ipynb
tensorflow/docs-l10n
apache-2.0
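As a quick preview of the pattern the tutorial builds up (a sketch assuming a default MirroredStrategy, not the tutorial's exact code): a strategy object is created once, and the model and optimizer are then constructed inside its scope so their variables are replicated across devices.

# create a strategy that mirrors variables across available GPUs (falls back to CPU if none)
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))

with strategy.scope():
    # variables created here (model weights, optimizer slots) are replicated per device
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.Adam()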