To be able to process our data in the notebook environment and explore the PCollections, we will use the interactive runner. We create the pipeline object in the usual manner, but pass in InteractiveRunner() as the runner.
p = beam.Pipeline(InteractiveRunner())
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now we're ready to start processing our data! We first apply our ReadWordsFromText transform to read in the lines of text from Google Cloud Storage and parse them into individual words.
words = p | 'ReadWordsFromText' >> ReadWordsFromText('gs://apache-beam-samples/shakespeare/kinglear.txt')
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now we will see some capabilities of the interactive runner. First we can use ib.show to view the contents of a specific PCollection from any point of our pipeline.
ib.show(words)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Great! We see that we have 28,001 words in our PCollection, and we can browse them. We can also view the current DAG for our pipeline by using the ib.show_graph() method. Note that here we pass in the pipeline object rather than a PCollection.
ib.show_graph(p)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In the above graph, the rectangular boxes correspond to PTransforms and the circles correspond to PCollections. Next we will add a simple schema to our PCollection and convert the PCollection into a dataframe using the to_dataframe method.
word_rows = words | 'ToRows' >> beam.Map(lambda word: beam.Row(word=word)) df = to_dataframe(word_rows)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We can now explore our PCollection as a Pandas-like dataframe! One of the first things many data scientists do as soon as they load data into a dataframe is explore the first few rows of data using the head method. Let's see what happens here.
df.head()
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Notice that we got a very specific type of error! The WontImplementError is for Pandas methods that will not be implemented for Beam dataframes because they violate the Beam model in one way or another. In this case, the head method depends on the order of the dataframe, but PCollections are unordered, so the method cannot be supported. Our goal, however, is to count the number of times each word appears in the ingested text. First we will add a new column in our dataframe named count with a value of 1 for all rows. After that, we will group by the value of the word column and apply the sum method to the count field.
df['count'] = 1 counted = df.groupby('word').sum()
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
That's it! It looks exactly like the code one would write when using Pandas. However, what does this look like in the DAG for the pipeline? We can see this by executing ib.show_graph(p) as before.
ib.show_graph(p)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We can see that the dataframe manipulations added a new PTransform to our pipeline. Let us convert the dataframe back to a PCollection so we can use ib.show to view the contents.
word_counts = to_pcollection(counted, include_indexes=True) ib.show(word_counts)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Great! We can now see that the words have been successfully counted. Finally, let us build a sink into the pipeline. We can do this in two ways: if we wish to write to a CSV file, we can use the dataframe's to_csv method, or we can use the WriteToText transform after converting back to a PCollection. Let's do both and explore the outputs.
counted.to_csv('from_df.csv') _ = word_counts | beam.io.WriteToText('from_pcoll.csv') ib.show_graph(p)
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Note that we can see the branching into two different sinks, as well as where the dataframe is converted back to a PCollection. We can run our entire pipeline by using p.run() as normal.
p.run()
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let us now look at the beginning of the CSV files by running the head command (via the ! shell escape) to compare them.
!head from_df* !head from_pcoll*
courses/dataflow/demos/beam_notebooks/beam_notebooks_demo.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Next, import the relevant modules:
- Hint: finds the corresponding bio-entity representation used in BioThings Explorer based on user input (which could be any database ID, symbol, or name)
- FindConnection: finds intermediate bio-entities which connect the user-specified input and output
from biothings_explorer.hint import Hint from biothings_explorer.user_query_dispatcher import FindConnection import nest_asyncio nest_asyncio.apply()
jupyter notebooks/Multi intermediate nodes query.ipynb
biothings/biothings_explorer
apache-2.0
Step 1: Find representation of "Anisindione" in BTE
In this step, BioThings Explorer translates our query string "Anisindione" into BioThings objects, which contain mappings to many common identifiers. Generally, the top result returned by the Hint module will be the correct item, but you should confirm that using the identifiers shown. Search terms can correspond to any child of BiologicalEntity from the Biolink Model, including DiseaseOrPhenotypicFeature (e.g., "lupus"), ChemicalSubstance (e.g., "acetaminophen"), Gene (e.g., "CDK2"), BiologicalProcess (e.g., "T cell differentiation"), and Pathway (e.g., "Citric acid cycle").
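As an aside (not part of the original notebook), the same Hint pattern could be used to look up any of the entity types listed above; the sketch below assumes the result dictionary exposes a 'Gene' key in the same way the next cell uses 'ChemicalSubstance', and the top hit should still be confirmed against its identifiers.

```python
# Hypothetical sketch: look up a Gene with the same Hint API pattern that the
# next cell uses for the chemical "Anisindione". The 'Gene' key and the [0]
# (top hit) indexing mirror that example and are assumptions, not verified here.
ht = Hint()
cdk2 = ht.query("CDK2")['Gene'][0]
cdk2
```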
ht = Hint() anisindione = ht.query("Anisindione")['ChemicalSubstance'][0] anisindione
jupyter notebooks/Multi intermediate nodes query.ipynb
biothings/biothings_explorer
apache-2.0
Step 2: Find phenotypes that are associated with Anisindione through Gene and DiseaseOrPhenotypicFeature as intermediate nodes
In this section, we find all paths in the knowledge graph that connect Anisindione to any entity that is a phenotypic feature. To do that, we will use FindConnection. This class is a convenient wrapper around two advanced functions for query path planning and query path execution. More advanced features for both query path planning and query path execution are in development and will be documented in the coming months.
fc = FindConnection(input_obj=anisindione, output_obj='PhenotypicFeature', intermediate_nodes=['Gene', 'Disease']) fc.connect(verbose=True) df = fc.display_table_view() df.head()
jupyter notebooks/Multi intermediate nodes query.ipynb
biothings/biothings_explorer
apache-2.0
The df object contains the full output from BioThings Explorer. Each row shows one path that joins the input node (ANISINDIONE) to an intermediate node (a gene or protein), then to another intermediate node (a DiseaseOrPhenotypicFeature), and finally to an ending node (a PhenotypicFeature). The data frame includes a set of columns with additional details on each node and edge (including human-readable labels, identifiers, and sources). Let's remove all rows where the output_name (the phenotype label) is None, and focus specifically on paths with mechanistic predicates; in the filter below these are physically_interacts_with and prevents.
dfFilt = df.loc[df['output_name'].notnull()].query('pred1 == "physically_interacts_with" and pred2 == "prevents"') dfFilt
jupyter notebooks/Multi intermediate nodes query.ipynb
biothings/biothings_explorer
apache-2.0
This cell creates a CSV (comma-separated values) file called hw1data.csv in the current working directory.
%%file hw1data.csv id,sex,weight 1,M,190 2,F,120 3,F,110 4,M,150 5,O,120 6,M,120 7,F,140
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Basic
def double(x): """ double the value x """ assert double(10) == 20 def apply_to_100(f): """ runs some abitrary function f on the value 100 and returns the output """ assert(apply_to_100(double) == 200) """ create a an anonymous function using lambda that takes some value x and adds 1 to x """ add_one = lambda x: x assert apply_to_100(add_one) == 101 def get_up_to_first_three_elements(l): """ get up to the first three elements in list l """ return assert get_up_to_first_three_elements([1,2,3,4]) == [1,2,3] assert get_up_to_first_three_elements([1,2]) == [1,2] assert get_up_to_first_three_elements([1]) == [1] assert get_up_to_first_three_elements([]) == [] def caesar_cipher(s, key): """ https://www.hackerrank.com/challenges/caesar-cipher-1 Given an unencrypted string s and an encryption key (an integer), compute the caesar cipher. Basically just shift each letter by the value of key. A becomes C if key = 2. This is case sensitive. What is a Caesar Cipher? https://en.wikipedia.org/wiki/Caesar_cipher Hint: ord function https://docs.python.org/2/library/functions.html#ord Hint: chr function https://docs.python.org/2/library/functions.html#chr print(ord('A'), ord('Z'), ord('a'), ord('z')) print(chr(65), chr(90), chr(97), chr(122)) """ new_s = [] for c in s: if ord('A') <= ord(c) <= ord('Z'): new_c = chr(ord('A') + (ord(c) - ord('A') + 2) % 26) new_s.append(new_c) elif ord('a') <= ord(c) <= ord('z'): new_c = chr(ord('a') + (ord(c) - ord('a') + 2) % 26) new_s.append(new_c) else: new_s.append(c) return "".join(new_s) assert caesar_cipher("middle-Outz", 2) == "okffng-Qwvb"
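One detail worth flagging in the exercise code above: the caesar_cipher body shifts by a hard-coded 2 rather than by the key argument its docstring describes (the assert only exercises key=2, so it still passes). A minimal sketch of a version parameterized by key:

```python
# Sketch of caesar_cipher that shifts by `key` instead of a hard-coded 2.
def caesar_cipher(s, key):
    new_s = []
    for c in s:
        if 'A' <= c <= 'Z':
            new_s.append(chr(ord('A') + (ord(c) - ord('A') + key) % 26))
        elif 'a' <= c <= 'z':
            new_s.append(chr(ord('a') + (ord(c) - ord('a') + key) % 26))
        else:
            new_s.append(c)
    return "".join(new_s)

assert caesar_cipher("middle-Outz", 2) == "okffng-Qwvb"
assert caesar_cipher("abc", 3) == "def"
```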
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Working with Files
def create_list_of_lines_in_hw1data(): """ Read each line of hw1data.csv into a list and return the list of lines. Remove the newline character ("\n") at the end of each line. What is a newline character? https://en.wikipedia.org/wiki/Newline Hint: Reading a File (https://docs.python.org/3/tutorial/inputoutput.html#methods-of-file-objects) """ with open("hw1data.csv", "r") as f: return [line.strip() for line in f] # lines = f.read().splitlines() # alternative 1 # lines = [line.strip() for line in f.readlines()] # altenative 2 assert create_list_of_lines_in_hw1data() == [ "id,sex,weight", "1,M,190", "2,F,120", "3,F,110", "4,M,150", "5,O,120", "6,M,120", "7,F,140", ] def filter_to_lines_with_just_M(): """ Read each line in like last time except filter down to only the rows with "M" in them. Hint: Filter using List Comprehensions (http://www.diveintopython.net/power_of_introspection/filtering_lists.html) """ lines = create_list_of_lines_in_hw1data() return [line for line in lines ] assert filter_to_lines_with_just_M() == ["1,M,190", "4,M,150", "6,M,120"] def filter_to_lines_with_just_F(): """ Read each line in like last time except filter down to only the rows with "F" in them. """ lines = create_list_of_lines_in_hw1data() return [line for line in lines ] assert filter_to_lines_with_just_F() == ["2,F,120", "3,F,110", "7,F,140"] def filter_to_lines_with_any_sex(sex): """ Read each line in like last time except filter down to only the rows with "M" in them. """ lines = create_list_of_lines_in_hw1data() return [line for line in lines ] assert filter_to_lines_with_any_sex("O") == ["5,O,120"] def get_average_weight(): """ This time instead of just reading the file, parse the csv using csv.reader. get the average weight of all people rounded to the hundredth place Hint: https://docs.python.org/3/library/csv.html#csv.reader """ weights = [] with open("hw1data.csv", "r") as f: reader = csv.reader(f) next(reader) for row in reader: print(int(row[2])) return round(avg_weight, 2) assert get_average_weight() == 135.71 def create_list_of_dicts_in_hw1data(): """ create list of dicts for each line in the hw1data (except the header) """ with open("hw1data.csv", "r") as f: return [] assert create_list_of_dicts_in_hw1data() == [ {"id": "1", "sex": "M", "weight": "190"}, {"id": "2", "sex": "F", "weight": "120"}, {"id": "3", "sex": "F", "weight": "110"}, {"id": "4", "sex": "M", "weight": "150"}, {"id": "5", "sex": "O", "weight": "120"}, {"id": "6", "sex": "M", "weight": "120"}, {"id": "7", "sex": "F", "weight": "140"} ]
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Project Euler
def sum_of_multiples_of_three_and_five_below_1000(): """ https://projecteuler.net/problem=1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. Hint: Modulo Operator (https://docs.python.org/3/reference/expressions.html#binary-arithmetic-operations) Hint: List Comprehension (https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions) Hint: Range Function (https://docs.python.org/3/library/functions.html#func-range) """ return def sum_of_even_fibonacci_under_4million(): """ https://projecteuler.net/problem=2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ... By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms. Hint: While Loops (http://learnpythonthehardway.org/book/ex33.html) """ the_sum = 0 a, b = 1, 2 while b < 4000000: return the_sum def test_all(): assert sum_of_multiples_of_three_and_five_below_1000() == 233168 assert sum_of_even_fibonacci_under_4million() == 4613732 test_all()
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Strings
from collections import Counter def remove_punctuation(s): """remove periods, commas, and semicolons """ return s.replace() def tokenize(s): """return a list of lowercased tokens (words) in a string without punctuation """ return remove_punctuation(s.lower()) def word_count(s): """count the number of times each word (lowercased) appears and return a dictionary """ return Counter(words) def test_all(): test_string1 = "A quick brown Al, jumps over the lazy dog; sometimes..." test_string2 = "This this is a sentence sentence with words multiple multiple times." # ---------------------------------------------------------------------------------- # test_punctuation1 = "A quick brown Al jumps over the lazy dog sometimes" test_punctuation2 = "This this is a sentence sentence with words multiple multiple times" assert remove_punctuation(test_string1) == test_punctuation1 assert remove_punctuation(test_string2) == test_punctuation2 # ---------------------------------------------------------------------------------- # test_tokens1 = ["a", "quick", "brown", "al", "jumps", "over", "the", "lazy", "dog", "sometimes"] test_tokens2 = [ "this", "this", "is", "a", "sentence", "sentence", "with", "words", "multiple", "multiple", "times" ] assert tokenize(test_string1) == test_tokens1 assert tokenize(test_string2) == test_tokens2 # ---------------------------------------------------------------------------------- # test_wordcount1 = { "a": 1, "quick": 1, "brown": 1, "al": 1, "jumps": 1, "over": 1, "the": 1, "lazy": 1, "dog": 1, "sometimes": 1 } test_wordcount2 = {"this": 2, "is": 1, "a": 1, "sentence": 2, "with": 1, "words": 1, "multiple": 2, "times": 1} assert word_count(test_string1) == test_wordcount1 assert word_count(test_string2) == test_wordcount2 test_all()
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Linear Algebra
Please find the following empty functions and write the code to complete the logic. These functions are focused on implementing vector algebra operations. The vectors can be of any length. If a function accepts two vectors, assume they are the same length. Khan Academy has a decent introduction: https://www.khanacademy.org/math/linear-algebra/vectors_and_spaces/vectors/v/vector-introduction-linear-algebra
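Before the exercise skeleton below, here is a minimal reference sketch (an illustration, not the homework's solution file) of how a few of these componentwise operations can be written with zip(); the function names match the skeleton, everything else is assumed.

```python
# Minimal sketch of a few of the vector operations described above,
# using zip() for componentwise work; assumes plain Python lists.
import math

def vector_add(v, w):
    # componentwise sum: [4, 5, 1] + [9, 8, 1] -> [13, 13, 2]
    return [v_i + w_i for v_i, w_i in zip(v, w)]

def dot(v, w):
    # v_1 * w_1 + ... + v_n * w_n
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def magnitude(v):
    # Euclidean norm: square root of the sum of squared components
    return math.sqrt(dot(v, v))

assert vector_add([4, 5, 1], [9, 8, 1]) == [13, 13, 2]
assert dot([4, 5, 1], [9, 8, 1]) == 77
```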
def vector_add(v, w): """adds two vectors componentwise and returns the result hint: use zip() v + w = [4, 5, 1] + [9, 8, 1] = [13, 13, 2] """ return [] def vector_subtract(v, w): """subtracts two vectors componentwise and returns the result hint use zip() v + w = [4, 5, 1] - [9, 8, 1] = [-5, -3, 0] """ return [] def vector_sum(vectors): """sums a list of vectors or arbitrary length and returns the resulting vector [[1,2], [4,5], [8,3]] = [13,10] """ v_copy = list(vectors) result = v_copy.pop() for v in v_copy: result = return result def scalar_multiply(c, v): """returns a vector where components are multplied by c""" return [] def dot(v, w): """dot product v.w v_1 * w_1 + ... + v_n * w_n""" return sum() def sum_of_squares(v): """ v.v square each component and sum them v_1 * v_1 + ... + v_n * v_n""" return def magnitude(v): """the Norm of a vector, the sqrt of the sum of the squares of the components""" return math.sqrt() def distance(v, w): """ the distance of v to w""" return def cross_product(v, w): # or outer_product(v, w) """Bonus: The outer/cross product of v and w""" for i in v: yield scalar_multiply(i, w) def test_all(): test_v = [4, 5, 1] test_w = [9, 8, 1] list_v = [[1,2], [4,5], [8,3]] print("Vector Add", test_v, test_w, vector_add(test_v, test_w)) print("Vector Subtract", test_v, test_w, vector_subtract(test_v, test_w)) print("Vector Sum", list_v, vector_sum(list_v)) print("Scalar Multiply", 3, test_w, scalar_multiply(3, test_w)) print("Dot", test_v, test_w, dot(test_v, test_w)) print("Sum of Squares", test_v, sum_of_squares(test_v)) print("Magnitude", test_v, magnitude(test_v)) print("Distance", test_v, test_w, distance(test_v, test_w)) print("Cross Product", list(cross_product(test_v, test_w))) assert vector_add(test_v, test_w) == [13, 13, 2] assert vector_subtract(test_v, test_w) == [-5, -3, 0] assert vector_sum(list_v) == [13,10] assert scalar_multiply(3, test_w) == [27, 24, 3] assert dot(test_v, test_w) == 77 assert sum_of_squares(test_v) == 42 assert magnitude(test_v) == 6.48074069840786 assert distance(test_v, test_w) == 5.830951894845301 assert list(cross_product(test_v, test_w)) == [[36, 32, 4], [45, 40, 5], [9, 8, 1]] test_all()
homework/homework1.ipynb
AlJohri/DAT-DC-12
mit
Run a simulation
nregions = [fp.Region(0,1,1),fp.Region(2,3,1)] sregions = [fp.ExpS(1,2,1,-0.1),fp.ExpS(1,2,0.01,0.001)] rregions = [fp.Region(0,3,1)] rng = fp.GSLrng(101) popsizes = np.array([1000],dtype=np.uint32) popsizes=np.tile(popsizes,10000) #Initialize a vector with 1 population of size N = 1,000 pops=fp.SpopVec(1,1000) #This sampler object will record selected mutation #frequencies over time. A sampler gets the length #of pops as a constructor argument because you #need a different sampler object in memory for #each population. sampler=fp.FreqSampler(len(pops)) #Record mutation frequencies every generation #The function evolve_regions sampler takes any #of fwdpy's temporal samplers and applies them. #For users familiar with C++, custom samplers will be written, #and we plan to allow for custom samplers to be written primarily #using Cython, but we are still experimenting with how best to do so. rawTraj=fp.evolve_regions_sampler(rng,pops,sampler, popsizes[0:],0.001,0.001,0.001, nregions,sregions,rregions, #The one means we sample every generation. 1) rawTraj = [i for i in sampler] #This example has only 1 set of trajectories, so let's make a variable for thet #single replicate traj=rawTraj[0] print traj.head() print traj.tail() print traj.freq.max()
docs/examples/trajectories.ipynb
molpopgen/fwdpy
gpl-3.0
Group mutation trajectories by position and effect size
Max mutation frequencies
mfreq = traj.groupby(['pos','esize']).max().reset_index() #Print out info for all mutations that hit a frequency of 1 (e.g., fixed) mfreq[mfreq['freq']==1]
docs/examples/trajectories.ipynb
molpopgen/fwdpy
gpl-3.0
The only fixation has an 'esize' $> 0$, which means that it was positively selected.
Frequency trajectory of fixations
#Get positions of mutations that hit q = 1 mpos=mfreq[mfreq['freq']==1]['pos'] #Frequency trajectories of fixations fig = plt.figure() ax = plt.subplot(111) plt.xlabel("Time (generations)") plt.ylabel("Mutation frequency") ax.set_xlim(traj['generation'].min(),traj['generation'].max()) for i in mpos: plt.plot(traj[traj['pos']==i]['generation'],traj[traj['pos']==i]['freq']) #Let's get histogram of effect sizes for all mutations that did not fix fig = plt.figure() ax = plt.subplot(111) plt.xlabel(r'$s$ (selection coefficient)') plt.ylabel("Number of mutations") mfreq[mfreq['freq']<1.0]['esize'].hist()
docs/examples/trajectories.ipynb
molpopgen/fwdpy
gpl-3.0
Input Parameter
# Discretization c1=30 # Number of grid points per dominant wavelength c2=0.2 # CFL-Number nx=300 # Number of grid points in X ny=300 # Number of grid points in Y T=1 # Total propagation time # Source Signal f0= 5 # Center frequency Ricker-wavelet q0= 100 # Maximum amplitude Ricker-Wavelet xscr = 150 # Source position (in grid points) in X yscr = 150 # Source position (in grid points) in Y # Receiver xrec1=150; yrec1=120; # Position Reciever 1 (in grid points) xrec2=150; yrec2=150; # Position Reciever 2 (in grid points) xrec3=150; yrec3=180;# Position Reciever 3 (in grid points) # Velocity and density modell_v = 3000*np.ones((ny,nx)) rho=2.2*np.ones((ny,nx))
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Preparation
# Init wavefields vx=np.zeros(shape = (ny,nx)) vy=np.zeros(shape = (ny,nx)) p=np.zeros(shape = (ny,nx)) vx_x=np.zeros(shape = (ny,nx)) vy_y=np.zeros(shape = (ny,nx)) p_x=np.zeros(shape = (ny,nx)) p_y=np.zeros(shape = (ny,nx)) # Calculate first Lame-Paramter l=rho * modell_v * modell_v cmin=min(modell_v.flatten()) # Lowest P-wave velocity cmax=max(modell_v.flatten()) # Highest P-wave velocity fmax=2*f0 # Maximum frequency dx=cmin/(fmax*c1) # Spatial discretization (in m) dy=dx # Spatial discretization (in m) dt=dx/(cmax)*c2 # Temporal discretization (in s) lampda_min=cmin/fmax # Smallest wavelength # Output model parameter: print("Model size: x:",dx*nx,"in m, y:",dy*ny,"in m") print("Temporal discretization: ",dt," s") print("Spatial discretization: ",dx," m") print("Number of gridpoints per minimum wavelength: ",lampda_min/dx)
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Create space and time vector
x=np.arange(0,dx*nx,dx) # Space vector in X y=np.arange(0,dy*ny,dy) # Space vector in Y t=np.arange(0,T,dt) # Time vector nt=np.size(t) # Number of time steps # Plotting model fig, (ax1, ax2) = plt.subplots(1, 2) fig.subplots_adjust(wspace=0.4,right=1.6) ax1.plot(x,modell_v) ax1.set_ylabel('VP in m/s') ax1.set_xlabel('Depth in m') ax1.set_title('P-wave velocity') ax2.plot(x,rho) ax2.set_ylabel('Density in g/cm^3') ax2.set_xlabel('Depth in m') ax2.set_title('Density');
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Source signal - Ricker-wavelet
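Written out, the wavelet the next cell evaluates is

$$\tau = \pi f_0 \left(t - \frac{1.5}{f_0}\right), \qquad q(t) = q_0 \left(1 - 2\tau^{2}\right) e^{-\tau^{2}},$$

with $f_0$ the center frequency and $q_0$ the maximum amplitude set in the parameter cell above.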
tau=np.pi*f0*(t-1.5/f0) q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2) # Plotting source signal plt.figure(3) plt.plot(t,q) plt.title('Source signal Ricker-Wavelet') plt.ylabel('Amplitude') plt.xlabel('Time in s') plt.draw()
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Time stepping
# Init Seismograms Seismogramm=np.zeros((3,nt)); # Three seismograms # Calculation of some coefficients i_dx=1.0/(dx) i_dy=1.0/(dy) c1=9.0/(8.0*dx) c2=1.0/(24.0*dx) c3=9.0/(8.0*dy) c4=1.0/(24.0*dy) c5=1.0/np.power(dx,3) c6=1.0/np.power(dy,3) c7=1.0/np.power(dx,2) c8=1.0/np.power(dy,2) c9=np.power(dt,3)/24.0 # Prepare slicing parameter: kxM2=slice(5-2,nx-4-2) kxM1=slice(5-1,nx-4-1) kx=slice(5,nx-4) kxP1=slice(5+1,nx-4+1) kxP2=slice(5+2,nx-4+2) kyM2=slice(5-2,ny-4-2) kyM1=slice(5-1,ny-4-1) ky=slice(5,ny-4) kyP1=slice(5+1,ny-4+1) kyP2=slice(5+2,ny-4+2) ## Time stepping print("Starting time stepping...") for n in range(2,nt): # Inject source wavelet p[yscr,xscr]=p[yscr,xscr]+q[n] # Update velocity p_x[ky,kx]=c1*(p[ky,kxP1]-p[ky,kx])-c2*(p[ky,kxP2]-p[ky,kxM1]) p_y[ky,kx]=c3*(p[kyP1,kx]-p[ky,kx])-c4*(p[kyP2,kx]-p[kyM1,kx]) vx=vx-dt/rho*p_x vy=vy-dt/rho*p_y # Update pressure vx_x[ky,kx]=c1*(vx[ky,kx]-vx[ky,kxM1])-c2*(vx[ky,kxP1]-vx[ky,kxM2]) vy_y[ky,kx]=c3*(vy[ky,kx]-vy[kyM1,kx])-c4*(vy[kyP1,kx]-vy[kyM2,kx]) p=p-l*dt*(vx_x+vy_y) # Save seismograms Seismogramm[0,n]=p[yrec1,xrec1] Seismogramm[1,n]=p[yrec2,xrec2] Seismogramm[2,n]=p[yrec3,xrec3] print("Finished time stepping!")
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Save seismograms
## Save seismograms np.save("Seismograms/FD_2D_DX4_DT2_fast",Seismogramm)
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Plotting
## Image plot fig, ax = plt.subplots(1,1) img = ax.imshow(p); ax.set_title('P-Wavefield') ax.set_xticks(range(0,nx+1,int(nx/5))) ax.set_yticks(range(0,ny+1,int(ny/5))) ax.set_xlabel('Grid-points in X') ax.set_ylabel('Grid-points in Y') fig.colorbar(img) ## Plot seismograms fig, (ax1, ax2, ax3) = plt.subplots(3, 1) fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 ) ax1.plot(t,Seismogramm[0,:]) ax1.set_title('Seismogram 1') ax1.set_ylabel('Amplitude') ax1.set_xlabel('Time in s') ax1.set_xlim(0, T) ax2.plot(t,Seismogramm[1,:]) ax2.set_title('Seismogram 2') ax2.set_ylabel('Amplitude') ax2.set_xlabel('Time in s') ax2.set_xlim(0, T) ax3.plot(t,Seismogramm[2,:]) ax3.set_title('Seismogram 3') ax3.set_ylabel('Amplitude') ax3.set_xlabel('Time in s') ax3.set_xlim(0, T);
JupyterNotebook/2D/FD_2D_DX4_DT2_fast.ipynb
florianwittkamp/FD_ACOUSTIC
gpl-3.0
Import the digits dataset (http://scikit-learn.org/stable/auto_examples/datasets/plot_digits_last_image.html) and show its attributes
from sklearn.datasets import load_digits digits = load_digits() X_digits, y_digits = digits.data, digits.target print digits.keys()
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
marcelomiky/PythonCodes
mit
Let's see what the digits look like...
n_row, n_col = 2, 5 def print_digits(images, y, max_n=10): # set up the figure size in inches fig = plt.figure(figsize=(2. * n_col, 2.26 * n_row)) i=0 while i < max_n and i < images.shape[0]: p = fig.add_subplot(n_row, n_col, i + 1, xticks=[], yticks=[]) p.imshow(images[i], cmap=plt.cm.bone, interpolation='nearest') # label the image with the target value p.text(0, -1, str(y[i])) i = i + 1 print_digits(digits.images, digits.target, max_n=10)
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
marcelomiky/PythonCodes
mit
Now, let's define a function that will plot a scatter of the two-dimensional points obtained by a PCA transformation. Our data points will also be colored according to their classes. Recall that the target class will not be used to perform the transformation; we want to investigate whether the distribution after PCA reveals the distribution of the different classes, and whether they are clearly separable. We will use ten different colors, one for each of the digits from 0 to 9.
Find components and plot first and second components
def plot_pca_scatter(): colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray'] for i in xrange(len(colors)): px = X_pca[:, 0][y_digits == i] py = X_pca[:, 1][y_digits == i] plt.scatter(px, py, c=colors[i]) plt.legend(digits.target_names) plt.xlabel('First Principal Component') plt.ylabel('Second Principal Component')
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
marcelomiky/PythonCodes
mit
At this point, we are ready to perform the PCA transformation. In scikit-learn, PCA is implemented as a transformer object that learns n components through the fit method and can be used on new data to project it onto these components. scikit-learn has various classes that implement different kinds of PCA decompositions; in our case, we will work with the PCA class from the sklearn.decomposition module. The most important parameter we can change is n_components, which allows us to specify the number of features that the obtained instances will have.
from sklearn.decomposition import PCA n_components = n_row * n_col # 10 estimator = PCA(n_components=n_components) X_pca = estimator.fit_transform(X_digits) plot_pca_scatter() # Note that we only plot the first and second principal component
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
marcelomiky/PythonCodes
mit
To finish, let us look at principal component transformations. We will take the principal components from the estimator by accessing the components attribute. Each of its components is a matrix that is used to transform a vector from the original space to the transformed space. In the scatter we previously plotted, we only took into account the first two components.
def print_pca_components(images, n_col, n_row): plt.figure(figsize=(2. * n_col, 2.26 * n_row)) for i, comp in enumerate(images): plt.subplot(n_row, n_col, i + 1) plt.imshow(comp.reshape((8, 8)), interpolation='nearest') plt.text(0, -1, str(i + 1) + '-component') plt.xticks(()) plt.yticks(()) print_pca_components(estimator.components_[:n_components], n_col, n_row)
scikit-learn/scikit-learn-book/Chapter 3 - Unsupervised Learning - Principal Component Analysis.ipynb
marcelomiky/PythonCodes
mit
The BERT repo uses Tensorflow 1 and thus a few of the functions have been moved/changed/renamed in Tensorflow 2. In order for the BERT tokenizer to be used, one of the lines in the repo that was just cloned needs to be modified to comply with Tensorflow 2. Line 125 in the BERT tokenization.py file must be changed as follows: From => with tf.gfile.GFile(vocab_file, "r") as reader: To => with tf.io.gfile.GFile(vocab_file, "r") as reader: Once that is complete and the file is saved, the tokenization library can be imported.
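If you prefer to apply the edit described above non-interactively, a one-line substitution works as well; the command below is a sketch that assumes the clone sits in a local bert/ directory (adjust the path to wherever tokenization.py actually lives) and that tf.gfile.GFile only appears on the line that needs patching.

```python
# Hypothetical sketch: patch tokenization.py in place to use the TF2 file API.
# Assumes the cloned repo is in ./bert/ -- adjust the path as needed.
!sed -i 's/tf\.gfile\.GFile/tf.io.gfile.GFile/' bert/tokenization.py
```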
import tokenization
examples/Document_representation_from_BERT.ipynb
google/patents-public-data
apache-2.0
Load BERT
MAX_SEQ_LENGTH = 512 MODEL_DIR = 'path/to/model' VOCAB = 'path/to/vocab' tokenizer = tokenization.FullTokenizer(VOCAB, do_lower_case=True) model = tf.compat.v2.saved_model.load(export_dir=MODEL_DIR, tags=['serve']) model = model.signatures['serving_default'] # Mean pooling layer for combining pooling = tf.keras.layers.GlobalAveragePooling1D()
examples/Document_representation_from_BERT.ipynb
google/patents-public-data
apache-2.0
Get a couple of Patents
Here we run a simple query against the BigQuery patents data to collect the claims for a sample set of patents.
# Put your publications here. test_pubs = ( 'US-8000000-B2', 'US-2007186831-A1', 'US-2009030261-A1', 'US-10722718-B2' ) js = r""" // Regex to find the separations of the claims data var pattern = new RegExp(/[.][\\s]+[0-9]+[\\s]*[.]/, 'g'); if (pattern.test(text)) { return text.split(pattern); } """ query = r''' #standardSQL CREATE TEMPORARY FUNCTION breakout_claims(text STRING) RETURNS ARRAY<STRING> LANGUAGE js AS """ {} """; SELECT pubs.publication_number, title.text as title, breakout_claims(claims.text) as claims FROM `patents-public-data.patents.publications` as pubs, UNNEST(claims_localized) as claims, UNNEST(title_localized) as title WHERE publication_number in {} '''.format(js, test_pubs) df = bq_client.query(query).to_dataframe() df.head() def get_bert_token_input(texts): input_ids = [] input_mask = [] segment_ids = [] for text in texts: tokens = tokenizer.tokenize(text) if len(tokens) > MAX_SEQ_LENGTH - 2: tokens = tokens[0:(MAX_SEQ_LENGTH - 2)] tokens = ['[CLS]'] + tokens + ['[SEP]'] ids = tokenizer.convert_tokens_to_ids(tokens) token_pad = MAX_SEQ_LENGTH - len(ids) input_mask.append([1] * len(ids) + [0] * token_pad) input_ids.append(ids + [0] * token_pad) segment_ids.append([0] * MAX_SEQ_LENGTH) return { 'segment_ids': tf.convert_to_tensor(segment_ids, dtype=tf.int64), 'input_mask': tf.convert_to_tensor(input_mask, dtype=tf.int64), 'input_ids': tf.convert_to_tensor(input_ids, dtype=tf.int64), 'mlm_positions': tf.convert_to_tensor([], dtype=tf.int64) } docs_embeddings = [] for _, row in df.iterrows(): inputs = get_bert_token_input(row['claims']) response = model(**inputs) avg_embeddings = pooling( tf.reshape(response['encoder_layer'], shape=[1, -1, 1024])) docs_embeddings.append(avg_embeddings.numpy()[0]) pairwise.cosine_similarity(docs_embeddings) docs_embeddings[0].shape
examples/Document_representation_from_BERT.ipynb
google/patents-public-data
apache-2.0
3. Enter DV360 Report To Sheets Recipe Parameters
Specify either the report name or the report id to move a report. The most recent valid file will be moved to the sheet. Modify the values below for your use case (this can be done multiple times), then click play.
FIELDS = { 'auth_read':'user', # Credentials used for reading data. 'report_id':'', # DV360 report ID given in UI, not needed if name used. 'report_name':'', # Name of report, not needed if ID used. 'sheet':'', # Full URL to sheet being written to. 'tab':'', # Existing tab in sheet to write to. } print("Parameters Set To: %s" % FIELDS)
colabs/dbm_to_sheets.ipynb
google/starthinker
apache-2.0
4. Execute DV360 Report To Sheets
This does NOT need to be modified unless you are changing the recipe; just click play.
from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'dbm':{ 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}}, 'report':{ 'report_id':{'field':{'name':'report_id','kind':'integer','order':1,'default':'','description':'DV360 report ID given in UI, not needed if name used.'}}, 'name':{'field':{'name':'report_name','kind':'string','order':2,'default':'','description':'Name of report, not needed if ID used.'}} }, 'out':{ 'sheets':{ 'sheet':{'field':{'name':'sheet','kind':'string','order':3,'default':'','description':'Full URL to sheet being written to.'}}, 'tab':{'field':{'name':'tab','kind':'string','order':4,'default':'','description':'Existing tab in sheet to write to.'}}, 'range':'A1' } } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True)
colabs/dbm_to_sheets.ipynb
google/starthinker
apache-2.0
https://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html
https://docs.scipy.org/doc/scipy-1.3.0/reference/integrate.html
Integrating functions, given callable object (scipy.integrate.quad)
See:
- https://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html#general-integration-quad
- https://docs.scipy.org/doc/scipy-1.3.0/reference/generated/scipy.integrate.quad.html#scipy.integrate.quad
Example:
$$I = \int_{0}^{3} x^2 dx = \frac{1}{3} 3^3 = 9$$
f = lambda x: np.power(x, 2) result = scipy.integrate.quad(f, 0, 3) result
nb_dev_python/python_scipy_integrate.ipynb
jdhp-docs/python_notebooks
mit
The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error.
Integrating functions, given fixed samples
https://docs.scipy.org/doc/scipy-1.3.0/reference/tutorial/integrate.html#integrating-using-samples
x = np.linspace(0., 3., 100) y = f(x) plt.plot(x, y);
nb_dev_python/python_scipy_integrate.ipynb
jdhp-docs/python_notebooks
mit
In the case of arbitrarily spaced samples, the two functions trapz and simps are available.
result = scipy.integrate.simps(y, x) result result = scipy.integrate.trapz(y, x) result
nb_dev_python/python_scipy_integrate.ipynb
jdhp-docs/python_notebooks
mit
Notice that I've updated the assert to include the word "To-Do" instead of "Django". Now our test should fail. Let's check that it fails.
# First start up the server: #!python3 manage.py runserver # Run test !python3 functional_tests.py
wk9/notebooks/Ch.2-Extending our functional test using the unittest module.ipynb
ThunderShiviah/code_guild
mit
We got what is called an expected failure, which is exactly what we wanted!
Python Standard Library's unittest Module
There are a couple of little annoyances we should probably deal with. Firstly, the message "AssertionError" isn't very helpful: it would be nice if the test told us what it actually found as the browser title. Secondly, the test has left a Firefox window hanging around the desktop; it would be nice if that were cleaned up for us automatically. One option would be to use the second parameter to the assert keyword, something like: `assert 'To-Do' in browser.title, "Browser title was " + browser.title`. And we could also use a try/finally to clean up the old Firefox window. But these sorts of problems are quite common in testing, and there are some ready-made solutions for us in the standard library's unittest module. Let's use that!
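(As an aside, a minimal hypothetical sketch of the try/finally alternative mentioned above might look like the following; the chapter does not use this version and moves straight to unittest instead.)

```python
# Hypothetical sketch of the try/finally cleanup alternative; the chapter
# replaces this approach with unittest's setUp/tearDown below.
from selenium import webdriver

browser = webdriver.Firefox()
try:
    browser.get('http://localhost:8000')
    assert 'To-Do' in browser.title, "Browser title was " + browser.title
finally:
    browser.quit()  # always close the browser window, even if the assert fails
```

With that noted, in functional_tests.py: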
%%writefile functional_tests.py from selenium import webdriver import unittest class NewVisitorTest(unittest.TestCase): #1 def setUp(self): #2 self.browser = webdriver.Firefox() self.browser.implicitly_wait(3) # Wait three seconds before trying anything. def tearDown(self): #3 self.browser.quit() def test_can_start_a_list_and_retrieve_it_later(self): #4 # Edith has heard about a cool new online to-do app. She goes # to check out its homepage self.browser.get('http://localhost:8000') # She notices the page title and header mention to-do lists self.assertIn('To-Do', self.browser.title) #5 self.fail('Finish the test!') #6 # She is invited to enter a to-do item straight away # [...rest of comments as before] if __name__ == '__main__': #7 unittest.main(warnings='ignore') #8
wk9/notebooks/Ch.2-Extending our functional test using the unittest module.ipynb
ThunderShiviah/code_guild
mit
Some things to notice about our new test file:
- Tests are organised into classes, which inherit from unittest.TestCase.
- setUp and tearDown are special methods which get run before and after each test. I'm using them to start and stop our browser; note that they're a bit like a try/except, in that tearDown will run even if there's an error during the test itself.[4] No more Firefox windows left lying around!
- The main body of the test is in a method called test_can_start_a_list_and_retrieve_it_later. Any method whose name starts with test is a test method, and will be run by the test runner. You can have more than one test_ method per class. Nice descriptive names for our test methods are a good idea too.
- We use self.assertIn instead of just assert to make our test assertions. unittest provides lots of helper functions like this to make test assertions, like assertEqual, assertTrue, assertFalse, and so on. You can find more in the unittest documentation.
- self.fail just fails no matter what, producing the error message given. I'm using it as a reminder to finish the test.
- Finally, we have the if __name__ == '__main__' clause (if you've not seen it before, that's how a Python script checks if it's been executed from the command line, rather than just imported by another script). We call unittest.main(), which launches the unittest test runner, which will automatically find test classes and methods in the file and run them.
- warnings='ignore' suppresses a superfluous ResourceWarning which was being emitted at the time of writing. It may have disappeared by the time you read this; feel free to try removing it!
Running our new test
!python3 functional_tests.py
wk9/notebooks/Ch.2-Extending our functional test using the unittest module.ipynb
ThunderShiviah/code_guild
mit
Visualizing the Tracked Object Distance
The next cell visualizes the simulated data. The first visualization shows the object distance over time. You can see that the car is moving forward, although decelerating. Then the car stops for 5 seconds and drives backwards for 5 seconds.
ax1 = data_groundtruth.plot(kind='line', x='time', y='distance', title='Object Distance Versus Time') ax1.set(xlabel='time (milliseconds)', ylabel='distance (meters)')
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Visualizing Velocity Over Time The next cell outputs a visualization of the velocity over time. The tracked car starts at 100 km/h and decelerates to 0 km/h. Then the car idles and eventually starts to decelerate again until reaching -10 km/h.
ax2 = data_groundtruth.plot(kind='line', x='time', y='velocity', title='Object Velocity Versus Time') ax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)')
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Visualizing Acceleration Over Time
This cell visualizes the tracked car's acceleration. The vehicle decelerates at 10 m/s^2. Then the vehicle stops for 5 seconds and briefly accelerates again.
data_groundtruth['acceleration'] = data_groundtruth['acceleration'] * 1000 / math.pow(60 * 60, 2) ax3 = data_groundtruth.plot(kind='line', x='time', y='acceleration', title='Object Acceleration Versus Time') ax3.set(xlabel='time (milliseconds)', ylabel='acceleration (m/s^2)')
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Simulate Lidar Data The following code cell creates simulated lidar data. Lidar data is noisy, so the simulator takes ground truth measurements every 0.05 seconds and then adds random noise.
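The datagenerator module ships with the demo and is not shown here; as a rough sketch of the idea described above (an assumption about the approach, not datagenerator's actual implementation), noisy lidar readings could be produced by adding zero-mean Gaussian noise to each ground-truth distance.

```python
# Hypothetical sketch of a lidar simulator: add zero-mean Gaussian noise with
# the chosen standard deviation to each ground-truth distance sample.
import numpy as np

def simulate_lidar(distances, standard_deviation, seed=None):
    rng = np.random.default_rng(seed)
    return [d + rng.normal(0.0, standard_deviation) for d in distances]
```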
# make lidar measurements lidar_standard_deviation = 0.15 lidar_measurements = datagenerator.generate_lidar(distance_groundtruth, lidar_standard_deviation) lidar_time = time_groundtruth
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Visualize Lidar Measurements
Run the following cell to visualize the lidar measurements versus the ground truth. The ground truth is shown in red, and you can see that the lidar measurements are a bit noisy.
data_lidar = pd.DataFrame( {'time': time_groundtruth, 'distance': distance_groundtruth, 'lidar': lidar_measurements }) matplotlib.rcParams.update({'font.size': 22}) ax4 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(20, 15), alpha=0.8, title = 'Lidar Measurements Versus Ground Truth', color='red') ax5 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar measurements', ax=ax4, alpha=0.6, color='g') ax5.set(xlabel='time (milliseconds)', ylabel='distance (meters)') plt.show()
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Part 2 - Using a Kalman Filter The next part of the demonstration will use your matrix class to run a Kalman filter. This first cell initializes variables and defines a few functions. The following cell runs the Kalman filter using the lidar data.
# Kalman Filter Initialization initial_distance = 0 initial_velocity = 0 x_initial = m.Matrix([[initial_distance], [initial_velocity * 1e-3 / (60 * 60)]]) P_initial = m.Matrix([[5, 0],[0, 5]]) acceleration_variance = 50 lidar_variance = math.pow(lidar_standard_deviation, 2) H = m.Matrix([[1, 0]]) R = m.Matrix([[lidar_variance]]) I = m.identity(2) def F_matrix(delta_t): return m.Matrix([[1, delta_t], [0, 1]]) def Q_matrix(delta_t, variance): t4 = math.pow(delta_t, 4) t3 = math.pow(delta_t, 3) t2 = math.pow(delta_t, 2) return variance * m.Matrix([[(1/4)*t4, (1/2)*t3], [(1/2)*t3, t2]])
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Run the Kalman filter The next code cell runs the Kalman filter. In this demonstration, the prediction step starts with the second lidar measurement. When the first lidar signal arrives, there is no previous lidar measurement with which to calculate velocity. In other words, the Kalman filter predicts where the vehicle is going to be, but it can't make a prediction until time has passed between the first and second lidar reading. The Kalman filter has two steps: a prediction step and an update step. In the prediction step, the filter uses a motion model to figure out where the object has traveled in between sensor measurements. The update step uses the sensor measurement to adjust the belief about where the object is.
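In the notation used by the code below (x the state estimate, P its covariance, F and Q from F_matrix and Q_matrix, H the measurement matrix, R the measurement noise, and z the lidar reading), the two steps are the standard Kalman filter equations:

$$\text{Predict:}\qquad x' = F\,x, \qquad P' = F\,P\,F^{T} + Q$$

$$\text{Update:}\qquad y = z - H\,x', \qquad S = H\,P'\,H^{T} + R, \qquad K = P'\,H^{T}\,S^{-1}, \qquad x = x' + K\,y, \qquad P = (I - K\,H)\,P'$$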
# Kalman Filter Implementation x = x_initial P = P_initial x_result = [] time_result = [] v_result = [] for i in range(len(lidar_measurements) - 1): # calculate time that has passed between lidar measurements delta_t = (lidar_time[i + 1] - lidar_time[i]) / 1000.0 # Prediction Step - estimates how far the object traveled during the time interval F = F_matrix(delta_t) Q = Q_matrix(delta_t, acceleration_variance) x_prime = F * x P_prime = F * P * F.T() + Q # Measurement Update Step - updates belief based on lidar measurement y = m.Matrix([[lidar_measurements[i + 1]]]) - H * x_prime S = H * P_prime * H.T() + R K = P_prime * H.T() * S.inverse() x = x_prime + K * y P = (I - K * H) * P_prime # Store distance and velocity belief and current time x_result.append(x[0][0]) v_result.append(3600.0/1000 * x[1][0]) time_result.append(lidar_time[i+1]) result = pd.DataFrame( {'time': time_result, 'distance': x_result, 'velocity': v_result })
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Visualize the Results
The following code cell outputs a visualization of the Kalman filter. The chart contains the ground truth, the lidar measurements, and the Kalman filter belief. Notice that the Kalman filter tends to smooth out the information obtained from the lidar measurements. It turns out that using multiple sensors like radar and lidar at the same time will give even better results. Using more than one type of sensor at once is called sensor fusion, which you will learn about in the Self-Driving Car Engineer Nanodegree.
ax6 = data_lidar.plot(kind='line', x='time', y ='distance', label='ground truth', figsize=(22, 18), alpha=.3, title='Lidar versus Kalman Filter versus Ground Truth') ax7 = data_lidar.plot(kind='scatter', x ='time', y ='lidar', label='lidar sensor', ax=ax6) ax8 = result.plot(kind='scatter', x = 'time', y = 'distance', label='kalman', ax=ax7, color='r') ax8.set(xlabel='time (milliseconds)', ylabel='distance (meters)') plt.show()
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Visualize the Velocity
One of the most interesting benefits of Kalman filters is that they can give you insights into variables that you cannot directly measure. Although lidar does not directly provide velocity information, the Kalman filter can infer velocity from the lidar measurements. This visualization shows the Kalman filter velocity estimation versus the ground truth. The motion model used in this Kalman filter is relatively simple; it assumes velocity is constant and that acceleration is random noise. You can see that this motion model might be too simplistic, because the Kalman filter has trouble predicting velocity as the object decelerates.
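Concretely, the constant-velocity model behind F_matrix and Q_matrix above corresponds to the state transition

$$\begin{bmatrix} d_{k+1} \\ v_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} d_{k} \\ v_{k} \end{bmatrix} + \nu_{k},$$

where the process noise $\nu_k$ (with covariance Q) absorbs the acceleration the model does not represent; that unmodeled acceleration is exactly why the velocity estimate lags while the object decelerates.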
ax1 = data_groundtruth.plot(kind='line', x='time', y ='velocity', label='ground truth', figsize=(22, 18), alpha=.8, title='Kalman Filter versus Ground Truth Velocity') ax2 = result.plot(kind='scatter', x = 'time', y = 'velocity', label='kalman', ax=ax1, color='r') ax2.set(xlabel='time (milliseconds)', ylabel='velocity (km/h)') plt.show()
matrix/kalman_filter_demo.ipynb
jingr1/SelfDrivingCar
mit
Scrape swear word list
We scrape swear words from http://www.noswearing.com/, a community-driven list of swear words.
import string import os import requests from fake_useragent import UserAgent from lxml import html def requests_get(url): ua = UserAgent().random return requests.get(url, headers={'User-Agent': ua}) def get_swear_words(save_file='swear-words.txt'): """ Scrapes a comprehensive list of swear words from noswearing.com """ words = ['niggas'] if os.path.isfile(save_file): with open(save_file, 'rt') as f: for line in f: words.append(line.strip()) return words base_url = 'http://www.noswearing.com/dictionary/' letters = '1' + string.ascii_lowercase for letter in letters: full_url = base_url + letter result = requests_get(full_url) tree = html.fromstring(result.text) search = tree.xpath("//td[@valign='top']/a[@name and string-length(@name) != 0]") if search is None: continue for result in search: words.append(result.get('name').lower()) with open(save_file, 'wt') as f: for word in words: f.write(word) f.write('\n') return words print(get_swear_words())
notebooks/exploratory/02-raw-lyric-analysis.ipynb
tylerwmarrs/billboard-hot-100-lyric-analysis
mit
Testing TextBlob
I don't really like TextBlob: it tries to be "nice" but lacks a lot of basic functionality.
- Stop words are not included
- The tokenizer is pretty meh
- There is no built-in way to obtain word frequency
import os import operator import pandas as pd from textblob import TextBlob, WordList from nltk.corpus import stopwords def get_data_paths(): dir_path = os.path.dirname(os.path.realpath('.')) data_dir = os.path.join(dir_path, 'billboard-hot-100-data') dirs = [os.path.join(data_dir, d, 'songs.csv') for d in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, d))] return dirs def lyric_file_to_text_blob(row): """ Transform lyrics column to TextBlob instances. """ return TextBlob(row['lyrics']) def remove_stop_words(word_list): wl = WordList([]) stop_words = stopwords.words('english') for word in word_list: if word.lower() not in stop_words: wl.append(word) return wl def word_freq(words, sort='desc'): """ Returns frequency table for all words provided in the list. """ reverse = sort == 'desc' freq = {} for word in words: if word in freq: freq[word] = freq[word] + 1 else: freq[word] = 1 return sorted(freq.items(), key=operator.itemgetter(1), reverse=reverse) data_paths = corpus.raw_data_dirs() songs = corpus.load_songs(data_paths[0]) songs = pd.DataFrame.from_dict(songs) songs["lyrics"] = songs.apply(lyric_file_to_text_blob, axis=1) all_words = WordList([]) for i, row in songs.iterrows(): all_words.extend(row['lyrics'].words) cleaned_all_words = remove_stop_words(all_words) cleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words.lower()), columns=['word', 'frequency']) cleaned_all_words import pandas as pd import nltk def remove_extra_junk(word_list): words = [] remove = [",", "n't", "'m", ")", "(", "'s", "'", "]", "["] for word in word_list: if word not in remove: words.append(word) return words data_paths = corpus.raw_data_dirs() songs = corpus.load_songs(data_paths[0]) songs = pd.DataFrame.from_dict(songs) all_words = [] for i, row in songs.iterrows(): all_words.extend(nltk.tokenize.word_tokenize(row['lyrics'])) cleaned_all_words = [w.lower() for w in remove_extra_junk(remove_stop_words(all_words))] freq_dist = nltk.FreqDist(cleaned_all_words) freq_dist.plot(50) freq_dist.most_common(100) #cleaned_all_words = pd.DataFrame(word_freq(cleaned_all_words), columns=['word', 'frequency']) #cleaned_all_words
notebooks/exploratory/02-raw-lyric-analysis.ipynb
tylerwmarrs/billboard-hot-100-lyric-analysis
mit
Repetitive songs skewing data?
Some songs may be super repetitive. Let's look at a couple of songs that have a given word in the title; these songs probably repeat that word a decent amount in the lyrics. Hence, treating all lyrics as one group of text is less reliable for analyzing frequency. To simplify this process, we can look at only single-word titles. This will at least give us a general idea of whether the data could be skewed by a single song or not.
for i, song in songs.iterrows(): title = song['title'] title_words = title.split(' ') if len(title_words) > 1: continue lyrics = song['lyrics'] words = nltk.tokenize.word_tokenize(lyrics) clean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))] dist = nltk.FreqDist(clean_words) freq = dist.freq(title_words[0].lower()) if freq > .1: print(song['artist'], title)
notebooks/exploratory/02-raw-lyric-analysis.ipynb
tylerwmarrs/billboard-hot-100-lyric-analysis
mit
Seems pretty repetitive
There are a handful of single-word song titles that repeat the title within the song at least 10% of the time. This gives us a general idea that there is most likely a skew to the data. I think it is safe to assume that if a single word is repeated many times, the song is most likely repetitive. Let's look at the song "Water" by Ugly God to confirm.
song_title_to_analyze = 'Water' lyrics = songs['lyrics'].where(songs['title'] == song_title_to_analyze, '').max() print(lyrics) words = nltk.tokenize.word_tokenize(lyrics) clean_words = [w.lower() for w in remove_extra_junk(remove_stop_words(words))] water_dist = nltk.FreqDist(clean_words) water_dist.plot(25) water_dist.freq(song_title_to_analyze.lower())
notebooks/exploratory/02-raw-lyric-analysis.ipynb
tylerwmarrs/billboard-hot-100-lyric-analysis
mit
Looking at swear word distribution Let's look at the distribution of swear words...
sws = [] for sw in set(corpus.swear_words()): sws.append({'word': sw, 'dist': freq_dist.freq(sw)}) sw_df = pd.DataFrame.from_dict(sws) sw_df.nlargest(10, 'dist').plot(x='word', kind='bar')
notebooks/exploratory/02-raw-lyric-analysis.ipynb
tylerwmarrs/billboard-hot-100-lyric-analysis
mit
Create Variables The xs array is a tuple of SymPy symbolic variables, and the ys array is a PyEDA function array.
xs = sympy.symbols(",".join("x%d" % i for i in range(64))) ys = pyeda.boolalg.bfarray.exprvars('y', 64)
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Basic Boolean Functions Create a SymPy XOR function:
f = sympy.Xor(*xs[:4])
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Create a PyEDA XOR function:
g = pyeda.boolalg.expr.Xor(*ys[:4])
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
SymPy atoms method is similar to PyEDA's support property:
f.atoms() g.support
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
SymPy's subs method is similar to PyEDA's restrict method:
f.subs({xs[0]: 0, xs[1]: 1}) g.restrict({ys[0]: 0, ys[1]: 1})
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Conversion to NNF Conversion to negation normal form is also similar. One difference is that SymPy inverts the variables by applying a Not operator, but PyEDA converts inverted variables to complements (a negative literal).
sympy.to_nnf(f) type(sympy.Not(xs[0])) g.to_nnf() type(~ys[0])
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Conversion to DNF Conversion to disjunctive normal form, on the other hand, has some differences. With only four input variables, SymPy takes a couple seconds to do the calculation. The output is large, with unsimplified values and redundant clauses.
sympy.to_dnf(f)
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
PyEDA's DNF conversion is minimal:
g.to_dnf()
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
It's a little hard to do an apples-to-apples comparison, because 1) SymPy is pure Python and 2) the algorithms are probably different. The simplify_logic function actually looks better for comparison:
from sympy.logic import simplify_logic simplify_logic(f) simplify_logic(f)
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Running this experiment from N=2 to N=6 shows that PyEDA's runtime grows significantly slower.
import numpy as np import matplotlib.pyplot as plt %matplotlib inline N = 5 sympy_times = (.000485, .000957, .00202, .00426, .0103) pyeda_times = (.0000609, .000104, .000147, .00027, .000451) ind = np.arange(N) # the x locations for the groups width = 0.35 # the width of the bars fig, ax = plt.subplots() rects1 = ax.bar(ind, sympy_times, width, color='r') rects2 = ax.bar(ind + width, pyeda_times, width, color='y') # add some text for labels, title and axes ticks ax.set_ylabel('Time (s)') ax.set_title('SymPy vs. PyEDA: Xor(x[0], x[1], ..., x[n-1]) to DNF') ax.set_xticks(ind + width) ax.set_xticklabels(('N=2', 'N=3', 'N=4', 'N=5', 'N=6')) ax.legend((rects1[0], rects2[0]), ('SymPy', 'PyEDA')) plt.show()
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Going a bit further, things get worse. These numbers are from my laptop:

| N  | sympy   | pyeda    | ratio  |
|----|---------|----------|--------|
| 2  | .000485 | .0000609 | 7.96   |
| 3  | .000957 | .000104  | 9.20   |
| 4  | .00202  | .000147  | 13.74  |
| 5  | .00426  | .00027   | 15.78  |
| 6  | .0103   | .000451  | 22.84  |
| 7  | .0231   | .000761  | 30.35  |
| 8  | .0623   | .00144   | 43.26  |
| 9  | .162    | .00389   | 41.65  |
| 10 | .565    | .00477   | 118.45 |
| 11 | 1.78    | .012     | 148.33 |
| 12 | 6.46    | .0309    | 209.06 |

Simplification
SymPy supports some obvious simplifications, but PyEDA supports more. Here are a few examples.
sympy.Equivalent(xs[0], xs[1], 0) pyeda.boolalg.expr.Equal(ys[0], ys[1], 0) sympy.ITE(xs[0], 0, xs[1]) pyeda.boolalg.expr.ITE(ys[0], 0, ys[1]) sympy.Or(xs[0], sympy.Or(xs[1], xs[2])) pyeda.boolalg.expr.Or(ys[0], pyeda.boolalg.expr.Or(ys[1], ys[2])) sympy.Xor(xs[0], sympy.Not(sympy.Xor(xs[1], xs[2]))) pyeda.boolalg.expr.Xor(ys[0], pyeda.boolalg.expr.Xnor(ys[1], ys[2]))
ipynb/SymPy_Comparison.ipynb
karissa/pyeda
bsd-2-clause
Vertex SDK: Submit a HyperParameter tuning training job with TensorFlow
Installation
Install the latest (preview) version of Vertex SDK.
! pip3 install -U google-cloud-aiplatform --user
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Set up variables Next, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex SDK Import the Vertex SDK into our Python environment.
import os
import sys
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
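Later cells reference names such as PROJECT_ID, REGION, TIMESTAMP, BUCKET_NAME, PARENT, and a clients dictionary whose definitions live in setup cells omitted here. A minimal sketch of that setup is shown below; all values are placeholders you would replace with your own, and the bucket is assumed to exist already.

```
# Placeholder values: substitute your own project and bucket (assumptions, not part of the original cells)
PROJECT_ID = "your-project-id"
REGION = "us-central1"
TIMESTAMP = time.strftime("%Y%m%d%H%M%S")
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP   # assumes this bucket exists, e.g. created with: gsutil mb -l $REGION gs://$BUCKET_NAME
PARENT = "projects/{}/locations/{}".format(PROJECT_ID, REGION)

# One gapic client per service; later cells use clients["job"]
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
clients = {"job": aip.JobServiceClient(client_options={"api_endpoint": API_ENDPOINT})}
```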
Prepare a trainer script Package assembly
# Make folder for python training script ! rm -rf custom ! mkdir custom # Add package information ! touch custom/README.md setup_cfg = "[egg_info]\n\ tag_build =\n\ tag_date = 0" ! echo "$setup_cfg" > custom/setup.cfg setup_py = "import setuptools\n\ # Requires TensorFlow Datasets\n\ setuptools.setup(\n\ install_requires=[\n\ 'tensorflow_datasets==1.3.0',\n\ ],\n\ packages=setuptools.find_packages())" ! echo "$setup_py" > custom/setup.py pkg_info = "Metadata-Version: 1.0\n\ Name: Hyperparameter Tuning - Boston Housing\n\ Version: 0.0.0\n\ Summary: Demonstration hyperparameter tuning script\n\ Home-page: www.google.com\n\ Author: Google\n\ Author-email: [email protected]\n\ License: Public\n\ Description: Demo\n\ Platform: Vertex AI" ! echo "$pkg_info" > custom/PKG-INFO # Make the training subfolder ! mkdir custom/trainer ! touch custom/trainer/__init__.py
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
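A quick, optional sanity check that the package skeleton was assembled as expected:

```
# List the files that will be packaged into the source distribution
! find custom -type f | sort
```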
Task.py contents
%%writefile custom/trainer/task.py # hyperparameter tuningfor Boston Housing import tensorflow_datasets as tfds import tensorflow as tf from tensorflow.python.client import device_lib from hypertune import HyperTune import numpy as np import argparse import os import sys tfds.disable_progress_bar() parser = argparse.ArgumentParser() parser.add_argument('--model-dir', dest='model_dir', default='/tmp/saved_model', type=str, help='Model dir.') parser.add_argument('--lr', dest='lr', default=0.001, type=float, help='Learning rate.') parser.add_argument('--units', dest='units', default=64, type=int, help='Number of units.') parser.add_argument('--epochs', dest='epochs', default=20, type=int, help='Number of epochs.') parser.add_argument('--param-file', dest='param_file', default='/tmp/param.txt', type=str, help='Output file for parameters') args = parser.parse_args() print('Python Version = {}'.format(sys.version)) print('TensorFlow Version = {}'.format(tf.__version__)) print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found'))) def make_dataset(): # Scaling Boston Housing data features def scale(feature): max = np.max(feature) feature = (feature / max).astype(np.float) return feature, max (x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data( path="boston_housing.npz", test_split=0.2, seed=113 ) params = [] for _ in range(13): x_train[_], max = scale(x_train[_]) x_test[_], _ = scale(x_test[_]) params.append(max) # store the normalization (max) value for each feature with tf.io.gfile.GFile(args.param_file, 'w') as f: f.write(str(params)) return (x_train, y_train), (x_test, y_test) # Build the Keras model def build_and_compile_dnn_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(args.units, activation='relu', input_shape=(13,)), tf.keras.layers.Dense(args.units, activation='relu'), tf.keras.layers.Dense(1, activation='linear') ]) model.compile( loss='mse', optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr)) return model model = build_and_compile_dnn_model() # Instantiate the HyperTune reporting object hpt = HyperTune() # Reporting callback class HPTCallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): global hpt hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='val_loss', metric_value=logs['val_loss'], global_step=epoch) # Train the model BATCH_SIZE = 16 (x_train, y_train), (x_test, y_test) = make_dataset() model.fit(x_train, y_train, epochs=args.epochs, batch_size=BATCH_SIZE, validation_split=0.1, callbacks=[HPTCallback()]) model.save(args.model_dir)
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
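Before submitting the tuning job, it can be useful to smoke-test the trainer locally. The sketch below is optional and assumes tensorflow, tensorflow-datasets, and cloudml-hypertune (the package that provides the hypertune module) are installed in the notebook environment.

```
# Optional local smoke test: a couple of quick epochs, writing to /tmp
! pip3 install -q cloudml-hypertune
! cd custom && python3 -m trainer.task --epochs=2 --units=32 --model-dir=/tmp/smoke_test
```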
Store training script on your Cloud Storage bucket
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz gs://$BUCKET_NAME/hpt_boston_housing.tar.gz
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model projects.locations.hyperparameterTuningJob.create Request
JOB_NAME = "hyperparameter_tuning_" + TIMESTAMP WORKER_POOL_SPEC = [ { "replica_count": 1, "machine_spec": {"machine_type": "n1-standard-4", "accelerator_count": 0}, "python_package_spec": { "executor_image_uri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest", "package_uris": ["gs://" + BUCKET_NAME + "/hpt_boston_housing.tar.gz"], "python_module": "trainer.task", "args": ["--model-dir=" + "gs://{}/{}".format(BUCKET_NAME, JOB_NAME)], }, } ] STUDY_SPEC = { "metrics": [ {"metric_id": "val_loss", "goal": aip.StudySpec.MetricSpec.GoalType.MINIMIZE} ], "parameters": [ { "parameter_id": "lr", "discrete_value_spec": {"values": [0.001, 0.01, 0.1]}, "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE, }, { "parameter_id": "units", "integer_value_spec": {"min_value": 32, "max_value": 256}, "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LINEAR_SCALE, }, ], "algorithm": aip.StudySpec.Algorithm.RANDOM_SEARCH, } hyperparameter_tuning_job = aip.HyperparameterTuningJob( display_name=JOB_NAME, trial_job_spec={"worker_pool_specs": WORKER_POOL_SPEC}, study_spec=STUDY_SPEC, max_trial_count=6, parallel_trial_count=1, ) print( MessageToJson( aip.CreateHyperparameterTuningJobRequest( parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job ).__dict__["_pb"] ) )
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
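The STUDY_SPEC above uses a discrete spec for the learning rate and an integer spec for the layer width; other parameter types follow the same pattern. As a sketch (not used in this tutorial, and the "momentum" name is purely illustrative), a continuous range could be expressed with a double_value_spec:

```
# Hypothetical continuous hyperparameter, searched on a log scale
momentum_param = {
    "parameter_id": "momentum",
    "double_value_spec": {"min_value": 1e-4, "max_value": 1e-1},
    "scale_type": aip.StudySpec.ParameterSpec.ScaleType.UNIT_LOG_SCALE,
}
# It could be appended to STUDY_SPEC["parameters"] before building the job.
```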
Example output: { "parent": "projects/migration-ucaip-training/locations/us-central1", "hyperparameterTuningJob": { "displayName": "hyperparameter_tuning_20210226020029", "studySpec": { "metrics": [ { "metricId": "val_loss", "goal": "MINIMIZE" } ], "parameters": [ { "parameterId": "lr", "discreteValueSpec": { "values": [ 0.001, 0.01, 0.1 ] }, "scaleType": "UNIT_LINEAR_SCALE" }, { "parameterId": "units", "integerValueSpec": { "minValue": "32", "maxValue": "256" }, "scaleType": "UNIT_LINEAR_SCALE" } ], "algorithm": "RANDOM_SEARCH" }, "maxTrialCount": 6, "parallelTrialCount": 1, "trialJobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "pythonPackageSpec": { "executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest", "packageUris": [ "gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029" ] } } ] } } } Call
request = clients["job"].create_hyperparameter_tuning_job( parent=PARENT, hyperparameter_tuning_job=hyperparameter_tuning_job )
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752", "displayName": "hyperparameter_tuning_20210226020029", "studySpec": { "metrics": [ { "metricId": "val_loss", "goal": "MINIMIZE" } ], "parameters": [ { "parameterId": "lr", "discreteValueSpec": { "values": [ 0.001, 0.01, 0.1 ] }, "scaleType": "UNIT_LINEAR_SCALE" }, { "parameterId": "units", "integerValueSpec": { "minValue": "32", "maxValue": "256" }, "scaleType": "UNIT_LINEAR_SCALE" } ], "algorithm": "RANDOM_SEARCH" }, "maxTrialCount": 6, "parallelTrialCount": 1, "trialJobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "diskSpec": { "bootDiskType": "pd-ssd", "bootDiskSizeGb": 100 }, "pythonPackageSpec": { "executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest", "packageUris": [ "gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029" ] } } ] }, "state": "JOB_STATE_PENDING", "createTime": "2021-02-26T02:02:02.787187Z", "updateTime": "2021-02-26T02:02:02.787187Z" }
# The full unique ID for the hyperparameter tuning job
hyperparameter_tuning_id = request.name
# The short numeric ID for the hyperparameter tuning job
hyperparameter_tuning_short_id = hyperparameter_tuning_id.split("/")[-1]

print(hyperparameter_tuning_id)
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
projects.locations.hyperparameterTuningJob.get Call
request = clients["job"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: { "name": "projects/116273516712/locations/us-central1/hyperparameterTuningJobs/5264408897233354752", "displayName": "hyperparameter_tuning_20210226020029", "studySpec": { "metrics": [ { "metricId": "val_loss", "goal": "MINIMIZE" } ], "parameters": [ { "parameterId": "lr", "discreteValueSpec": { "values": [ 0.001, 0.01, 0.1 ] }, "scaleType": "UNIT_LINEAR_SCALE" }, { "parameterId": "units", "integerValueSpec": { "minValue": "32", "maxValue": "256" }, "scaleType": "UNIT_LINEAR_SCALE" } ], "algorithm": "RANDOM_SEARCH" }, "maxTrialCount": 6, "parallelTrialCount": 1, "trialJobSpec": { "workerPoolSpecs": [ { "machineSpec": { "machineType": "n1-standard-4" }, "replicaCount": "1", "diskSpec": { "bootDiskType": "pd-ssd", "bootDiskSizeGb": 100 }, "pythonPackageSpec": { "executorImageUri": "gcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest", "packageUris": [ "gs://migration-ucaip-trainingaip-20210226020029/hpt_boston_housing.tar.gz" ], "pythonModule": "trainer.task", "args": [ "--model-dir=gs://migration-ucaip-trainingaip-20210226020029/hyperparameter_tuning_20210226020029" ] } } ] }, "state": "JOB_STATE_PENDING", "createTime": "2021-02-26T02:02:02.787187Z", "updateTime": "2021-02-26T02:02:02.787187Z" } Wait for the study to complete
while True:
    response = clients["job"].get_hyperparameter_tuning_job(
        name=hyperparameter_tuning_id
    )
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Study trials have not completed:", response.state)
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            break
    else:
        print("Study trials have completed:", response.end_time - response.start_time)
        break
    time.sleep(20)
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Review the results of the study
best = (None, None, None, float("inf"))
response = clients["job"].get_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
for trial in response.trials:
    print(MessageToJson(trial.__dict__["_pb"]))
    # Keep track of the best outcome (lowest val_loss, since the study goal is MINIMIZE)
    try:
        if float(trial.final_measurement.metrics[0].value) < best[3]:
            best = (
                trial.id,
                float(trial.parameters[0].value),
                float(trial.parameters[1].value),
                float(trial.final_measurement.metrics[0].value),
            )
    except:
        pass

print()
print("ID", best[0])
print("Learning Rate (lr)", best[1])
print("Units", best[2])
print("Validation Loss", best[3])
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: ``` { "id": "1", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.1 }, { "parameterId": "units", "value": 80.0 } ], "finalMeasurement": { "stepCount": "19", "metrics": [ { "metricId": "val_loss", "value": 46.61515110294993 } ] }, "startTime": "2021-02-26T02:05:16.935353384Z", "endTime": "2021-02-26T02:12:44Z" } { "id": "2", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.01 }, { "parameterId": "units", "value": 45.0 } ], "finalMeasurement": { "stepCount": "19", "metrics": [ { "metricId": "val_loss", "value": 32.55313952376203 } ] }, "startTime": "2021-02-26T02:15:31.357856840Z", "endTime": "2021-02-26T02:24:18Z" } { "id": "3", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.1 }, { "parameterId": "units", "value": 70.0 } ], "finalMeasurement": { "stepCount": "19", "metrics": [ { "metricId": "val_loss", "value": 42.709188321741614 } ] }, "startTime": "2021-02-26T02:26:40.704476222Z", "endTime": "2021-02-26T02:34:21Z" } { "id": "4", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.01 }, { "parameterId": "units", "value": 173.0 } ], "finalMeasurement": { "stepCount": "17", "metrics": [ { "metricId": "val_loss", "value": 46.12480219399057 } ] }, "startTime": "2021-02-26T02:37:45.275581053Z", "endTime": "2021-02-26T02:51:07Z" } { "id": "5", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.01 }, { "parameterId": "units", "value": 223.0 } ], "finalMeasurement": { "stepCount": "19", "metrics": [ { "metricId": "val_loss", "value": 24.875632611716664 } ] }, "startTime": "2021-02-26T02:53:32.612612421Z", "endTime": "2021-02-26T02:54:19Z" } { "id": "6", "state": "SUCCEEDED", "parameters": [ { "parameterId": "lr", "value": 0.1 }, { "parameterId": "units", "value": 123.0 } ], "finalMeasurement": { "stepCount": "13", "metrics": [ { "metricId": "val_loss", "value": 43.352300690441595 } ] }, "startTime": "2021-02-26T02:56:47.323707459Z", "endTime": "2021-02-26T03:03:49Z" } ID 1 Decay 0.1 Learning Rate 80.0 Validation Accuracy 46.61515110294993 ``` Cleaning up To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial.
delete_hpt_job = True
delete_bucket = True

# Delete the hyperparameter tuning job using its Vertex AI fully qualified identifier
try:
    if delete_hpt_job:
        clients["job"].delete_hyperparameter_tuning_job(name=hyperparameter_tuning_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r gs://$BUCKET_NAME
notebooks/community/migration/UJ11 HyperParameter Tuning Training Job with TensorFlow.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Download the dataset of characters 'A' to 'J' rendered in various fonts as 28x28 images. There is a training set of about 500k images and a test set of about 19,000 images.
url = "http://yaroslavvb.com/upload/notMNIST/" data_path = outputer.setup_directory("notMNIST") def maybe_download(path, filename, expected_bytes): """Download a file if not present, and make sure it's the right size.""" file_path = os.path.join(path, filename) if not os.path.exists(file_path): file_path, _ = urllib.request.urlretrieve(url + filename, file_path) statinfo = os.stat(file_path) if statinfo.st_size == expected_bytes: print("Found", file_path, "with correct size.") else: raise Exception("Error downloading " + filename) return file_path train_filename = maybe_download(data_path, "notMNIST_large.tar.gz", 247336696) test_filename = maybe_download(data_path, "notMNIST_small.tar.gz", 8458043)
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
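The download cell above, and the cells that follow, rely on imports, constants, and a repo-local outputer helper defined earlier in the notebook. The sketch below is an assumed reconstruction of that setup for running these cells standalone; the constant values and the outputer stand-in are assumptions, not the repository's original code.

```
import os
import gzip
import pickle
import tarfile
import urllib.request

import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
from IPython.display import Image

image_size = 28      # pixels per side (assumed, matches the 28x28 images)
pixel_depth = 255.0  # number of grey levels (assumed)
skip_list = []       # image files that failed to load

# Stand-in for the repo's outputer.setup_directory helper (assumption)
class outputer:
    @staticmethod
    def setup_directory(name):
        os.makedirs(name, exist_ok=True)
        return name
```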
Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
def extract(filename, root, class_count): # remove path and .tar.gz dir_name = os.path.splitext(os.path.splitext(os.path.basename(filename))[0])[0] path = os.path.join(root, dir_name) print("Extracting", filename, "to", path) tar = tarfile.open(filename) tar.extractall(path=root) tar.close() data_folders = [os.path.join(path, d) for d in sorted(os.listdir(path))] if len(data_folders) != class_count: raise Exception("Expected %d folders, one per class. Found %d instead." % (class_count, len(data_folders))) print(data_folders) return data_folders train_folders = [] test_folders = [] for name in os.listdir(data_path): path = os.path.join(data_path, name) target = None print("Checking", path) if path.endswith("_small"): target = test_folders elif path.endswith("_large"): target = train_folders if target is not None: target.extend([os.path.join(path, name) for name in os.listdir(path)]) print("Found", target) expected_classes = 10 if len(train_folders) < expected_classes: train_folders = extract(train_filename, data_path, expected_classes) if len(test_folders) < expected_classes: test_folders = extract(test_filename, data_path, expected_classes)
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
Inspect Data Verify that the images contain rendered glyphs.
Image(filename="notMNIST/notMNIST_small/A/MDEtMDEtMDAudHRm.png") Image(filename="notMNIST/notMNIST_large/A/a2F6b28udHRm.png") Image(filename="notMNIST/notMNIST_large/C/ZXVyb2Z1cmVuY2UgaXRhbGljLnR0Zg==.png") # This I is all white Image(filename="notMNIST/notMNIST_small/I/SVRDIEZyYW5rbGluIEdvdGhpYyBEZW1pLnBmYg==.png")
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
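Beyond spot-checking individual files, a random sample per class gives a faster visual check. This is a sketch that assumes the matplotlib setup above and the extracted test_folders list; it uses plt.imread to keep the example self-contained.

```
import random

fig, axes = plt.subplots(1, 10, figsize=(15, 2))
for ax, folder in zip(axes, sorted(test_folders)):
    # Show one randomly chosen glyph from each class folder
    fname = random.choice(os.listdir(folder))
    ax.imshow(plt.imread(os.path.join(folder, fname)), cmap='gray')
    ax.set_title(os.path.basename(folder))
    ax.axis('off')
plt.show()
```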
Convert the data into an array of normalized grayscale floating point images, and an array of classification labels. Unreadable images are skipped.
def normalize_separator(path): return path.replace("\\", "/") def load(data_folders, set_id, min_count, max_count): # Create arrays large enough for maximum expected data. dataset = np.ndarray(shape=(max_count, image_size, image_size), dtype=np.float32) labels = np.ndarray(shape=(max_count), dtype=np.int32) label_index = 0 image_index = 0 solid_blacks = [] solid_whites = [] for folder in sorted(data_folders): print(folder) for image in os.listdir(folder): if image_index >= max_count: raise Exception("More than %d images!" % (max_count,)) image_file = os.path.join(folder, image) if normalize_separator(image_file) in skip_list: continue try: raw_data = ndimage.imread(image_file) # Keep track of images a that are solid white or solid black. if np.all(raw_data == 0): solid_blacks.append(image_file) if np.all(raw_data == int(pixel_depth)): solid_whites.append(image_file) # Convert to float and normalize. image_data = (raw_data.astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception("Unexpected image shape: %s" % str(image_data.shape)) # Capture the image data and label. dataset[image_index, :, :] = image_data labels[image_index] = label_index image_index += 1 except IOError as e: skip_list.append(normalize_separator(image_file)) print("Could not read:", image_file, ':', e, "skipping.") label_index += 1 image_count = image_index # Trim down to just the used portion of the arrays. dataset = dataset[0:image_count, :, :] labels = labels[0:image_count] if image_count < min_count: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print("Input data shape:", dataset.shape) print("Mean of all normalized pixels:", np.mean(dataset)) print("Standard deviation of normalized pixels:", np.std(dataset)) print('Labels shape:', labels.shape) print("Found", len(solid_whites), "solid white images, and", len(solid_blacks), "solid black images.") return dataset, labels train_dataset, train_labels = load(train_folders, "train", 450000, 550000) test_dataset, test_labels = load(test_folders, 'test', 18000, 20000) skip_list
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
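The normalization inside load maps raw byte values into roughly the range [-0.5, 0.5]; a tiny worked example:

```
# With pixel_depth = 255: 0 -> -0.5, 128 -> ~0.002, 255 -> 0.5
raw = np.array([0, 128, 255], dtype=float)
print((raw - pixel_depth / 2) / pixel_depth)
```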
Verify Processed Data
exemplar = plt.imshow(train_dataset[0]) train_labels[0] exemplar = plt.imshow(train_dataset[373]) train_labels[373] exemplar = plt.imshow(test_dataset[18169]) test_labels[18169] exemplar = plt.imshow(train_dataset[-9]) train_labels[-9]
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
Compress and Store Data
pickle_file = 'notMNIST/full.pickle' try: f = gzip.open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'test_dataset': test_dataset, 'test_labels': test_labels } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size)
notMNIST_setup.ipynb
ponderousmad/pyndent
mit
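To read the data back later, the same gzip and pickle pair works in reverse; a short sketch:

```
with gzip.open('notMNIST/full.pickle', 'rb') as f:
    save = pickle.load(f)

train_dataset = save['train_dataset']
train_labels = save['train_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
print(train_dataset.shape, test_dataset.shape)
```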
Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
- Properly handle local maxima at the endpoints of the input array.
- Return a Numpy array of integer indices.
- Handle any Python iterable as input.
np.array(range(5)).max() list(range(1,5)) find_peaks([2,0,1,0,2,0,1]) def find_peaks(a): """Find the indices of the local maxima in a sequence.""" b=[] c=np.array(a) if c[0]>c[1]: b.append(0) for i in range(1,len(c)-1): if c[i]>c[i-1] and c[i]>c[i+1]: b.append(i) if c[len(c)-1]>c[len(c)-2]: b.append(len(c)-1) return b p1 = find_peaks([2,0,1,0,2,0,1]) assert np.allclose(p1, np.array([0,2,4,6])) p2 = find_peaks(np.array([0,1,2,3])) assert np.allclose(p2, np.array([3])) p3 = find_peaks([3,2,1,0]) assert np.allclose(p3, np.array([0]))
assignments/assignment07/AlgorithmsEx02.ipynb
hunterherrin/phys202-2015-work
mit
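An equivalent vectorized formulation (a sketch, not required by the assignment) pads the sequence with -inf so the endpoint cases fall out of the same comparison:

```
def find_peaks_vectorized(a):
    """Vectorized local-maximum finder; equivalent to find_peaks above."""
    c = np.asarray(a, dtype=float)
    padded = np.concatenate(([-np.inf], c, [-np.inf]))
    middle = padded[1:-1]
    # A peak is strictly greater than both neighbors; the -inf padding handles the endpoints
    is_peak = (middle > padded[:-2]) & (middle > padded[2:])
    return np.where(is_peak)[0]

assert np.allclose(find_peaks_vectorized([2, 0, 1, 0, 2, 0, 1]), [0, 2, 4, 6])
```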
Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
- Convert that string to a Numpy array of integers.
- Find the indices of the local maxima in the digits of $\pi$.
- Use np.diff to find the distances between consecutive local maxima.
- Visualize that distribution using an appropriately customized histogram.
from sympy import pi, N

pi_digits_str = str(N(pi, 10001))[2:]
first_10000 = np.array(list(pi_digits_str), dtype=int)
peaks = find_peaks(first_10000)
differences = np.diff(peaks)

plt.figure(figsize=(10, 10))
plt.hist(differences, 20, (1, 20))
plt.title('How Far Apart the Local Maxima of the First 10,000 Digits of $\pi$ Are')
plt.ylabel('Number of Occurrences')
plt.xlabel('Distance Apart')
plt.tight_layout()

assert True  # use this for grading the pi digits histogram
assignments/assignment07/AlgorithmsEx02.ipynb
hunterherrin/phys202-2015-work
mit
Define the variables for this analysis:
1. how many percentiles the data is divided into
2. where the Z-Maps (from neurosynth) lie
3. where the binned gradient maps lie
4. where a mask of the brain lies (not used at the moment).
percentiles = range(10) # unthresholded z-maps from neurosynth: zmaps = [os.path.join(os.getcwd(), 'ROIs_Mask', fname) for fname in os.listdir(os.path.join(os.getcwd(), 'ROIs_Mask')) if 'z.nii' in fname] # individual, binned gradient maps, in a list of lists: gradmaps = [[os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile), fname) for fname in os.listdir(os.path.join(os.getcwd(), 'data', 'Outputs', 'Bins', str(percentile)))] for percentile in percentiles] # a brain mask file: brainmaskfile = os.path.join(os.getcwd(), 'ROIs_Mask', 'rbgmask.nii')
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
Next define a function to take the average of an image inside a mask and return it:
def zinsidemask(zmap, mask):
    # Average the z-map values inside the non-zero part of the mask, restricted to the brain mask
    zaverage = zmap.dataobj[
        np.logical_and(np.not_equal(mask.dataobj, 0), brainmask.dataobj > 0)
    ].mean()
    return zaverage
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
This next cell will step through each combination of gradient, subject and network file to calculate the average z-score inside the mask defined by the gradient percentile. This will take a long time to run!
zaverages = np.zeros([len(zmaps), len(gradmaps), len(gradmaps[0])]) # load first gradmap just for resampling gradmap = nib.load(gradmaps[0][0]) # Load a brainmask brainmask = nib.load(brainmaskfile) brainmask = resample_img(brainmask, target_affine=gradmap.affine, target_shape=gradmap.shape) # Initialise a progress bar: progbar = FloatProgress(min=0, max=zaverages.size) display(progbar) # loop through the network files: for i1, zmapfile in enumerate(zmaps): # load the neurosynth activation file: zmap = nib.load(zmapfile) # make sure the images are in the same space: zmap = resample_img(zmap, target_affine=gradmap.affine, target_shape=gradmap.shape) # loop through the bins: for i2, percentile in enumerate(percentiles): # loop through the subjects: for i3, gradmapfile in enumerate(gradmaps[percentile]): gradmap = nib.load(gradmapfile) # load image zaverages[i1, i2, i3] = zinsidemask(zmap, gradmap) # calculate av. z-score progbar.value += 1 # update progressbar (only works in jupyter notebooks)
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
To save time next time, we'll save the result of this to file:
# np.save(os.path.join(os.getcwd(), 'data', 'average-z-scores'), zaverages)
zaverages = np.load(os.path.join(os.getcwd(), 'data', 'average-z-scores.npy'))
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
Extract a list of which group contains which participants.
df_phen = pd.read_csv('data' + os.sep + 'SelectedSubjects.csv') diagnosis = df_phen.loc[:, 'DX_GROUP'] fileids = df_phen.loc[:, 'FILE_ID'] groupvec = np.zeros(len(gradmaps[0])) for filenum, filename in enumerate(gradmaps[0]): fileid = os.path.split(filename)[-1][5:-22] groupvec[filenum] = (diagnosis[fileids.str.contains(fileid)]) print(groupvec.shape)
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
Make a plot of the z-scores inside each parcel for each gradient, split by group!
fig = plt.figure(figsize=(15, 8)) grouplabels = ['Control group', 'Autism group'] for group in np.unique(groupvec): ylabels = [os.path.split(fname)[-1][0:-23].replace('_', ' ') for fname in zmaps] # remove duplicates! includenetworks = [] seen = set() for string in ylabels: includenetworks.append(string not in seen) seen.add(string) ylabels = [string for index, string in enumerate(ylabels) if includenetworks[index]] tmp_zaverages = zaverages[includenetworks, :, :] tmp_zaverages = tmp_zaverages[:, :, groupvec==group] tmp_zaverages = tmp_zaverages[np.argsort(np.argmax(tmp_zaverages.mean(axis=2), axis=1)), :, :] # make the figure plt.subplot(1, 2, group) cax = plt.imshow(tmp_zaverages.mean(axis=2), cmap='bwr', interpolation='nearest', vmin=zaverages.mean(axis=2).min(), vmax=zaverages.mean(axis=2).max()) ax = plt.gca() plt.title(grouplabels[int(group-1)]) plt.xlabel('Percentile of principle gradient') ax.set_xticks(np.arange(0, len(percentiles), 3)) ax.set_xticklabels(['100-90', '70-60', '40-30', '10-0']) ax.set_yticks(np.arange(0, len(seen), 1)) ax.set_yticklabels(ylabels) ax.set_yticks(np.arange(-0.5, len(seen), 1), minor=True) ax.set_xticks(np.arange(-0.5, 10, 1), minor=True) ax.grid(which='minor', color='w', linewidth=2) fig.subplots_adjust(right=0.8) cbar_ax = fig.add_axes([0.85, 0.15, 0.01, 0.7]) fig.colorbar(cax, cax=cbar_ax, label='Average Z-Score') #fig.colorbar(cax, cmap='bwr', orientation='horizontal') plt.savefig('./figures/z-scores-inside-gradient-bins.png')
6b_networks-inside-gradients.ipynb
autism-research-centre/Autism-Gradients
gpl-3.0
Spacy Documentation spaCy is an NLP/computational linguistics package built from the ground up. It's written in Cython, so it's fast! Let's check it out. Here's some text from Alice in Wonderland, available for free on Project Gutenberg.
text = """'Please would you tell me,' said Alice, a little timidly, for she was not quite sure whether it was good manners for her to speak first, 'why your cat grins like that?' 'It's a Cheshire cat,' said the Duchess, 'and that's why. Pig!' She said the last word with such sudden violence that Alice quite jumped; but she saw in another moment that it was addressed to the baby, and not to her, so she took courage, and went on again:— 'I didn't know that Cheshire cats always grinned; in fact, I didn't know that cats could grin.' 'They all can,' said the Duchess; 'and most of 'em do.' 'I don't know of any that do,' Alice said very politely, feeling quite pleased to have got into a conversation. 'You don't know much,' said the Duchess; 'and that's a fact.'"""
notebooks/nlp_spacy.ipynb
AlJohri/DAT-DC-12
mit
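A minimal first pass over this text with spaCy looks roughly like the sketch below. The model name to load is an assumption that depends on the installed spaCy version: older releases used 'en', newer ones 'en_core_web_sm'.

```
import spacy

nlp = spacy.load('en_core_web_sm')   # or 'en' on older spaCy releases
doc = nlp(text)

# Tokens with part-of-speech tags and lemmas for the first sentence
first_sentence = next(doc.sents)
for token in first_sentence:
    print(token.text, token.pos_, token.lemma_)

# Named entities found in the passage
for ent in doc.ents:
    print(ent.text, ent.label_)
```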