<br> Here's an example of mapping the min function between two lists.
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Now let's iterate through the map object to see the values.
for item in cheapest:
    print(item)

people = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. Daniel Romero']

def split_title_and_name(person):
    title = person.split(' ')[0]
    lname = person.split(' ')[-1]
    return title + " " + lname

list(map(split_title_and_name, people))
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> The Python Programming Language: Lambda and List Comprehensions <br> Here's an example of a lambda that takes in three parameters and adds all three together.
# Single function only
my_function = lambda a, b, c: a + b + c
my_function(1, 2, 3)

people = ['Dr. Christopher Brooks', 'Dr. Kevyn Collins-Thompson', 'Dr. VG Vinod Vydiswaran', 'Dr. Daniel Romero']

def split_title_and_name(person):
    return person.split()[0] + ' ' + person.split()[-1]

# option 1
for person in people:
    print(split_title_and_name(person) == (lambda x: x.split()[0] + ' ' + x.split()[-1])(person))

# option 2
list(map(split_title_and_name, people)) == list(map(lambda person: person.split()[0] + ' ' + person.split()[-1], people))
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Let's iterate from 0 to 999 and return the even numbers.
my_list = []
for number in range(0, 1000):
    if number % 2 == 0:
        my_list.append(number)
my_list
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Now the same thing but with list comprehension.
my_list = [number for number in range(0, 1000) if number % 2 == 0]
my_list

def times_tables():
    lst = []
    for i in range(10):
        for j in range(10):
            lst.append(i*j)
    return lst

times_tables() == [j*i for i in range(10) for j in range(10)]

lowercase = 'abcdefghijklmnopqrstuvwxyz'
digits = '0123456789'
correct_answer = [a+b+c+d for a in lowercase for b in lowercase for c in digits for d in digits]
correct_answer[0:100]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> The Python Programming Language: Numerical Python (NumPy)
import numpy as np
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Creating Arrays Create a list and convert it to a numpy array
mylist = [1, 2, 3]
x = np.array(mylist)
x
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Or just pass in a list directly
y = np.array([4, 5, 6])
y
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Pass in a list of lists to create a multidimensional array.
m = np.array([[7, 8, 9], [10, 11, 12]])
m
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use the shape attribute to find the dimensions of the array (rows, columns).
m.shape
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> arange returns evenly spaced values within a given interval.
n = np.arange(0, 30, 2)  # start at 0, count up by 2, stop before 30
n
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> reshape returns an array with the same data with a new shape.
n = n.reshape(3, 5)  # reshape array to be 3x5
n
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> linspace returns evenly spaced numbers over a specified interval.
o = np.linspace(0, 4, 9)  # return 9 evenly spaced values from 0 to 4
o
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> resize changes the shape and size of the array in place.
o.resize(3, 3)
o
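<br> A minimal added sketch of the difference between reshape and the in-place resize (array names q and w are chosen here for illustration, assuming the NumPy import above):
q = np.arange(6)
q_reshaped = q.reshape(2, 3)   # reshape returns a new array object; q itself keeps shape (6,)
print(q.shape, q_reshaped.shape)

w = np.arange(6)
w.resize(2, 3)                 # resize modifies w in place and returns None
print(w.shape)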
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> ones returns a new array of given shape and type, filled with ones.
np.ones((3, 2))
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> zeros returns a new array of given shape and type, filled with zeros.
np.zeros((2, 3))
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> eye returns a 2-D array with ones on the diagonal and zeros elsewhere.
np.eye(3)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> diag extracts a diagonal or constructs a diagonal array.
np.diag(y)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Create an array using a repeating list (or see np.tile; a quick example follows the cell below).
np.array([1, 2, 3] * 3)
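<br> As hinted above, np.tile does the same repetition and generalizes to higher dimensions; a small illustrative sketch:
np.tile([1, 2, 3], 3)          # array([1, 2, 3, 1, 2, 3, 1, 2, 3]), same as the cell above
np.tile([1, 2, 3], (2, 2))     # tiling with a tuple builds a 2-D array of repeated blocks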
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Repeat elements of an array using repeat.
np.repeat([1, 2, 3], 3)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Combining Arrays
p = np.ones([2, 3], int)
p
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use vstack to stack arrays in sequence vertically (row wise).
np.vstack([p, 2*p])
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use hstack to stack arrays in sequence horizontally (column wise).
np.hstack([p, 2*p])
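<br> Both stacking calls are special cases of np.concatenate with an axis argument; a short added sketch for comparison:
np.concatenate([p, 2*p], axis=0)   # same result as np.vstack([p, 2*p])
np.concatenate([p, 2*p], axis=1)   # same result as np.hstack([p, 2*p])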
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Operations Use +, -, *, / and ** to perform element-wise addition, subtraction, multiplication, division and exponentiation.
print(x + y)   # elementwise addition        [1 2 3] + [4 5 6] = [5 7 9]
print(x - y)   # elementwise subtraction     [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y)   # elementwise multiplication  [1 2 3] * [4 5 6] = [4 10 18]
print(x / y)   # elementwise division        [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2)    # elementwise power           [1 2 3] ** 2 = [1 4 9]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Dot Product: $\begin{bmatrix}x_1 \\ x_2 \\ x_3\end{bmatrix} \cdot \begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix} = x_1 y_1 + x_2 y_2 + x_3 y_3$
x.dot(y)  # dot product 1*4 + 2*5 + 3*6

z = np.array([y, y**2])
print(len(z))  # number of rows of array
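<br> As an added aside: since Python 3.5 the @ operator performs the same operation, and for 2-D arrays dot/@ is matrix multiplication rather than an element-wise product.
x @ y                                  # same as x.dot(y) -> 32
np.dot(np.eye(2), [[1, 2], [3, 4]])    # 2-D case: matrix multiplication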
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Let's look at transposing arrays. Transposing permutes the dimensions of the array.
z = np.array([y, y**2])
z
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> The shape of array z is (2,3) before transposing.
z.shape
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use .T to get the transpose.
z.T
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> The number of rows has swapped with the number of columns.
z.T.shape
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use .dtype to see the data type of the elements in the array.
z.dtype
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use .astype to cast to a specific type.
z = z.astype('f')
z.dtype
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Math Functions Numpy has many built in math functions that can be performed on arrays.
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
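<br> A small added sketch (array b is made up here): on multidimensional arrays these aggregations also accept an axis argument, reducing along rows or columns.
b = np.array([[1, 2, 3], [4, 5, 6]])
b.sum(axis=0)   # column sums -> array([5, 7, 9])
b.sum(axis=1)   # row sums    -> array([ 6, 15])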
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> argmax and argmin return the index of the maximum and minimum values in the array.
a.argmax()
a.argmin()
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Indexing / Slicing
s = np.arange(13)**2
s
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
s[0], s[4], s[-1]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use : to indicate a range. array[start:stop] Leaving start or stop empty will default to the beginning/end of the array.
s[1:5]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use negatives to count from the back.
s[-4:]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> A second : can be used to indicate step-size. array[start:stop:stepsize] Here we start at the 5th element from the end and count backwards by 2 until the beginning of the array is reached.
s[-5::-2]
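<br> To spell out which elements this picks (an added check): with s = np.arange(13)**2, the slice walks indices 8, 6, 4, 2, 0.
s[-5::-2]             # array([64, 36, 16,  4,  0])
s[[8, 6, 4, 2, 0]]    # the same elements selected by explicit indices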
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Let's look at a multidimensional array.
r = np.arange(36)
r.resize((6, 6))
r
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use bracket notation to slice: array[row, column]
r[2, 2]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> And use : to select a range of rows or columns
r[3, 3:6]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
r[:2, :-1]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> This is a slice of the last row, and only every other element.
r[-1, ::2]
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see np.where)
r[r > 30]
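<br> The note above also points to np.where; a brief added sketch showing that it returns the indices of the matching values rather than the values themselves:
np.where(r > 30)       # tuple of row and column indices where the condition holds
r[np.where(r > 30)]    # equivalent to the boolean indexing r[r > 30]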
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Here we are assigning all values in the array that are greater than 30 to the value of 30.
r[r > 30] = 30
r
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Copying Data Be careful with copying and modifying arrays in NumPy! r2 is a slice of r
r2 = r[:3, :3]
r2
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Set this slice's values to zero ([:] selects the entire array)
r2[:] = 0
r2
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> r has also been changed!
r
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> To avoid this, use r.copy to create a copy that will not affect the original array
r_copy = r.copy()
r_copy
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Now when r_copy is modified, r will not be changed.
r_copy[:] = 10
print(r_copy, '\n')
print(r)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Iterating Over Arrays Let's create a new 4 by 3 array of random numbers 0-9.
test = np.random.randint(0, 10, (4, 3))
test
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Iterate by row:
for row in test: print(row)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Iterate by index:
for i in range(len(test)): print(test[i])
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Iterate by row and index:
for i, row in enumerate(test): print('row', i, 'is', row)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
<br> Use zip to iterate over multiple iterables.
test2 = test**2
test2

for i, j in zip(test, test2):
    print(i, '+', j, '=', i+j)
Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week1/Week+1.ipynb
Z0m6ie/Zombie_Code
mit
This is an extremely small dataset:
employees = ks.dataframe([
    ("ACME", "John", "12/01"),
    ("ACME", "Kate", "09/04"),
    ("ACME", "Albert", "09/04"),
    ("Databricks", "Ali", "09/04"),
], schema=["company_name", "employee_name", "dob"], name="employees")
employees
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
Now, here is the definition of the birthday paradox function. It is pretty simple code:
# The number of people who share a birthday date with someone else.
# Takes a column of data containing birthdates.
def paradoxal_count(c):
    with ks.scope("p_count"):  # Make it pretty:
        g = c.groupby(c).agg({'num_employees': f.count}, name="agg_count")
        s = f.sum(g.num_employees[g.num_employees >= 2], name="paradoxical_employees")
        return s
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
This is a simple function. If we wanted to try it, or write tests for it, we would prefer not to have to launch a Spark instance, which comes with some overhead. Let's write a simple test case using Pandas to be confident it is working as expected, and then use it in Spark. It correctly found that 2 people share the same January 1st birth date.
# A series of birth dates.
test_df = pd.Series(["1/1", "3/5", "1/1"])
paradoxal_count(test_df)
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
Now that we have this nice function, let's use against each of the companies in our dataset, with Spark. Notice that you can directly plug the function, no need to do translation, etc. This is impossible to do in Spark for complex functions like this one. We get at the end a daframe with the name of the company and the number of employees that share the same birthdate:
# Now use this to group by companies:
res = (employees.dob
       .groupby(employees.company_name)
       .agg({
           "paradoxical_employees": paradoxal_count
       }))
res
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
This is still a dataframe. Now is the time to collect and see the content:
o = f.collect(res)
o
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
We run it using the session we opened before, and we use compute to inspect how Karps and Spark are evaluating the computations.
comp = s.compute(o)
comp
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
Let's look under the hood to see how this gets translated. The transformation is defined using two nested first-orderd functions, that get collected using the FunctionalShuffle operation called shuffle9.
show_phase(comp, "initial") show_phase(comp, "final")
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
After optimization and flattening, the graph actually turns out to be a linear graph with a first shuffle, a filter, a second shuffle and then a final aggregate. You can click around to see how computations are being done.
show_phase(comp, "final")
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
And finally the value:
comp.values()
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
As a conclusion, with Karps, you can take any reasonable function and reuse it in arbitrary ways in a functional manner, in a type-safe manner. Karps will write for you the complex SQL queries that you would have to write by hand. All errors are detected well before the actual runtime, which greatly simplifies the debugging. Laziness and structured transforms bring to Spark some fundamental characteristics such as modularity, reusability, better testing and fast-fail comprehensive error checking, on top of automatic performance optimizations.
show_phase(comp, "parsed") show_phase(comp, "physical") show_phase(comp, "rdd") comp.dump_profile("karps-trace-2.json")
python/notebooks/Demo 2-details.ipynb
tjhunter/karps
apache-2.0
Assign variables
# species = argv[1]
# evalue = argv[2]
# clusters_path = argv[3]

# Prochlorococcus results
species = 'proch'
evalue = '1e-5'
myaxis = [0, 64, 0, 0.36]
clusters_path = '~/singlecell/clusters/orthomcl-pro4/groups.all_pro.list'

# Pelagibacter results
species = 'pelag'
evalue = '1e-5'
myaxis = [0, 64, 0, 0.36]
clusters_path = '~/singlecell/clusters/orthomcl-sar4/groups.all_sar.list'
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
cuttlefishh/papers
mit
Format and save Tara metadata
# Tara metadata
df_tara_names = pd.read_csv('/Users/luke/singlecell/tara/Tara_Prok139_PANGAEA_Sample.csv')
df_tara_metadata = pd.read_csv('/Users/luke/singlecell/tara/Tara_Table_W8.csv')
df_tara_metadata = df_tara_names.merge(df_tara_metadata)

# SRF metadata
df_tara_metadata.index = df_tara_metadata['Sample label [TARA_station#_environmental-feature_size-fraction]']
index_SRF = [index for index in list(df_tara_metadata.index) if 'SRF' in index]
df_tara_metadata_SRF = df_tara_metadata.loc[index_SRF]
df_tara_metadata_SRF.index = df_tara_metadata_SRF.index

# Latitude column
df_tara_metadata_SRF['category_latitude'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='object')
for index, lat in abs(df_tara_metadata_SRF['Mean_Lat*']).iteritems():
    if lat < 23.5:
        df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'tropical'
    elif lat > 40:
        df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'temperate'
    else:
        df_tara_metadata_SRF.loc[index, 'category_latitude'] = 'subtropical'

# Temperature column
df_tara_metadata_SRF['category_temperature'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='object')
for index, temp in df_tara_metadata_SRF['Mean_Temperature [deg C]*'].iteritems():
    if temp < 10:
        df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'polar'
    elif temp > 20:
        df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'tropical'
    else:
        df_tara_metadata_SRF.loc[index, 'category_temperature'] = 'temperate'

# Red Sea column
df_tara_metadata_SRF['category_redsea'] = pd.Series(0, index=np.arange(len(df_tara_metadata_SRF.columns)), dtype='bool')
for index in df_tara_metadata_SRF.index:
    if index in ['TARA_031_SRF_0.22-1.6', 'TARA_031_SRF_<-0.22', 'TARA_032_SRF_0.22-1.6', 'TARA_032_SRF_<-0.22',
                 'TARA_033_SRF_0.22-1.6', 'TARA_034_SRF_0.1-0.22', 'TARA_034_SRF_0.22-1.6', 'TARA_034_SRF_<-0.22']:
        df_tara_metadata_SRF.loc[index, 'category_redsea'] = True
    else:
        df_tara_metadata_SRF.loc[index, 'category_redsea'] = False

# export mapping file
df_tara_metadata_SRF.to_csv('tara_metadata_SRF.tsv', sep='\t')
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
cuttlefishh/papers
mit
Format and save count data
# Paths of input files, containing cluster counts in Tara samples
paths = pd.Series.from_csv('/Users/luke/singlecell/tara/paths_%s_%s.list' % (species, evalue), header=-1, sep='\t', index_col=None)

# Data frame of non-zero cluster counts in Tara samples (NaN if missing in sample but found in others)
pieces = []
for path in paths:
    fullpath = "/Users/luke/singlecell/tara/PROK-139/%s" % path
    counts = pd.DataFrame.from_csv(fullpath, header=-1, sep='\t', index_col=0)
    pieces.append(counts)
df_nonzero = pd.concat(pieces, axis=1)
headings = paths.tolist()
df_nonzero.columns = headings

# SRF dataframe, transposed, zeros, plus 1, renamed indexes
col_SRF = [col for col in list(df_nonzero.columns) if 'SRF' in col]
df_nonzero_SRF = df_nonzero[col_SRF]
df_nonzero_SRF_T = df_nonzero_SRF.transpose()
df_nonzero_SRF_T.fillna(0, inplace=True)
df_nonzero_SRF_T_plusOne = df_nonzero_SRF_T + 1
df_nonzero_SRF_T_plusOne.index = [re.sub(species, 'TARA', x) for x in df_nonzero_SRF_T_plusOne.index]
df_nonzero_SRF_T_plusOne.index = [re.sub('_1e-5', '', x) for x in df_nonzero_SRF_T_plusOne.index]

# Dataframe of all clusters (includes clusters missing from Tara)
clusters = pd.Series.from_csv(clusters_path, header=-1, sep='\t', index_col=None)
df_all = df_nonzero.loc[clusters]
df_all_SRF = df_all[col_SRF]
df_all_SRF_T = df_all_SRF.transpose()
df_all_SRF_T.fillna(0, inplace=True)

# remove '1e-5' from count indexes
df_nonzero_SRF_T.index = [re.sub('_1e-5', '', x) for x in df_nonzero_SRF_T.index]
df_all_SRF_T.index = [re.sub('_1e-5', '', x) for x in df_all_SRF_T.index]

# export counts to file
df_nonzero_SRF_T.to_csv('tara_%s_nonzero_SRF.csv' % species)
df_all_SRF_T.to_csv('tara_%s_all_SRF.csv' % species)
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
cuttlefishh/papers
mit
ANCOM
# ANCOM with defaults alpha=0.05, tau=0.02, theta=0.1
# for grouping in ['category_latitude', 'category_temperature', 'category_redsea']:
#     results = ancom(df_nonzero_SRF_T_plusOne, df_tara_metadata_SRF[grouping], multiple_comparisons_correction='holm-bonferroni')
#     results.to_csv('ancom.%s_nonzero_SRF_T_plusOne.%s.csv' % (species, grouping))
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
cuttlefishh/papers
mit
Z-test
# lookup dict for genus name
dg = {
    'pelag': 'Pelagibacter',
    'proch': 'Prochlorococcus'
}

# load OG metadata to determine RS-only OGs
df_og_metadata = pd.read_csv('/Users/luke/singlecell/notebooks/og_metadata.tsv', sep='\t', index_col=0)
og_rs = df_og_metadata.index[(df_og_metadata['Red_Sea_only'] == True) & (df_og_metadata['genus'] == dg[species])]
og_other = df_og_metadata.index[(df_og_metadata['Red_Sea_only'] == False) & (df_og_metadata['genus'] == dg[species])]
df_all_SRF_T_rs = df_all_SRF_T[og_rs]
df_all_SRF_T_other = df_all_SRF_T[og_other]
count = (df_all_SRF_T > 0).sum()
count_rs = (df_all_SRF_T_rs > 0).sum()
count_other = (df_all_SRF_T_other > 0).sum()

# save count data
count.to_csv('hist_counts_%s_ALL_og_presence_absence_in_63_tara_srf.csv' % species)
count_rs.to_csv('hist_counts_%s_RSassoc_og_presence_absence_in_63_tara_srf.csv' % species)

num_samples = df_all_SRF_T.shape[0]
num_ogs = max_bin = df_all_SRF_T.shape[1]
num_ogs_rsonly = count_rs.shape[0]
num_ogs_other = count_other.shape[0]

# all OGs AND RS-assoc OGs
plt.figure(figsize=(10,10))
sns.distplot(count_rs, bins=np.arange(num_samples+2), color=sns.xkcd_rgb['orange'], label='Red Sea-associated ortholog groups (%s)' % num_ogs_rsonly)
sns.distplot(count, bins=np.arange(num_samples+2), color=sns.xkcd_rgb['blue'], label='All %s ortholog groups (%s)' % (dg[species], num_ogs))
plt.xlabel('Number of %s Tara surface samples found in' % num_samples, fontsize=18)
plt.ylabel('Proportion of ortholog groups', fontsize=18)
plt.xticks(np.arange(0,num_samples+1,10)+0.5, ('0', '10', '20', '30', '40', '50', '60'), fontsize=14)
plt.yticks(fontsize=14)
plt.legend(fontsize=16, loc='upper left')
plt.axis(myaxis)
plt.savefig('hist_%s_paper_og_presence_absence_in_63_tara_srf.pdf' % species)

# all OGs
plt.figure(figsize=(8,6))
sns.distplot(count, bins=num_samples+1)
plt.axis([-0, num_samples, 0, .35])
plt.xlabel('Number of %s Tara surface samples found in' % num_samples)
plt.ylabel('Proportion of %s OGs' % num_ogs)
plt.title('Presence/absence of all %s %s OGs in %s Tara surface samples' % (num_ogs, species, num_samples))
plt.axis(myaxis)
plt.savefig('hist_%s_all_og_presence_absence_in_63_tara_srf.pdf' % species)

# RS-assoc OGs
plt.figure(figsize=(8,6))
sns.distplot(count_rs, bins=num_samples+1)
plt.axis([-0, num_samples, 0, .25])
plt.xlabel('Number of %s Tara surface samples found in' % num_samples)
plt.ylabel('Proportion of %s OGs' % num_ogs_rsonly)
plt.title('Presence/absence of %s RS-assoc. %s OGs in %s Tara surface samples' % (num_ogs_rsonly, species, num_samples))
plt.axis(myaxis)
plt.savefig('hist_%s_RSassoc_og_presence_absence_in_63_tara_srf.pdf' % species)

# other (non-RS-assoc) OGs
plt.figure(figsize=(8,6))
sns.distplot(count_other, bins=num_samples+1)
plt.axis([0, num_samples, 0, .4])
plt.xlabel('Number of %s Tara surface samples found in' % num_samples)
plt.ylabel('Proportion of %s OGs' % num_ogs_other)
plt.title('Presence/absence of %s non-RS-assoc. %s OGs in %s Tara surface samples' % (num_ogs_other, species, num_samples))
plt.axis(myaxis)
plt.savefig('hist_%s_nonRSassoc_og_presence_absence_in_63_tara_srf.pdf' % species)
red-sea-single-cell-genomes/code/singlecell_tara_stats.ipynb
cuttlefishh/papers
mit
Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
from collections import Counter
import random

threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
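As a quick sanity check of the formula (a toy corpus made up here for illustration, not the course data), the more frequent a word is, the higher its drop probability:
from collections import Counter
import numpy as np

toy_words = ['the'] * 90 + ['aardvark'] * 10    # hypothetical tiny corpus
toy_freqs = {w: c / len(toy_words) for w, c in Counter(toy_words).items()}
t = 1e-5
p_drop_toy = {w: 1 - np.sqrt(t / f) for w, f in toy_freqs.items()}
print(p_drop_toy)   # 'the' (f=0.9) is dropped ~99.7% of the time, 'aardvark' (f=0.1) ~99%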
embeddings/Skip-Grams-Solution.ipynb
otavio-r-filho/AIND-Deep_Learning_Notebooks
mit
Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')
embeddings/Skip-Grams-Solution.ipynb
otavio-r-filho/AIND-Deep_Learning_Notebooks
mit
Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
n_vocab = len(int_to_vocab)
n_embedding = 200  # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)
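Conceptually, the lookup is just row selection from the embedding matrix; a small NumPy illustration (sizes and array names are made up here, this is not part of the TensorFlow graph above):
import numpy as np

toy_embedding = np.random.uniform(-1, 1, size=(10, 4))   # 10 "words", 4 embedding features
token_ids = np.array([3, 3, 7])                          # a toy batch of integer tokens
looked_up = toy_embedding[token_ids]                     # what embedding_lookup returns
print(looked_up.shape)                                   # (3, 4): one row per input token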
embeddings/Skip-Grams-Solution.ipynb
otavio-r-filho/AIND-Deep_Learning_Notebooks
mit
From the help documentation, we can get a general sense of how HowDoI works and some of its features, e.g. it can colorize the output, get multiple answers, and keep answers in a cache that can be cleared. Use HowDoI
!howdoi --num-answers 3 python lambda function list comprehension
!howdoi --num-answers 3 python numpy array create
machine-learning/Hitchhiker-guide-2016/ch05.ipynb
yw-fang/readingnotes
apache-2.0
Read HowDoI's code In the howdoi directory, apart from __pycache__, there are really only two files: __init__.py and howdoi.py. The former has only one line, containing the version information; the latter is the code we are about to read closely.
!ls /Users/ywfang/FANG/git/howdoi_ywfang/howdoi
machine-learning/Hitchhiker-guide-2016/ch05.ipynb
yw-fang/readingnotes
apache-2.0
Browsing through howdoi.py, we find that it defines many new functions, and each function is referenced by functions that come after it, which makes the code easy to follow. The main function, command_line_runner(), is near the bottom of howdoi.py.
!sed -n '70,120p' /Users/ywfang/FANG/git/howdoi_ywfang/howdoi/howdoi.py
machine-learning/Hitchhiker-guide-2016/ch05.ipynb
yw-fang/readingnotes
apache-2.0
The function above was created for use in this session; it will not be available to the IDV in the next session, so let us save it to the IDV Jython library.
saveJython(moistStaticEnergy)
examples/CreateFunctionFormulas.ipynb
suvarchal/JyIDV
mit
Create an IDV formula; once created, it will appear in the list of formulas in the IDV. The arguments to saveFormula are (formula id, description, function as a string, formula categories). The formula categories can be a list of categories or just a single category specified by a string.
saveFormula("Moist Static Energy","Moist Static Energy from T, Q, GZ","moistStaticEnergy(T,Q,GZ)",["Grids","Grids-MapesCollection"])
examples/CreateFunctionFormulas.ipynb
suvarchal/JyIDV
mit
Check that the formula was created in the IDV GUI. At any time, use the showIdv() function to show an IDV window from the notebook. Currently some displays cannot be made when using the GUI from the notebook; this will be implemented in the future.
showIdv()
examples/CreateFunctionFormulas.ipynb
suvarchal/JyIDV
mit
6.5 Source Finding In radio astronomy, source finding is the process through which the attributes of radio sources -- such as flux density and morphology -- are measured from data. In this section we will only cover source finding in the image plane. Source finding techniques usually involve four steps: i) characterizing the noise (or background estimation), ii) thresholding the data based on knowledge of the noise, iii) finding regions in the thresholded image with "similar" neighbouring pixels (this is the same as blob detection in image processing), and iv) parameterizing these 'blobs' through a function (usually a 2D Gaussian). The source attributes are then estimated from the parameterization of the blobs. 6.5.1 Noise Characterization As mentioned before, the radio data we process with source finders is noisy. To characterize this noise we need to make a few assumptions about its nature, namely we assume that the noise results from some stochastic process and that it can be described by a normal distribution $$ G(x \, | \, \mu,\sigma^2) = \frac{1}{\sigma \sqrt{2\pi}}\text{exp}\left( \frac{-(x-\mu)^2}{2\sigma^2}\right) $$ where $\mu$ is the mean (or expected value) of the variable $x$, and $\sigma^2$ is the variance of the distribution; $\sigma$ is the standard deviation. Hence, the noise can be parameterized through the mean and the standard deviation. Let us illustrate this with an example. Below is a noise image from a MeerKAT simulation, along with a histogram of the pixels (in log space).
noise_image = "../data/fits/noise_image.fits" with astropy.io.fits.open(noise_image) as hdu: data = hdu[0].data[0,0,...] fig, (image, hist) = plt.subplots(1, 2, figsize=(18,6)) histogram, bins = np.histogram(data.flatten(), bins=401) dmin = data.min() dmax = data.max() x = np.linspace(dmin, dmax, 401) im = image.imshow(data) mean = data.mean() sigma = data.std() peak = histogram.max() gauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2)) fitdata = gauss(x, peak, mean, sigma) plt.plot(x, fitdata) plt.plot(x, histogram, "o") plt.yscale('log') plt.ylim(1)
6_Deconvolution/6_5_source_finding.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
Now, in reality the noise has to be measured in the presence of astrophysical emission. Furthermore, radio images are also contaminated by various instrumental effects which can manifest as spurious emission in the image domain. All these factors make it difficult to characterize the noise in a synthesized image. Since the noise generally dominates the images, the mean and standard deviation of the entire image are still fairly good approximations of the noise. Let us now insert a few sources (image and flux distribution shown below) into the noise image from earlier and then try to estimate the noise.
noise_image = "../data/fits/star_model_image.fits" with astropy.io.fits.open(noise_image) as hdu: data = hdu[0].data[0,0,...] fig, (image, hist) = plt.subplots(1, 2, figsize=(18,6)) histogram, bins = np.histogram(data.flatten(), bins=101) dmin = data.min() dmax = data.max() x = np.linspace(dmin, dmax, 101) im = image.imshow(data) mean = data.mean() sigma_std = data.std() peak = histogram.max() gauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2)) fitdata_std = gauss(x, peak, mean, sigma_std) plt.plot(x, fitdata_std, label="STD DEV") plt.plot(x, histogram, "o", label="Data") plt.legend(loc=1) plt.yscale('log') plt.ylim(1)
6_Deconvolution/6_5_source_finding.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
The pixel statistics of the image are no longer Gaussian, as is apparent from the long tail of the flux distribution. Constructing a Gaussian model from the mean and standard deviation results in a poor fit (blue line in the figure on the right). A better method to estimate the variance is to measure the dispersion of the data points about the mean (or median); this is the mean/median absolute deviation (MAD) technique. We will refer to the median absolute deviation as the MAD Median, and the mean absolute deviation as the MAD Mean. A synthesis-imaging-specific method to estimate the variance of the noise is to only consider the negative pixels. This works under the assumption that all the astrophysical emission (at least in Stokes I) has a positive flux density. The figure below shows noise estimates from the methods mentioned above.
mean = data.mean()
sigma_std = data.std()
sigma_neg = data[data<0].std() * 2

mad_mean = lambda a: np.mean( abs(a - np.mean(a) ))
sigma_mad_median = np.median( abs(data - np.median(data) ))
sigma_mad_mean = mad_mean(data)

peak = histogram.max()
gauss = lambda x, amp, mean, sigma: amp*np.exp( -(x-mean)**2/(2*sigma**2))

fitdata_std = gauss(x, peak, mean, sigma_std)
fitdata_mad_median = gauss(x, peak, mean, sigma_mad_median)
fitdata_mad_mean = gauss(x, peak, mean, sigma_mad_mean)
fitdata_neg = gauss(x, peak, mean, sigma_neg)

plt.plot(x, fitdata_std, label="STD DEV")
plt.plot(x, fitdata_mad_median, label="MAD Median")
plt.plot(x, fitdata_mad_mean, label="MAD Mean")
plt.plot(x, fitdata_neg, label="Negative STD DEV")
plt.plot(x, histogram, "o", label="Data")
plt.legend(loc=1)
plt.yscale('log')
plt.ylim(1)
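A quick self-contained illustration of why the MAD-type estimate is more robust than the plain standard deviation when bright "sources" are present (synthetic numbers made up here, reusing the mad_mean helper defined above):
rng = np.random.RandomState(0)
clean = rng.normal(0, 1.0, 100000)      # pure noise with sigma = 1
contaminated = clean.copy()
contaminated[:200] += 50                # a few bright outliers play the role of sources
print(contaminated.std())               # inflated to roughly 2.5, far above the true sigma
print(mad_mean(contaminated))           # stays near its pure-noise value of about 0.8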
6_Deconvolution/6_5_source_finding.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
The MAD and negative-value standard deviation methods produce a better solution to the noise distribution in the presence of sources. 6.5.2 Blob Detection and Characterization Once the noise has been estimated, the next step is to find and characterize sources in the image. Generically in image processing this is known as blob detection. In a simple case during synthesis imaging we define a blob as a group of contiguous pixels whose spatial intensity profile can be modelled by a 2D Gaussian function. Of course, more advanced functions could be used. Generally, we would like to group together nearby pixels, such as spatially 'close' sky model components from deconvolution, into a single complex source. Our interferometric array has finite spatial resolution, so we can further constrain our blobs not to be significantly smaller than the image resolution. We define two further constraints of a blob, the peak and boundary thresholds. The peak threshold, defined as $$ \sigma_\text{peak} = n * \sigma, $$ is the minimum intensity the maximum pixel in a blob must have relative to the image noise. That is, all blobs with a peak pixel lower than $\sigma_\text{peak}$ will be excluded from being considered sources. And the boundary threshold $$ \sigma_\text{boundary} = m * \sigma, $$ defines the boundary of a blob; $m$ and $n$ are natural numbers with $m$ < $n$. 6.5.2.1 A simple source finder We are now in a position to write a simple source finder. To do so we implement the following steps: Estimate the image noise and set peak and boundary threshold values. Blank out all pixel values below the boundary value. Find peaks in the image. For each peak, fit a 2D Gaussian and subtract the Gaussian fit from the image. Repeat until the image has no pixels above the detection threshold.
def gauss2D(x, y, amp, mean_x, mean_y, sigma_x, sigma_y):
    """Generate a 2D Gaussian image"""
    gx = -(x - mean_x)**2/(2*sigma_x**2)
    gy = -(y - mean_y)**2/(2*sigma_y**2)
    return amp * np.exp(gx + gy)

def err(p, xx, yy, data):
    """2D Gaussian error function"""
    return gauss2D(xx.flatten(), yy.flatten(), *p) - data.flatten()

def fit_gaussian(data, psf_pix):
    """Fit a gaussian to a 2D data set"""
    width = data.shape[0]
    mean_x, mean_y = width/2, width/2
    amp = data.max()
    sigma_x, sigma_y = psf_pix, psf_pix
    params0 = amp, mean_x, mean_y, sigma_x, sigma_y

    npix_x, npix_y = data.shape
    x = np.linspace(0, npix_x, npix_x)
    y = np.linspace(0, npix_y, npix_y)
    xx, yy = np.meshgrid(x, y)

    params, pcov, infoDict, errmsg, sucess = optimize.leastsq(
        err, params0, args=(xx.flatten(), yy.flatten(), data.flatten()), full_output=1)
    perr = abs(np.diagonal(pcov))**0.5
    model = gauss2D(xx, yy, *params)

    return params, perr, model

def source_finder(data, peak, boundary, width, psf_pix):
    """A simple source finding tool"""
    # first we make an estimate of the noise. Lets use the MAD mean
    sigma_noise = mad_mean(data)

    # Use noise estimate to set peak and boundary thresholds
    peak_sigma = sigma_noise*peak
    boundary_sigma = sigma_noise*boundary

    # Pad the image to avoid hitting the edge of the image
    pad = width*2
    residual = np.pad(data, pad_width=((pad, pad), (pad, pad)), mode="constant")
    model = np.zeros(residual.shape)

    # Create slice to remove the padding later on
    imslice = [slice(pad, -pad), slice(pad, -pad)]

    catalog = []

    # We will need to convert the fitted sigma values to a width
    FWHM = 2*np.sqrt(2*np.log(2))

    while True:
        # Check if the brightest pixel is at least as bright as the sigma_peak
        # Otherwise stop.
        max_pix = residual.max()
        if max_pix < peak_sigma:
            break

        xpix, ypix = np.where(residual==max_pix)
        xpix = xpix[0]  # Get first element
        ypix = ypix[0]  # Get first element

        # Make slice that selects box of size width centred around the brightest pixel
        subim_slice = [slice(xpix-width/2, xpix+width/2),
                       slice(ypix-width/2, ypix+width/2)]

        # apply slice to get subimage
        subimage = residual[subim_slice]

        # blank out pixels below the boundary threshold
        mask = subimage > boundary_sigma

        # Fit gaussian to subimage
        params, perr, _model = fit_gaussian(subimage*mask, psf_pix)
        amp, mean_x, mean_y, sigma_x, sigma_y = params
        amp_err, mean_x_err, mean_y_err, sigma_x_err, sigma_y_err = perr

        # Remember to reposition the source in original image
        pos_x = xpix + (width/2 - mean_x) - pad
        pos_y = ypix + (width/2 - mean_y) - pad

        # Convert sigma values to FWHM lengths
        size_x = FWHM*sigma_x
        size_y = FWHM*sigma_y

        # Add modelled source to model image
        model[subim_slice] = _model

        # create new source
        source = (amp, pos_x, pos_y, size_x, size_y)

        # add source to catalogue
        catalog.append(source)

        # update residual image
        residual[subim_slice] -= _model

    return catalog, model[imslice], residual[imslice], sigma_noise
6_Deconvolution/6_5_source_finding.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
Using this source finder we can produce a sky model which contains all 17 sources in our test image from earlier in the section.
test_image = "../data/fits/star_model_image.fits" with astropy.io.fits.open(test_image) as hdu: data = hdu[0].data[0,0,...] catalog, model, residual, sigma_noise = source_finder(data, 5, 2, 50, 10) print "Peak_Flux Pix_x Pix_y Size_x Size_y" for source in catalog: print " %.4f %.1f %.1f %.2f %.2f"%source fig, (img, mod, res) = plt.subplots(1, 3, figsize=(24,12)) vmin, vmax = sigma_noise, data.max() im = img.imshow(data, vmin=vmin, vmax=vmax) img.set_title("Data") mod.imshow(model, vmin=vmin, vmax=vmax) mod.set_title("Model") res.imshow(residual, vmin=vmin, vmax=vmax) res.set_title("Residual") cbar_ax = fig.add_axes([0.92, 0.25, 0.02, 0.5]) fig.colorbar(im, cax=cbar_ax, format="%.2g")
6_Deconvolution/6_5_source_finding.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
Elliptic-curve cryptography Bitcoin uses elliptic curves for its cryptographic needs. More precisely, it uses the Elliptic Curve Digital Signature Algorithm (ECDSA). ECDSA involves three main components: a public key, a private key and a signature. Bitcoin uses a specific elliptic curve called secp256k1. The function itself looks harmless: $$y^2=x^3+7$$ It is a special case of the general form $y^2 = x^3 + ax + b$ (here $a = 0$ and $b = 7$), with the condition $4a^3 + 27b^2 \neq 0$ imposed to exclude singular curves: $$\{(x, y) \in \mathbb{R}^2 \mid y^2 = x^3 + ax + b,\ 4a^3 + 27b^2 \ne 0\} \cup \{0\}$$ <img src="http://andrea.corbellini.name/images/curves.png" width="30%" align="right"/> However, in cryptographic applications this function is not defined over the real numbers but over a finite field of prime order: more precisely, $\mathbb{Z}$ modulo $2^{256} - 2^{32} - 977$. $$\{(x, y) \in (\mathbb{F}_p)^2 \mid y^2 \equiv x^3 + ax + b \pmod{p},\ 4a^3 + 27b^2 \not\equiv 0 \pmod{p}\} \cup \{0\}$$ (A toy numeric sketch of this "same curve, but over a prime field" idea follows right after this cell.) For a deeper treatment of elliptic curves in cryptography, read this material. Encrypting texts The simplest form of cryptography is symmetric cryptography, in which a randomly generated key is used to convert a plaintext into a ciphertext; then, with the same key, the operation can be inverted, recovering the original text. "Text" here is just one possible application of cryptography: what we apply to text below can be applied to any sequence of bytes, that is, to any digital object.
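To make that point concrete, here is a purely illustrative sketch (added here; the real curve uses $p = 2^{256} - 2^{32} - 977$, far too large to enumerate) that brute-forces the points of $y^2 \equiv x^3 + 7 \pmod{p}$ over a tiny prime field:
# Toy example: the secp256k1 equation y^2 = x^3 + 7, but over F_17 instead of the real modulus.
a, b, p = 0, 7, 17
assert (4*a**3 + 27*b**2) % p != 0   # non-singularity condition from the text
points = [(x, y) for x in range(p) for y in range(p)
          if (y*y - (x**3 + a*x + b)) % p == 0]
print(points)   # the finite set of affine points; the "point at infinity" completes the group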
from Crypto.Cipher import DES3
from Crypto import Random
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
In this example we will use the algorithm known as "triple DES" to encrypt and decrypt a text. For this example the key must have a length that is a multiple of 8 bytes.
chave = b"chave secreta um" sal = Random.get_random_bytes(8) des3 = DES3.new(chave, DES3.MODE_CFB, sal)
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
Note that we added "salt" to our encrypter. The salt is a random sequence of bytes intended to make attacks harder; here it plays the role of the initialization vector for CFB mode.
texto = b"Este e um texto super secreto que precisa ser protegido a qualquer custo de olhares nao autorizados." enc = des3.encrypt(texto) enc des3 = DES3.new(chave, DES3.MODE_CFB, sal) des3.decrypt(enc)
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
One of the problems with this encryption methodology is that if you want to send the encrypted file to a friend, you will have to find a secure way to transmit the key to them; otherwise a malicious adversary who obtains the key will be able to decrypt your message. To solve this problem we introduce a new encryption method: Public-key cryptography In this methodology we have two keys: one public and one private.
from Crypto.PublicKey import RSA
from Crypto.Random import get_random_bytes
from Crypto.Cipher import AES, PKCS1_OAEP
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
Let's create a private key, and also encrypt it, in case we have to keep it somewhere it could be observed by a third party.
senha = "minha senha super secreta." key = RSA.generate(2048) # Chave privada print(key.exportKey()) chave_privada_encryptada = key.exportKey(passphrase=senha, pkcs=8, protection="scryptAndAES128-CBC") publica = key.publickey() publica.exportKey()
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
With the password in hand, we can recover both keys.
key2 = RSA.import_key(chave_privada_encryptada, passphrase=senha)
print(key2 == key)
key.publickey().exportKey() == key2.publickey().exportKey()
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
Now we can encrypt any document. For maximum security, we will use the PKCS#1 OAEP protocol with the RSA algorithm to asymmetrically encrypt an AES session key. This session key can then be used to encrypt the data. We will use EAX mode to allow detection of unauthorized modifications.
data = "Minha senha do banco é 123456".encode('utf8') chave_de_sessão = get_random_bytes(16) # Encripta a chave de sessão com a a chave RSA pública. cifra_rsa = PKCS1_OAEP.new(publica) chave_de_sessão_enc = cifra_rsa.encrypt(chave_de_sessão) # Encrypta os dados. cifra_aes = AES.new(chave_de_sessão, AES.MODE_EAX) texto_cifrado, tag = cifra_aes.encrypt_and_digest(data) texto_cifrado
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
The recipient can then decrypt the message by using the private key to decrypt the session key, and with it, the message itself.
# Decrypt the session key with the private RSA key.
cifra_rsa = PKCS1_OAEP.new(key)
chave_de_sessão = cifra_rsa.decrypt(chave_de_sessão_enc)

# Decrypt the data with the AES session key.
cifra_aes = AES.new(chave_de_sessão, AES.MODE_EAX, cifra_aes.nonce)
data2 = cifra_aes.decrypt_and_verify(texto_cifrado, tag)
print(data2.decode("utf-8"))
lectures/intro_cripto.ipynb
fccoelho/Curso_Blockchain
lgpl-3.0
The role of dipole orientations in distributed source localization When performing source localization in a distributed manner (MNE/dSPM/sLORETA/eLORETA), the source space is defined as a grid of dipoles that spans a large portion of the cortex. These dipoles have both a position and an orientation. In this tutorial, we will look at the various options available to restrict the orientation of the dipoles and the impact on the resulting source estimate. See inverse_orientation_constraints for related information. Loading data Load everything we need to perform source localization on the sample dataset.
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
    data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
0.21/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Depending on your CARTO account plan, some of these data services are subject to different quota limitations. Geocoding To get started, let's read in and explore the Starbucks location data we have. With the Starbucks store data in a DataFrame, we can see that there are two columns that can be used in the geocoding service: name and address. There's also a third column that reflects the annual revenue of the store.
import pandas as pd

df = pd.read_csv('http://libs.cartocdn.com/cartoframes/samples/starbucks_brooklyn.csv')
df.head()
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Quota consumption Each time you run Data Services, quota is consumed. For this reason, we provide the ability to check in advance the amount of credits an operation will consume by using the dry_run parameter when running the service function. It is also possible to check your available quota by running the available_quota function.
from cartoframes.data.services import Geocoding

geo_service = Geocoding()

city_ny = {'value': 'New York'}
country_usa = {'value': 'USA'}

_, geo_dry_metadata = geo_service.geocode(df, street='address', city=city_ny, country=country_usa, dry_run=True)
geo_dry_metadata

geo_service.available_quota()

geo_gdf, geo_metadata = geo_service.geocode(df, street='address', city=city_ny, country=country_usa)
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Let's compare geo_dry_metadata and geo_metadata to see the differences between the information returned with and without the dry_run option. As we can see, this information reflects that all the locations have been geocoded successfully and that it has consumed 10 credits of quota.
geo_metadata

geo_service.available_quota()
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
If the input data file ever changes, cached results will only be applied to unmodified records, and new geocoding will be performed only on new or changed records. In order to use cached results, we have to save the results to a CARTO table using the table_name and cached=True parameters. The resulting data is a GeoDataFrame that contains three new columns: geometry: The resulting geometry gc_status_rel: The percentage of accuracy of each location carto_geocode_hash: Geocode information
geo_gdf.head()
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
In addition, to prevent geocoding records that have been previously geocoded, and thus spend quota unnecessarily, you should always preserve the the_geom and carto_geocode_hash columns generated by the geocoding process. This will happen automatically in these cases: Your input is a table from CARTO processed in place (without a table_name parameter) If you save your results to a CARTO table using the table_name parameter, and only use the resulting table for any further geocoding. If you try to geocode this DataFrame now that it contains both the_geom and the carto_geocode_hash, you will see that the required quota is 0 because it has already been geocoded.
_, geo_metadata = geo_service.geocode(geo_gdf, street='address', city=city_ny, country=country_usa, dry_run=True)
geo_metadata.get('required_quota')
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Precision The address column is more complete than the name column, and therefore, the resulting coordinates calculated by the service will be more accurate. If we check this, the accuracy values using the name column are lower than the ones we get by using the address column for geocoding.
geo_name_gdf, geo_name_metadata = geo_service.geocode(df, street='name', city=city_ny, country=country_usa)
geo_name_gdf.gc_status_rel.unique()
geo_gdf.gc_status_rel.unique()
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause
Visualize the results Finally, we can visualize the precision of the geocoded results using a CARTOframes visualization layer.
from cartoframes.viz import Layer, color_bins_style, popup_element

Layer(
    geo_gdf,
    color_bins_style('gc_status_rel', method='equal', bins=geo_gdf.gc_status_rel.unique().size),
    popup_hover=[popup_element('address', 'Address'), popup_element('gc_status_rel', 'Precision')],
    title='Geocoding Precision'
)
docs/guides/06-Data-Services.ipynb
CartoDB/cartoframes
bsd-3-clause