Substrings can be indexed using string[start:end]
sample = MS2_spectrum[0:5]
sample
# Note: The character at position 5 is not included in the substring.
ion = MS2_spectrum[10:13]
ion
file_format = MS2_spectrum[-3:]
file_format
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Type conversion - string to float
type(ion)
float(ion)
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Collection data types
Lists
Lists are mutable (i.e., modifiable), ordered collections of items. Lists are created by enclosing a collection of items with square brackets. An empty list may also be created simply by assigning [] to a variable, i.e., empty_list = [].
MS_files = ["MS_spectrum", "MS2_405", "MS2_471", "MS2_495"]
MS_files
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Indexing in lists is the same as for strings
MS_files[2]
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Several list 'methods' exist for manipulating lists
MS_files.remove("MS2_405")
MS_files
MS_files.append("MS3_225")
MS_files
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
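Lists have several other commonly used methods, such as insert(), sort() and pop(). A minimal sketch of these (demo_files and "MS_blank" are made-up names used only for illustration, and a copy is taken so that the MS_files list used later stays unchanged):
demo_files = list(MS_files)          # throwaway copy for demonstration
demo_files.insert(0, "MS_blank")     # insert "MS_blank" at index 0
demo_files.sort()                    # sort in place (alphabetical for strings)
last_item = demo_files.pop()         # remove and return the last item
demo_files, last_item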
Tuples
Tuples are immutable (i.e., they can't be modified after their creation), ordered collections of items and are the simplest collection data type. Tuples are created by enclosing a collection of items in parentheses.
Fe_isotopes = (53.9, 55.9, 56.9, 57.9)
Fe_isotopes
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
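Because tuples are immutable, trying to assign to one of their elements raises a TypeError. A minimal sketch using the Fe_isotopes tuple defined above:
try:
    Fe_isotopes[0] = 54.0   # tuples do not support item assignment
except TypeError as err:
    print(err)              # "'tuple' object does not support item assignment"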
Indexing
Fe_isotopes[0]
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Dictionaries
Dictionaries are mutable, unordered collections of key: value pairs. Dictionaries are created by enclosing key: value pairs with curly brackets. Importantly, keys must be hashable. This means, for example, that lists can't be used as keys since the items inside a list may be modified.
carbon_isotopes = {"12": 0.9893, "13": 0.0107}
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
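To see why hashability matters, using a list as a key raises a TypeError, whereas an immutable tuple works. A minimal sketch (bad_dict and ok_dict are throwaway names for illustration):
try:
    bad_dict = {["12", "13"]: 0.9893}   # lists are unhashable, so this fails
except TypeError as err:
    print(err)                          # "unhashable type: 'list'"
ok_dict = {("12", "13"): 0.9893}        # tuples are hashable, so this works
ok_dict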
Fetching the value for a certain key
carbon_isotopes["12"]
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Dictionary methods
carbon_isotopes.keys()
carbon_isotopes.values()
carbon_isotopes.items()
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Sets
Sets are another data type; they are like unordered lists with no duplicates. They are especially useful for finding all the unique items in a list, as shown below.
phospholipids = ["PA(16:0/18:1)", "PA(16:0/18:2)", "PC(14:0/16:0)", "PC(16:0/16:1)", "PC(16:1/16:2)"]

# Lets assume we apply a function that finds the type of phospholipid name to
phospholipid_fatty_acids = ["16:0", "18:1", "16:0", "18:2", "14:0", "16:0", "16:0", "16:1", "16:1", "16:2"]

unique_fatty_acids = set(phospholipid_fatty_acids)
unique_fatty_acids
num_unique_fa = len(unique_fatty_acids)
num_unique_fa
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
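Sets also support operations such as union, intersection and difference, which can be handy when comparing collections like these. A minimal sketch (other_fatty_acids is a made-up comparison set):
other_fatty_acids = {"16:0", "18:0", "18:1"}    # hypothetical comparison set
print(unique_fatty_acids & other_fatty_acids)   # intersection: fatty acids in both sets
print(unique_fatty_acids | other_fatty_acids)   # union: fatty acids in either set
print(unique_fatty_acids - other_fatty_acids)   # difference: only in unique_fatty_acids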
Boolean operators
Boolean operators assess whether a statement is true or false.
Ag > Au
Ag < Au
Ag == 106.9
Au >= 100
Ag <= Au and Ag > 200
Ag <= Au or Ag > 200
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Conditional statements
Code is only executed if the conditional statement evaluates as True. In the following example, Ag has a value greater than 100, and therefore only the "Ag is greater than 100 Da." string is printed. A colon follows the conditional statement and the following code block is indented by 4 spaces (always use 4 spaces rather than tabs - errors will result when mixing tabs with spaces!). Note that the elif and else statements are optional.
if Ag < 100:
    print("Ag is less than 100 Da")
elif Ag > 100:
    print("Ag is greater than 100 Da.")
else:
    print("Ag is equal to 100 Da.")
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
While loops
While loops repeat the execution of a code block while a condition evaluates as True. When using while loops, be careful not to make an infinite loop in which the conditional statement never evaluates as False. (Note: You could, however, use 'break' to exit an infinite loop; a minimal sketch follows the example below.)
mass_spectrometers = 0
while mass_spectrometers < 5:
    print("Ask for money")
    mass_spectrometers = mass_spectrometers + 1  # Comment: This can be written as mass_spectrometers += 1
    print("Number of mass spectrometers equals", mass_spectrometers)
print("\nNow we need more lab space")
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
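As noted above, break can be used to exit a loop whose condition would otherwise never become False. A minimal sketch (the counter variable and the limit of 3 are arbitrary choices for illustration):
counter = 0
while True:               # this condition never becomes False on its own
    counter += 1
    if counter == 3:
        break             # exit the loop immediately
print("Stopped after", counter, "iterations")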
For loops
For loops iterate over each item of collection data types (lists, tuples, dictionaries and sets). For loops can also be used to loop over the characters of a string; this will be utilised later to evaluate each amino acid residue of a peptide string.
lipid_masses = [674.5, 688.6, 690.6, 745.7]
Na = 23.0

lipid_Na_adducts = []
for mass in lipid_masses:
    lipid_Na_adducts.append(mass + Na)
lipid_Na_adducts
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
List comprehension
The following is a list comprehension, which performs the same operation as the for loop above but in fewer lines of code.
adducts_comp = [mass + Na for mass in lipid_masses]
adducts_comp
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
We could also add a predicate to a list comprehension. Here, we calculate the sodium adduct mass only for lipids of less than 700 Da.
adducts_comp = [mass + Na for mass in lipid_masses if mass < 700]
adducts_comp
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
While and for loops with conditional statements Both while and for loops can be combined with conditional statements for greater control of flow within a program.
mass_spectrometers = 0
while mass_spectrometers < 5:
    mass_spectrometers += 1
    print("Number of mass spectrometers equals", mass_spectrometers)
    if mass_spectrometers == 1:
        print("Woohoo, the first of many!")
    elif mass_spectrometers == 5:
        print("That'll do for now.")
    else:
        print("More!!")

for MS_file in MS_files:
    if "spectrum" in MS_file:
        print("MS file:", MS_file)
    elif "MS2" in MS_file:
        print("MS2 file:", MS_file)
    else:
        print("MS3 file:", MS_file)
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Exercise: Calculate peptide masses In the following example, we will calculate the mass of a peptide from a string containing one letter amino acid residue codes. For example, peptide = "GASPV". To do this, we will first need a dictionary containing the one letter codes as keys and the masses of the amino acid residues as values. We will then need to create a variable to store the mass of the peptide and use a for loop to iterate over each amino acid residue in the peptide.
amino_dict = {
    'G': 57.02147, 'A': 71.03712, 'S': 87.03203, 'P': 97.05277, 'V': 99.06842,
    'T': 101.04768, 'C': 103.00919, 'I': 113.08407, 'L': 113.08407, 'N': 114.04293,
    'D': 115.02695, 'Q': 128.05858, 'K': 128.09497, 'E': 129.0426, 'M': 131.04049,
    'H': 137.05891, 'F': 147.06842, 'R': 156.10112, 'Y': 163.06333, 'W': 186.07932,
}
# Data modified from http://www.its.caltech.edu/~ppmal/sample_prep/work3.html

peptide_name = "SCIENCE"
mass = 18.010565
for amino_acid in peptide_name:
    mass += amino_dict[amino_acid]
mass
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Functions
Functions perform a specified task when called during the execution of a program. Functions reduce the amount of code that needs to be written and greatly improve code readability. (Note: readability matters!) The for loop created above is better placed in a function so that it doesn't need to be re-written every time we wish to calculate the mass of a peptide. Pay careful attention to the syntax below.
def peptide_mass(peptide):
    mass = 18.010565
    for amino_acid in peptide:
        mass += amino_dict[amino_acid]
    return mass

peptide_mass(peptide_name)
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
User input
A simple means to gather user input is the input function. This will prompt the user to enter data, which may then be used within the program. In the example below, we prompt the user to enter a peptide name. The peptide name is then used in a function call to calculate the peptide's mass.
user_peptide = input("Enter peptide name: ")
peptide_mass(user_peptide)
Python_tutorial.ipynb
Michaelt293/ANZSMS-Programming-workshop
cc0-1.0
Exceptions
An exception is an event that occurs during the execution of a program and disrupts the normal flow of the program's instructions. You've already seen some exceptions in the Debugging lesson.
* Many programs want to know about exceptions when they occur. For example, if the input to a program is a file path and the user inputs an invalid or non-existent path, the program generates an exception, and it may be desirable to provide a response to the user in this case. Programs may also generate exceptions deliberately, as a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a Python function, rather than returning an error value.
Catching Exceptions
Python provides a way to detect when an exception occurs. This is done by using a block of code surrounded by "try" and "except" statements.
def divide(numerator, denominator):
    result = numerator/denominator
    print("result = %f" % result)

divide(1.0, 0)

def divide1(numerator, denominator):
    try:
        GARBAGE
        result = numerator/denominator
        print("result = %f" % result)
    except (ZeroDivisionError, NameError) as err:
        import pdb; pdb.set_trace()
        print("You can't divide by 0! or use GARBAGE.")

divide1(1.0, 'a')
print(err)
divide1(1.0, 2)
divide1("x", 2)

def divide2(numerator, denominator):
    try:
        result = numerator / denominator
        print("result = %f" % result)
    except (ZeroDivisionError, TypeError) as err:
        print("Got an exception: %s" % err)

divide2(1, "X")
#divide2("x, 2)
week_4/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Why didn't we catch this SyntaxError?
# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3

def divide3(numerator, denominator):
    try:
        result = numerator/denominator
    except ZeroDivisionError:
        result = numerator/SMALL_NUMBER
        print("result = %f" % result)
    except Exception as err:
        print("Different error than division by zero:", err)

divide3(1,0)
divide3("1",0)
week_4/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
What do you do when you get an exception?
First, you can feel relieved that you caught a problematic element of your software! Yes, relieved. Silent fails are much worse. (Again, another plug for testing.)
Generating Exceptions
Why generate exceptions? (Don't I have enough unintentional errors?)
import pandas as pd

def validateDF(df):
    """"
    :param pd.DataFrame df: should have a column named "hours"
    """
    if not "hours" in df.columns:
        raise ValueError("DataFrame should have a column named 'hours'.")

df = pd.DataFrame({'hours': range(10) })
validateDF(df)

class SeattleCrimeError(Exception):
    pass

b = False
if not b:
    raise SeattleCrimeError("There's been a crime!")
week_4/Exceptions.ipynb
UWSEDS/LectureNotes
bsd-2-clause
Determine the costs of processing existing articles
Based on complete data files through 2019-09-07. Each 1,000 characters of an article submitted (title plus body text) is one "unit", rounded up. 1,496,665 units total = $2487 to process at once, or 300 months in free batches of 5k...
crimetags = tagnews.CrimeTags() df_all = tagnews.load_data() df_all['read_date'] = df_all['created'].str.slice(0, 10) ### Limiting it to last two years because the data volume is unstable before that df = df_all.loc[df_all['read_date'] >= '2017-01-01'] del df_all ### Number of units to process title and article through Google Cloud API df['n_chars'] = df['title'].str.len() + df['bodytext'].str.len() df['n_units'] = np.ceil(df['n_chars']/1000.) def calculate_google_nlp_price(total_units, verbose=True): '''Cost to run entity sentiment analysis on a given number of units in a single month through in Google Cloud API. https://cloud.google.com/natural-language/#natural-language-api-pricing First 5000 = free 5k-1M = $2 per 1000 units 1M-5M = $1 per 1000 units 5M-20M = $0.5 per 1000 units ''' free_units = min(5e3, total_units) first_tier_units = min(1e6-5e3, total_units-free_units) second_tier_units = min(5e6-1e6, total_units-free_units-first_tier_units) third_tier_units = max(0, total_units-free_units-first_tier_units-second_tier_units) units = [free_units, first_tier_units, second_tier_units, third_tier_units] costs = [0, 2., 1., 0.5] total_cost = sum([c*np.ceil(u/1e3) for (c, u) in zip(costs, units)]) if verbose: print('{:.0f} units: {:.0f}*0 + {:.0f}*$2 + {:.0f}*$1 + {:.0f}*$0.50 = ${:.2f}' .format(total_units, np.ceil(free_units/1e3), np.ceil(first_tier_units/1e3), np.ceil(second_tier_units/1e3), np.ceil(third_tier_units/1e3), total_cost)) return total_cost units = df['n_units'].sum() cost = calculate_google_nlp_price(units) units_per_day = (df .groupby('read_date') .agg({'url': 'count', 'n_units': 'sum'}) ) print(units_per_day.index.min(), units_per_day.index.max()) ### Number of units coming in per day ### Typically ranges from 800-2000 daily, so definitely >5000 monthly f1, ax1 = plt.subplots(1, figsize=[15, 6]) ax1.plot(range(units_per_day.shape[0]), units_per_day['n_units'], label='# units')
lib/notebooks/senteval_budgeting.ipynb
chicago-justice-project/article-tagging
mit
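As a sanity check on the quoted figure of $2487, the tier arithmetic can be reproduced on its own; a minimal standalone sketch of the same pricing rules implemented in calculate_google_nlp_price above:
import numpy as np

total_units = 1496665
free = 5000                            # first 5k units are free
tier1 = 1000000 - 5000                 # 995,000 units billed at $2 per 1,000
tier2 = total_units - 1000000          # 496,665 units billed at $1 per 1,000
cost = 2.0*np.ceil(tier1/1e3) + 1.0*np.ceil(tier2/1e3)
print(cost)                            # 2*995 + 1*497 = 2487.0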
Relevance scoring/binning
### Full dataset takes up too much memory, so dropping all but the most recent now ### This keeps 276122 of the original 1.5e6, or a little less than 1/5th of the total df2 = df.loc[df['read_date'] >= '2019-03-01'] del df new_units = df2['n_units'].sum() downscale = new_units/units print(new_units, downscale) ### Assign a made-up CPD relevance score ### Words associated with CPD cop_words = [ "cpd", "police", "officer", "cop", "officers", "pigs", "policeofficer", ] ### Count number of times relevant words appear in title or text df2['cop_word_counts'] = 0 for w in cop_words: df2['cop_word_counts'] += df2['bodytext'].str.lower().str.count(w) df2['cop_word_counts'] += df2['title'].str.lower().str.count(w) df2['cop_word_counts'].describe() ### Does the word count measure the same thing as the CPD_model column? ### No, doesn't look very correlated actually... f1, ax1 = plt.subplots(1, figsize=[14,6]) ax1.scatter(df2['cop_word_counts'], df2['CPD_model'], alpha=0.3, s=5) ax1.set_xlabel('cop word count') ax1.set_ylabel('CPD_model') ### See examples that use the relevant words but didn't score highly in CPD_model ### Some definitely look relevant (e.g. article 650870) relevant_but_zero = df2.loc[(df2['CPD_model']==0) & ((df2['CPD']==0))].sort_values('cop_word_counts', ascending=False) print(relevant_but_zero.loc[650870, 'title']) print(relevant_but_zero.loc[650870, 'bodytext']) ### Basic relevance score: ### - 50% human tagged "CPD" ### - 25% "CPD_model" ### - 25% usage of above words df2['CPD_relevance'] = ( 0.5*df2['CPD'] # upweight because it means more + 0.25*df2['CPD_model'] + 0.25*(df2['cop_word_counts']/(2*len(cop_words))).clip(upper=1.) ) ### 55% have relevance = 0 ### df['relevance_tier'] = 0 df.head() ### What number/fraction have score > 0? print(df2.loc[df2['CPD_relevance']>0, 'n_units'].sum(), (df2['CPD_relevance']>0).mean()) ### What number/fraction have score = 0? print(df2.loc[df2['CPD_relevance']==0, 'n_units'].sum(), (df2['CPD_relevance']==0).mean()) ### About half of scores are 0 ### What is the distribution of the nonzero ones? nonzero_scores = df2.loc[df2['CPD_relevance']>0].sort_values('CPD_relevance', ascending=False) f1, ax1 = plt.subplots(1, figsize=[14, 6]) ax1.hist(nonzero_scores['CPD_relevance'], bins=20) 5000*downscale ### Divide this sample into groups of 900 rows each, in order to get ### sizes needed for bins that would be ~5000 each. ### This ould actually be a bit too big, but you get the general idea ### Bins would have to get progressively smaller as we go down to stay equal in number nonzero_scores['CPD_relevance'].iloc[[i*900 for i in range(1, int(np.ceil(nonzero_scores.shape[0]/900)))]]
lib/notebooks/senteval_budgeting.ipynb
chicago-justice-project/article-tagging
mit
(1b) Plural
Let's create a function that turns a word into its plural by adding the letter 's' to the end of the string. Next, we will use the map() function to apply this transformation to every word in the RDD. In Python (and many other languages) string concatenation is expensive; a better alternative is to build a new string using str.format(). Note: the string between the sets of triple quotes is the function's documentation (docstring), which is displayed by the help() command. We will follow the documentation convention suggested for Python and keep this documentation in English.
# EXERCICIO
def Plural(palavra):
    """Adds an 's' to `palavra`.

    Args:
        palavra (str): A string.

    Returns:
        str: A string with 's' added to it.
    """
    return <COMPLETAR>

print Plural('gato')

help(Plural)

assert Plural('rato')=='ratos', 'resultado incorreto!'
print 'OK'
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(1c) Applying the function to the RDD
Transform each word of our RDD into its plural using map(). Next, we will use the collect() command, which returns the RDD as a Python list.
# EXERCICIO
pluralRDD = palavrasRDD.<COMPLETAR>
print pluralRDD.collect()

assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Note: only use the collect() command when you are sure the list will fit in memory. To write the results back to a text file or database we will use another command.
(1d) Using a lambda function
Repeat the creation of an RDD of plurals, but this time using a lambda function.
# EXERCICIO
pluralLambdaRDD = palavrasRDD.<COMPLETAR>
print pluralLambdaRDD.collect()

assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. Use collect() to store the result as a list in the destination variable.
# EXERCICIO
pluralTamanho = (pluralRDD
                 <COMPLETAR>
                 )
print pluralTamanho

assert pluralTamanho==[5,9,5,5,5], 'valores incorretos'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(1f) RDDs of pairs and tuples
To count the frequency of each word in a distributed manner, we must first assign a value to each word in the RDD. This generates a (key, value) data set, so that we can group the data by key and compute the sum of the assigned values. In our case, we will assign the value 1 to each word. An RDD with the key-value tuple structure (k,v) is called a tuple RDD or pair RDD. Let's create our pair RDD using the map() transformation with a lambda function.
# EXERCICIO
palavraPar = palavrasRDD.<COMPLETAR>
print palavraPar.collect()

assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Part 2: Manipulating RDDs of tuples
Let's manipulate our RDD to count the words in the text.
(2a) The groupByKey() function
The groupByKey() function groups all the values of an RDD by key (the first element of the tuple), aggregating the values into a list. This approach has weak points:
The operation requires the distributed data to be moved in bulk so that each record ends up in the correct partition.
The lists can become very large. Imagine counting all the words in Wikipedia: common terms such as "a" and "e" will form a huge list of values that may not fit in the memory of the worker process.
# EXERCICIO
palavrasGrupo = palavraPar.groupByKey()
for chave, valor in palavrasGrupo.collect():
    print '{0}: {1}'.format(chave, list(valor))

assert sorted(palavrasGrupo.mapValues(lambda x: list(x)).collect()) == [('elefante', [1]), ('gato',[1, 1]), ('rato',[1, 2])], 'Valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(2b) Computing the counts
After groupByKey(), our RDD contains elements composed of the word, as key, and an iterator containing all the values corresponding to that key. Using the map() transformation and the sum() function, build a new RDD consisting of (key, sum) tuples.
# EXERCICIO
contagemGroup = palavrasGrupo.<COMPLETAR>
print contagemGroup.collect()

assert sorted(contagemGroup.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(2c) reduceByKey
A more interesting command for counting is reduceByKey(), which creates a new RDD of tuples. This transformation applies the reduce() operation seen in the previous lecture to the values of each key. This way, the reduction function can be applied within each local partition before the data is sent for repartitioning, reducing the total amount of data being moved and avoiding keeping large lists in memory. (A toy illustration follows the exercise cell below.)
# EXERCICIO
contagem = palavraPar.<COMPLETAR>
print contagem.collect()

assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
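To illustrate what reduceByKey() does, separately from the exercise above, here is a minimal sketch on a throwaway pair RDD; toyRDD and its (word, 1) pairs are made up for illustration, and an active SparkContext sc is assumed:
toyRDD = sc.parallelize([('maca', 1), ('uva', 1), ('maca', 1), ('uva', 1), ('uva', 1)])
toyCounts = toyRDD.reduceByKey(lambda x, y: x + y)   # values sharing a key are combined pairwise
print(sorted(toyCounts.collect()))                   # [('maca', 2), ('uva', 3)]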
(2d) Chaining the commands
The most usual way to perform this task, starting from our palavrasRDD RDD, is to chain the map and reduceByKey commands in a single command line.
# EXERCICIO
contagemFinal = (palavrasRDD
                 <COMPLETAR>
                 <COMPLETAR>
                 )
print contagemFinal.collect()

assert sorted(contagemFinal)==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Part 3: Finding the unique words and computing the average count
(3a) Unique words
Compute the number of unique words in the RDD. Use RDD commands from the PySpark API and one of the last RDDs generated in the previous exercises.
# EXERCICIO
palavrasUnicas = <COMPLETAR>
print palavrasUnicas

assert palavrasUnicas==3, 'valor incorreto!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(3b) Computing the average word count
Find the average word frequency using the contagem RDD. Note that the function passed to reduce() is applied to each tuple of the RDD. To sum the counts, it is first necessary to map the RDD to an RDD containing only the frequency values (without the keys).
# EXERCICIO
# add é equivalente a lambda x,y: x+y
from operator import add

total = (contagemFinal
         <COMPLETAR>
         <COMPLETAR>
         )
media = total / float(palavrasUnicas)

print total
print round(media, 2)

assert round(media, 2)==1.67, 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Part 4: Applying our algorithm to a file
(4a) The contaPalavras function
To apply our algorithm generically to several RDDs, let's first create a function to apply it to any data source. This function takes as input an RDD containing a list of keys (words) and returns an RDD of tuples with the keys and their counts in that RDD.
# EXERCICIO
def contaPalavras(chavesRDD):
    """Creates a pair RDD with word counts from an RDD of words.

    Args:
        chavesRDD (RDD of str): An RDD consisting of words.

    Returns:
        RDD of (str, int): An RDD consisting of (word, count) tuples.
    """
    return (chavesRDD
            <COMPLETAR>
            <COMPLETAR>
            )

print contaPalavras(palavrasRDD).collect()

assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(4b) Normalizing the text
When working with real data, we generally need to standardize the attributes so that subtle differences due to measurement error or differing conventions are disregarded. For the next step we will standardize the text to:
Standardize the capitalization of the words (all upper case or all lower case).
Remove punctuation.
Remove spaces at the beginning and end of each word.
Create a removerPontuacao function that converts all text to lower case, removes any punctuation, and strips whitespace from the beginning and end of the word. To do this, use the re library to remove any text that is not a letter, number or space, chaining it with the string functions to strip whitespace and convert to lower case (see Strings).
# EXERCICIO
import re

def removerPontuacao(texto):
    """Removes punctuation, changes to lower case, and strips leading and trailing spaces.

    Note:
        Only spaces, letters, and numbers should be retained. Other characters should
        be eliminated (e.g. it's becomes its). Leading and trailing spaces should be
        removed after punctuation is removed.

    Args:
        texto (str): A string.

    Returns:
        str: The cleaned up string.
    """
    return re.sub(r'[^A-Za-z0-9 ]', '', texto).strip().lower()

print removerPontuacao('Ola, quem esta ai??!')
print removerPontuacao(' Sem espaco e_sublinhado!')

assert removerPontuacao(' O uso de virgulas, embora permitido, nao deve contar. ')=='o uso de virgulas embora permitido nao deve contar', 'string incorreta!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(4c) Loading a text file
For the next part we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text into an RDD, we use the textFile() function, which takes as input the name of the text file we want to use and the number of partitions. The text file name may refer to a local file or a distributed file URI (e.g. hdfs://). We will also apply the removerPontuacao() function to normalize the text and check the first 15 lines with the take() command.
# Apenas execute a célula import os.path import urllib url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' # url do livro arquivo = os.path.join('Data','Aula02','shakespeare.txt') # local de destino: 'Data/Aula02/shakespeare.txt' if os.path.isfile(arquivo): # verifica se já fizemos download do arquivo print 'Arquivo já existe!' else: try: urllib.urlretrieve(url, arquivo) # salva conteúdo da url em arquivo except IOError: print 'Impossível fazer o download: {0}'.format(url) # lê o arquivo com textFile e aplica a função removerPontuacao shakesRDD = (sc .textFile(arquivo, 8) .map(removerPontuacao) ) # zipWithIndex gera tuplas (conteudo, indice) onde indice é a posição do conteudo na lista sequencial # Ex.: sc.parallelize(['gato','cachorro','boi']).zipWithIndex() ==> [('gato',0), ('cachorro',1), ('boi',2)] # sep.join() junta as strings de uma lista através do separador sep. Ex.: ','.join(['a','b','c']) ==> 'a,b,c' print '\n'.join(shakesRDD .zipWithIndex() .map(lambda (linha, num): '{0}: {1}'.format(num,linha)) .take(15) )
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(4d) Extracting the words
Before we can use our contaPalavras() function, we still need to work on our RDD:
We need to generate lists of words instead of lists of sentences.
We need to eliminate empty lines.
Python strings have the split() method, which splits a string by a separator. In our case, we want to split the strings by spaces. Use the map() function to generate a new RDD as a list of words.
# EXERCICIO
shakesPalavrasRDD = shakesRDD.<COMPLETAR>
total = shakesPalavrasRDD.count()

print shakesPalavrasRDD.take(5)
print total
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
As you may have noticed, using the map() function generates a list for each line, creating an RDD containing a list of lists. To solve this problem, Spark has an analogous function called flatMap(), which applies the same transformation as map() but flattens the list-shaped return value into a one-dimensional list.
# EXERCICIO
shakesPalavrasRDD = shakesRDD.flatMap(lambda x: x.split())
total = shakesPalavrasRDD.count()

print shakesPalavrasRDD.top(5)
print total

assert total==927631 or total == 928908, "valor incorreto de palavras!"
print "OK"

assert shakesPalavrasRDD.top(5)==[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],'lista incorreta de palavras'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(4e) Removing empty lines
For the next step we will filter out the empty lines with the filter() command. An empty line is a string with no content.
# EXERCICIO
shakesLimpoRDD = shakesPalavrasRDD.<COMPLETAR>
total = shakesLimpoRDD.count()

print total

assert total==882996, 'valor incorreto!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
(4f) Word count
Now that our RDD contains a list of words, we can apply our contaPalavras() function. Apply the function to our RDD and use the takeOrdered function to print the 15 most frequent words. takeOrdered() can take a second parameter that tells Spark how to order the elements, e.g. takeOrdered(15, key=lambda x: -x): descending order of the values of x.
# EXERCICIO
top15 = <COMPLETAR>
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15))

assert top15 == [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463), (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890), (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],'valores incorretos!'
print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Part 5: Similarity between objects
In this part of the lab we will learn to compute the distance between numerical, categorical and textual attributes.
(5a) Vectors in Euclidean space
When our objects are represented in Euclidean space, we measure the similarity between them through the p-norm, defined by:
$$d(x,y,p) = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{1/p}$$
The most commonly used norms are $p=1,2,\infty$, which reduce to the absolute (Manhattan) distance, the Euclidean distance and the maximum distance:
$$d(x,y,1) = \sum_{i=1}^{n}{|x_i - y_i|}$$
$$d(x,y,2) = (\sum_{i=1}^{n}{|x_i - y_i|^2})^{1/2}$$
$$d(x,y,\infty) = \max(|x_1 - y_1|,|x_2 - y_2|, ..., |x_n - y_n|)$$
import numpy as np # Vamos criar uma função pNorm que recebe como parâmetro p e retorna uma função que calcula a pNorma def pNorm(p): """Generates a function to calculate the p-Norm between two points. Args: p (int): The integer p. Returns: Dist: A function that calculates the p-Norm. """ def Dist(x,y): return np.power(np.power(np.abs(x-y),p).sum(),1/float(p)) return Dist # Vamos criar uma RDD com valores numéricos numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10,100)))) # EXERCICIO # Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma cartPointsRDD = numPointsRDD.<COMPLETAR> # Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2)) # DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD cartPointsParesRDD = cartPointsRDD.<COMPLETAR> # Aplique um mapa para calcular a Distância Euclidiana entre os pares Euclid = pNorm(2) distRDD = cartPointsParesRDD.<COMPLETAR> # Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor # e utilizando os comandos internos do pyspark para o cálculo da min, max, mean statRDD = distRDD.<COMPLETAR> minv, maxv, meanv = statRDD.<COMPLETAR>, statRDD.<COMPLETAR>, statRDD.<COMPLETAR> print minv, maxv, meanv assert (minv.round(2), maxv.round(2), meanv.round(2))==(0.0, 4.70, 3.65), 'Valores incorretos' print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
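As a quick, standalone sanity check on the pNorm factory defined above, the p = 2 case should agree with numpy's built-in Euclidean norm; a minimal sketch with made-up vectors:
import numpy as np

Euclid = pNorm(2)
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 3.0])
print(Euclid(x, y))               # 5.0
print(np.linalg.norm(x - y))      # also 5.0, the built-in 2-norm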
(5b) Categorical values
When our objects are represented by categorical attributes, they have no spatial similarity. To compute the similarity between them, we can first transform our attribute vector into a binary vector indicating, for each possible value of each attribute, whether the object has that value or not. With the binary vector we can use the Hamming distance, defined by:
$$H(x,y) = \sum_{i=1}^{n}{[x_i \neq y_i]}$$
It is also possible to define the Jaccard distance as:
$$J(x,y) = \frac{\sum_{i=1}^{n}{[x_i = y_i]}}{\sum_{i=1}^{n}{\max(x_i, y_i)}}$$
(where $[\cdot]$ equals 1 when the condition holds and 0 otherwise).
# Vamos criar uma função para calcular a distância de Hamming def Hamming(x,y): """Calculates the Hamming distance between two binary vectors. Args: x, y (np.array): Array of binary integers x and y. Returns: H (int): The Hamming distance between x and y. """ return (x!=y).sum() # Vamos criar uma função para calcular a distância de Jaccard def Jaccard(x,y): """Calculates the Jaccard distance between two binary vectors. Args: x, y (np.array): Array of binary integers x and y. Returns: J (int): The Jaccard distance between x and y. """ return (x==y).sum()/float( np.maximum(x,y).sum() ) # Vamos criar uma RDD com valores categóricos catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'], ['medio', 'caro', 'verde'], ['alto', 'barato', 'azul'], ['medio', 'caro', 'vermelho'], ['baixo', 'barato', 'verde'], ])) # EXERCICIO # Crie um RDD de chaves únicas utilizando flatMap chavesRDD = (catPointsRDD .<COMPLETAR> .<COMPLETAR> .<COMPLETAR> ) chaves = dict((v,k) for k,v in enumerate(chavesRDD.collect())) nchaves = len(chaves) print chaves, nchaves assert chaves=={'alto': 0, 'medio': 1, 'baixo': 2, 'barato': 3, 'azul': 4, 'verde': 5, 'caro': 6, 'vermelho': 7}, 'valores incorretos!' print "OK" assert nchaves==8, 'número de chaves incorreta' print "OK" def CreateNP(atributos,chaves): """Binarize the categorical vector using a dictionary of keys. Args: atributos (list): List of attributes of a given object. chaves (dict): dictionary with the relation attribute -> index Returns: array (np.array): Binary array of attributes. """ array = np.zeros(len(chaves)) for atr in atributos: array[ chaves[atr] ] = 1 return array # Converte o RDD para o formato binário, utilizando o dict chaves binRDD = catPointsRDD.map(lambda rec: (rec[0],CreateNP(rec[1], chaves))) binRDD.collect() # EXERCICIO # Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma cartBinRDD = binRDD.<COMPLETAR> # Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2)) # DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD cartBinParesRDD = cartBinRDD.<COMPLETAR> # Aplique um mapa para calcular a Distância de Hamming e Jaccard entre os pares hamRDD = cartBinParesRDD.<COMPLETAR> jacRDD = cartBinParesRDD.<COMPLETAR> # Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor # e utilizando os comandos internos do pyspark para o cálculo da min, max, mean statHRDD = hamRDD.<COMPLETAR> statJRDD = jacRDD.<COMPLETAR> Hmin, Hmax, Hmean = statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR> Jmin, Jmax, Jmean = statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR> print "\t\tMin\tMax\tMean" print "Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean ) print "Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format( Jmin, Jmax, Jmean ) assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00,6.00,3.52), 'valores incorretos' print "OK" assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.33,2.67,1.14), 'valores incorretos' print "OK"
Spark/Lab2_Spark_PySpark.ipynb
folivetti/BIGDATA
mit
Now let us implement an extension algorithm. We are leaving out the cancelling step for clarity.
def join(a, b):
    """Return the join of 2 simplices a and b."""
    return tuple(sorted(set(a).union(b)))

def extend(K, f):
    """Extend the field to the complex K. Function on vertices is given in f.
    Returns the pair V, C, where V is the dictionary containing discrete gradient
    vector field and C is the list of all critical cells.
    """
    V = dict()
    C = []
    for v in (s for s in K if len(s)==1):
        # Add your own code
        pass
    return V, C
2015_2016/lab13/Extending values on vertices-template.ipynb
gregorjerse/rt2
gpl-3.0
As usual, we will begin by creating a small dataset to test with. It will consist of 10 samples, where each input observation has four features and targets are scalar values.
import numpy as np

nb_samples = 10

X = np.outer(np.arange(1, nb_samples + 1, dtype=np.uint8),
             np.arange(1, 4 + 1, dtype=np.uint8))
y = np.arange(nb_samples, dtype=np.uint8)

for i in range(nb_samples):
    print('Input: {} -> Target: {}'.format(X[i], y[i]))
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
The data is written using a with statement.
with px.Writer(dirpath='data', map_size_limit=10, ram_gb_limit=1) as db:
    db.put_samples('input', X, 'target', y)
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
To be sure the data was stored correctly, we will read the data back - again using a with statement.
with px.Reader('data') as db:
    print(db)
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
Working with PyTorch
try:
    import torch
    import torch.utils.data
except ImportError:
    raise ImportError('Could not import the PyTorch library `torch` or '
                      '`torch.utils.data`. Please refer to '
                      'https://pytorch.org/ for installation instructions.')
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
In pyxis.torch we have implemented a wrapper around torch.utils.data.Dataset called pyxis.torch.TorchDataset. This object is not imported into the pyxis name space because it relies on PyTorch being installed. As such, we first need to import pyxis.torch:
import pyxis.torch as pxt
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
pyxis.torch.TorchDataset has a single constructor argument: dirpath, i.e. the location of the pyxis LMDB.
dataset = pxt.TorchDataset('data')
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
The pyxis.torch.TorchDataset object has only three methods: __len__, __getitem__, and __repr__, each of which you can see an example of below:
len(dataset)
dataset[0]
dataset
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
pyxis.torch.TorchDataset can be directly combined with torch.utils.data.DataLoader to create an iterator type object:
use_cuda = True and torch.cuda.is_available()
kwargs = {"num_workers": 4, "pin_memory": True} if use_cuda else {}

loader = torch.utils.data.DataLoader(dataset, batch_size=2, shuffle=False, **kwargs)

for i, d in enumerate(loader):
    print('Batch:', i)
    print('\t', d['input'])
    print('\t', d['target'])
examples/torch-dataset.ipynb
vicolab/ml-pyxis
mit
Parsing the Data
We got some datetimes back from the API -- but what do these mean?! We can use Python to find out! Let's use a new library, arrow, to parse them. http://crsmithdev.com/arrow/
import arrow
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Open your terminal and run: pip install arrow
from arrow.arrow import Arrow

for item in data['response']:
    datetime = Arrow.fromtimestamp(item['risetime'])
    print(
        'The ISS will be visible over Charlottesville on {} at {} for {} seconds.'.format(
            datetime.date(),
            datetime.time(),
            item['duration']
        )
    )
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
pokeapi = 'http://pokeapi.co/api/v2/generation/1/'
pokedata = requests.get(pokeapi).json()

# Take that data, print out a nicely formatted version of the first 5 results
print(json.dumps(pokedata['pokemon_species'][:5], indent=4))

# Let's get more info about the first pokemon on the list
# By following the chain of linked data

# Narrow down the url we'd like to get
bulbasaur_url = pokedata['pokemon_species'][0]['url']

# request data from that URL
bulbasaur_data = requests.get(bulbasaur_url).json()

# Let's remove the 'flavor text' because that's really long
del bulbasaur_data['flavor_text_entries']

bulbasaur_data
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Some Great APIs YOU can use!
Twitter, Google Maps, Twilio, Yelp, Spotify, Genius ...and so many more! Many require some kind of authentication, so they aren't as simple as the ISS or PokeAPI.
Access an OAI-PMH Feed!
Many institutions have an OAI-PMH based API. This is great because they all have a unified way of interacting with the data in the repositories, just with different host URLs. You can write common code that will interact with most OAI-PMH feeds, changing only the base access URL.
from furl import furl

vt_url = furl('http://vtechworks.lib.vt.edu/oai/request')
vt_url.args['verb'] = 'ListRecords'
vt_url.args['metadataPrefix'] = 'oai_dc'
vt_url.url

data = requests.get(vt_url.url)
data.content
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Let's parse this! If needed, install lxml first: conda install lxml
from lxml import etree

etree_element = etree.XML(data.content)
etree_element
etree_element.getchildren()

# A little namespace parsing and cleanup
namespaces = etree_element.nsmap
namespaces['ns0'] = etree_element.nsmap[None]
del namespaces[None]

records = etree_element.xpath('//ns0:record', namespaces=namespaces)
records[:10]

# What's inside one of these records?
one_record = records[0]
one_record.getchildren()

# We want to check out the "metadata" element, which is the second in the list
# Let's make sure to get those namespaces too
# Here's a cool trick to join 2 dictionaries in python 3!
namespaces = {**namespaces, **one_record[1][0].nsmap}
del namespaces[None]

# Now we have namespaces we can use!
namespaces

# Use those namespaces to get titles
titles = records[0].xpath('//dc:title/node()', namespaces=namespaces)
titles[:10]
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
SHARE Search API
Also a fantastic resource! It gives you one way to access the data, instead of writing custom code to parse data coming from both JSON and XML APIs.
The SHARE Search Schema
The SHARE search API is built on a tool called Elasticsearch. It lets you search a subset of SHARE's normalized metadata in a simple format. Here are the fields available in SHARE's Elasticsearch endpoint:
- 'title'
- 'language'
- 'subject'
- 'description'
- 'date'
- 'date_created'
- 'date_modified'
- 'date_updated'
- 'date_published'
- 'tags'
- 'links'
- 'awards'
- 'venues'
- 'sources'
- 'contributors'
You can see a formatted version of the base results from the API by visiting the SHARE Search API URL.
SHARE_SEARCH_API = 'https://staging-share.osf.io/api/search/abstractcreativework/_search'

from furl import furl

search_url = furl(SHARE_SEARCH_API)
search_url.args['size'] = 3

recent_results = requests.get(search_url.url).json()
recent_results = recent_results['hits']['hits']
recent_results

print('The request URL is {}'.format(search_url.url))
print('----------')
for result in recent_results:
    print(
        '{} -- from {}'.format(
            result['_source']['title'],
            result['_source']['sources']
        )
    )
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Sending a Query to the SHARE Search API First, we'll define a function to do the hard work for us. It will take 2 parameters, a URL, and a query to send to the search API.
import json def query_share(url, query): # A helper function that will use the requests library, # pass along the correct headers, and make the query we want headers = {'Content-Type': 'application/json'} data = json.dumps(query) return requests.post(url, headers=headers, data=data).json() search_url.args = None # reset the args so that we remove our old query arguments. search_url.url # Show the URL that we'll be requesting to make sure the args were cleared tags_query = { "query": { "exists": { "field": "tags" } } } missing_tags_query = { "query": { "bool": { "must_not": { "exists": { "field": "tags" } } } } } with_tags = query_share(search_url.url, tags_query) missing_tags = query_share(search_url.url, missing_tags_query) total_results = requests.get(search_url.url).json()['hits']['total'] with_tags_percent = (float(with_tags['hits']['total'])/total_results)*100 missing_tags_percent = (float(missing_tags['hits']['total'])/total_results)*100 print( '{} results out of {}, or {}%, have tags.'.format( with_tags['hits']['total'], total_results, format(with_tags_percent, '.2f') ) ) print( '{} results out of {}, or {}%, do NOT have tags.'.format( missing_tags['hits']['total'], total_results, format(missing_tags_percent, '.2f') ) ) print('------------') print('As a little sanity check....') print('{} + {} = {}%'.format(with_tags_percent, missing_tags_percent, format(with_tags_percent + missing_tags_percent, '.2f')))
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Other SHARE APIs SHARE has a host of other APIs that provide direct access to the data stored in SHARE. You can read more about the SHARE Data Models here: http://share-research.readthedocs.io/en/latest/share_models.html
SHARE_API = 'https://staging-share.osf.io/api/' share_endpoints = requests.get(SHARE_API).json() share_endpoints
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Visit the API In Your Browser
You can visit https://staging-share.osf.io/api/ and see the data formatted in "pretty printed" JSON.
SHARE Providers API
Access the information about the providers that SHARE harvests from.
SHARE_PROVIDERS = 'https://staging-share.osf.io/api/providers/' data = requests.get(SHARE_PROVIDERS).json() data
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
We can print that out a little nicer using a loop and the lookups that we'd like!
print('Here are the first 10 Providers:')

for source in data['results']:
    print(
        '{}\n{}\n{}\n'.format(
            source['long_title'],
            source['home_page'],
            source['provider_name']
        )
    )
SHARE_Curation_Associates_Overview.ipynb
erinspace/share_tutorials
apache-2.0
Fixed-point approximation The regression results are converted to a fixed-point representation, with the exponent in Q6 and the scale in Q3.
plt.figure(figsize=(7, 6)) plt.axis('equal') plt.xticks([0, 10]) plt.yticks([0, 10]) plt.minorticks_on() plt.grid(b=True, which='major') plt.grid(b=True, which='minor', alpha=0.2) segments = dict() for b, accumulator in partials.items(): Ax, Ay, Sxy, Sxx, n, minx, maxx = accumulator fti = b & 3 beta = Sxy/Sxx alpha = Ay - beta*Ax exp = int(np.round(beta*2**6)) beta_ = exp/2**6 alpha_ = Ay - beta_*Ax scale = int(np.round(np.exp2(3 - alpha_/2**57))) label = ['I', 'P', 'B0', 'B1'][fti] print('%2s: exp=%d scale=%d bucket=%d' % (label, exp, scale, b>>2)) xs, ys = segments.get(label, ([], [])) xs = [minx/2**57, maxx/2**57] ys = [xs[0]*beta_ + alpha_/2**57, xs[1]*beta_ + alpha_/2**57] xs_, ys_ = segments.get(label, ([], [])) xs_.extend(xs) ys_.extend(ys) segments[label] = (xs_, ys_) best = dict() for label, xy in segments.items(): plt.plot(xy[0], xy[1], label=label) plt.legend();
doc/regress_log-bitrate_wrt_log-quantizer.ipynb
xiph/rav1e
bsd-2-clause
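For reference, a Qn fixed-point number stores a real value scaled by 2^n and rounded to an integer; the exponent conversion above, int(np.round(beta*2**6)), is exactly this rounding with n = 6. A minimal standalone sketch (to_q, from_q and the example slope are made up for illustration):
import numpy as np

def to_q(value, n):
    """Round a real value to Qn fixed point (an integer holding value * 2**n)."""
    return int(np.round(value * 2**n))

def from_q(fixed, n):
    """Recover the approximate real value from a Qn integer."""
    return fixed / 2**n

beta = 0.837                        # arbitrary example slope
exp_q6 = to_q(beta, 6)              # 54
print(exp_q6, from_q(exp_q6, 6))    # 54 0.84375 -- quantized to steps of 1/64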
The endpoints of each linear regression, rounding only the exponent, are detailed in the following output. We use a cubic interpolation of these points to adjust the segment boundaries.
pprint(segments)
doc/regress_log-bitrate_wrt_log-quantizer.ipynb
xiph/rav1e
bsd-2-clause
Piecewise-linear fit We applied a 3-segment piecewise-linear fit. The boundaries were aligned to integer values of pixels-per-bit, while optimizing for similarity to a cubic interpolation of the control points (log-quantizer as a function of log-bitrate).
plt.figure(figsize=(7, 6)) plt.axis('equal') plt.xticks([0, 10]) plt.yticks([0, 10]) plt.minorticks_on() plt.grid(b=True, which='major') plt.grid(b=True, which='minor', alpha=0.2) from scipy import optimize for ft, xy in segments.items(): f = np.poly1d(np.polyfit(np.array(xy[1]).astype(float), np.array(xy[0]).astype(float), 3)) ys = np.linspace(min(xy[1]), max(xy[1]), 20) def cost(X): y0 = np.array([ys[0], X[0], X[1], ys[-1]]).astype(float) x0 = f(y0) f0 = np.where(ys<X[0], np.poly1d(np.polyfit(y0[:2], x0[:2], 1))(ys), np.where(ys<X[1], np.poly1d(np.polyfit(y0[1:3], x0[1:3], 1))(ys), np.poly1d(np.polyfit(y0[2:], x0[2:], 1))(ys))) return ((f0-f(ys))**2).sum() X = optimize.fmin(cost, [2, 5], disp=0) X = np.log2(np.ceil(np.exp2(X))) print(ft, np.exp2(X), np.round(f(X)*2**17)) y0 = [ys.min(), X[0], X[1], ys.max()] x0 = f(y0) plt.plot(x0, y0, '.--', lw=1, c='grey') plt.plot(f(ys), ys, label=ft) plt.legend();
doc/regress_log-bitrate_wrt_log-quantizer.ipynb
xiph/rav1e
bsd-2-clause
MMW production API endpoint base url.
api_url = "https://app.wikiwatershed.org/api/"
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
The job is not completed instantly and the results are not returned directly by the API request that initiated the job. The user must first issue an API request to confirm that the job is complete, then fetch the results. The demo presented here performs automated retries (checks) until the server confirms the job is completed, then requests the JSON results and converts (deserializes) them into a Python dictionary.
def get_job_result(api_url, s, jobrequest):
    url_tmplt = api_url + "jobs/{job}/"
    get_url = url_tmplt.format
    result = ''
    while not result:
        get_req = requests_retry_session(session=s).get(get_url(job=jobrequest['job']))
        result = json.loads(get_req.content)['result']
    return result

s = requests.Session()

APIToken = 'Token 0501d9a98b8170a41d57df8ce82c000c477c621a'  # HIDE THE API TOKEN
s.headers.update({
    'Authorization': APIToken,
    'Content-Type': 'application/json'
})
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
2. Construct AOI GeoJSON for job request Parameters passed to the "analyze" API requests.
from shapely.geometry import box, MultiPolygon

width = 0.0004  # Looks like using a width smaller than 0.0002 causes a problem with the API?

# GOOS: (-88.5552, 40.4374) elev 240.93. Agriculture Site—Goose Creek (Corn field) Site (GOOS) at IML CZO
# SJER: (-119.7314, 37.1088) elev 403.86. San Joaquin Experimental Reserve Site (SJER) at South Sierra CZO
lon, lat = -119.7314, 37.1088

bbox = box(lon-0.5*width, lat-0.5*width, lon+0.5*width, lat+0.5*width)
payload = MultiPolygon([bbox]).__geo_interface__

json_payload = json.dumps(payload)
payload
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
3. Issue job requests, fetch job results when done, then examine results. Repeat for each request type
# convenience function, to simplify the request calls, below
def analyze_api_request(api_name, s, api_url, json_payload):
    post_url = "{}analyze/{}/".format(api_url, api_name)
    post_req = requests_retry_session(session=s).post(post_url, data=json_payload)
    jobrequest_json = json.loads(post_req.content)
    # Fetch and examine job result
    result = get_job_result(api_url, s, jobrequest_json)
    return result
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
Issue job request: analyze/land/
result = analyze_api_request('land', s, api_url, json_payload)
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
Everything below is just exploration of the results. Examine the content of the results (as JSON, and Python dictionaries)
type(result), result.keys()
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item.
result['survey'].keys()

categories = result['survey']['categories']
len(categories), categories[1]

land_categories_nonzero = [d for d in categories if d['coverage'] > 0]
land_categories_nonzero
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
Issue job request: analyze/terrain/
result = analyze_api_request('terrain', s, api_url, json_payload)
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item.
categories = result['survey']['categories']
len(categories), categories

[d for d in categories if d['type'] == 'average']
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
Issue job request: analyze/climate/
result = analyze_api_request('climate', s, api_url, json_payload)
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
result is a dictionary with one item, survey. This item in turn is a dictionary with 3 items: displayName, name, categories. The first two are just labels. The data are in the categories item.
categories = result['survey']['categories']
len(categories), categories[:2]

ppt = [d['ppt'] for d in categories]
tmean = [d['tmean'] for d in categories]

# ppt is in cm, right?
sum(ppt)

import calendar
import numpy as np

calendar.mdays

# Annual tmean needs to be weighted by the number of days per month
sum(np.asarray(tmean) * np.asarray(calendar.mdays[1:]))/365
MMW_API_landproperties_demo.ipynb
emiliom/stuff
cc0-1.0
to make use of periodicity of the provided data grid, use
m.add( PeriodicBox( boxOrigin, boxSize ) )
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
to not follow particles forever, use
m.add( MaximumTrajectoryLength( 400*Mpc ) )
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
Uniform injection
The simplest scenario for UHECR sources is a uniform distribution of the sources. This can be realized via use of
source = Source()
source.add( SourceUniformBox( boxOrigin, boxSize ))
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
Injection following density field The distribution of gas density can be used as a probability density function for the injection of particles from random positions.
filename_density = "mass-density_clues.dat"  ## filename of the density field

source = Source()

## initialize grid to hold field values
mgrid = ScalarGrid( gridOrigin, gridSize, spacing )
## load values to grid
loadGrid( mgrid, filename_density )
## add source module to simulation
source.add( SourceDensityGrid( mgrid ) )
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
Mass Halo injection Alternatively, for the CLUES models, we also provide a list of mass halo positions. These positions can be used as sources with the same properties by use of the following
import numpy as np filename_halos = 'clues_halos.dat' # read data from file data = np.loadtxt(filename_halos, unpack=True, skiprows=39) sX = data[0] sY = data[1] sZ = data[2] mass_halo = data[5] ## find only those mass halos inside the provided volume (see Hackstein et al. 2018 for more details) Xdown= sX >= 0.25 Xup= sX <= 0.75 Ydown= sY >= 0.25 Yup= sY <= 0.75 Zdown= sZ >= 0.25 Zup= sZ <= 0.75 insider= Xdown*Xup*Ydown*Yup*Zdown*Zup ## transform relative positions to physical positions within given grid sX = (sX[insider]-0.25)*2*size sY = (sY[insider]-0.25)*2*size sZ = (sZ[insider]-0.25)*2*size ## collect all sources in the multiple sources container smp = SourceMultiplePositions() for i in range(0,len(sX)): pos = Vector3d( sX[i], sY[i], sZ[i] ) smp.add( pos, 1. ) ## add collected sources source = Source() source.add( smp )
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
additional source properties
## use isotropic emission from all sources
source.add( SourceIsotropicEmission() )

## set particle type to be injected
A, Z = 1, 1  # proton
source.add( SourceParticleType( nucleusId(A,Z) ) )

## set injected energy spectrum
Emin, Emax = 1*EeV, 1000*EeV
specIndex = -1
source.add( SourcePowerLawSpectrum( Emin, Emax, specIndex ) )
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
Observer To register particles, an observer has to be defined. In the provided constrained simulations the position of the Milky Way is, by definition, in the center of the volume.
filename_output = 'data/output_MW.txt'

obsPosition = Vector3d(0.5*size,0.5*size,0.5*size)  # position of observer, MW is in center of constrained simulations
obsSize = 800*kpc  ## physical size of observer sphere

## initialize observer that registers particles that enter into sphere of given size around its position
obs = Observer()
obs.add( ObserverSmallSphere( obsPosition, obsSize ) )
## write registered particles to output file
obs.onDetection( TextOutput( filename_output ) )
## choose to not further follow particles paths once detected
obs.setDeactivateOnDetection(True)
## add observer to module list
m.add(obs)
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
finally run the simulation by
N = 1000

m.showModules()          ## optional, see summary of loaded modules
m.setShowProgress(True)  ## optional, see progress during runtime
m.run(source, N, True)   ## perform simulation with N particles injected from source
doc/pages/example_notebooks/extragalactic_fields/MHD_models.v4.ipynb
CRPropa/CRPropa3
gpl-3.0
The Data Let's generate a simple Cepheids-like dataset: observations of $y$ with reported uncertainties $\sigma_y$, at given $x$ values.
import numpy as np
import pylab as plt

xlimits = [0,350]
ylimits = [0,250]

def generate_data(seed=None):
    """
    Generate a 30-point data set, with x and sigma_y as standard, but with
    y values given by

    y = a_0 + a_1 * x + a_2 * x**2 + a_3 * x**3 + noise
    """
    Ndata = 30
    xbar = 0.5*(xlimits[0] + xlimits[1])
    xstd = 0.25*(xlimits[1] - xlimits[0])
    if seed is not None:
        np.random.seed(seed=seed)
    x = xbar + xstd * np.random.randn(Ndata)
    meanerr = 0.025*(xlimits[1] - xlimits[0])
    sigmay = meanerr + 0.3 * meanerr * np.abs(np.random.randn(Ndata))
    a = np.array([37.2,0.93,-0.002,0.0])
    y = a[0] + a[1] * x + a[2] * x**2 + a[3] * x**3 + sigmay*np.random.randn(len(x))
    return x,y,sigmay

def plot_yerr(x, y, sigmay):
    """
    Plot an (x,y,sigma) dataset as a set of points with error bars
    """
    plt.errorbar(x, y, yerr=sigmay, fmt='.', ms=7, lw=1, color='k')
    plt.xlabel('$x$', fontsize=16)
    plt.ylabel('$y$', fontsize=16)
    plt.xlim(*xlimits)
    plt.ylim(*ylimits)
    return

(x, y, sigmay) = generate_data(seed=13)
plot_yerr(x, y, sigmay)
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Fitting a Gaussian Process Let's follow Jake VanderPlas' example, to see how to work with the scikit-learn v0.18 Gaussian Process regression model.
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF as SquaredExponential
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Defining a GP First we define a kernel function for populating the covariance matrix of our GP. To avoid confusion, the Gaussian kernel is referred to as a "squared exponential" (or a "radial basis function", RBF). The squared exponential kernel has one hyperparameter, the length scale, which is the Gaussian width.
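For reference (our addition, not part of the original notebook), the unit-amplitude squared exponential kernel with length scale $h$ can be written as
$$k(x, x') = \exp\!\left(-\frac{(x - x')^2}{2h^2}\right),$$
so points separated by much more than $h$ are nearly uncorrelated.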
h = 10.0

kernel = SquaredExponential(length_scale=h, length_scale_bounds=(0.01, 1000.0))
gp0 = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Now, let's draw some samples from the unconstrained process, or equivalently, the prior. Each sample is a function $y(x)$, which we evaluate on a grid. We'll need to assert a value for the kernel hyperparameter $h$, which dictates the correlation length between the datapoints. That will allow us to compute a mean function (which for simplicity we'll set to the mean observed $y$ value), and a covariance matrix that captures the correlations between datapoints.
np.random.seed(1)

xgrid = np.atleast_2d(np.linspace(0, 399, 100)).T
print("y(x) will be predicted on a grid of length", len(xgrid))

# Draw three sample y(x) functions:
draws = gp0.sample_y(xgrid, n_samples=3)

print("Drew 3 samples, stored in an array with shape ", draws.shape)
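As an aside (not in the original notebook), the prior covariance on this grid is simply the kernel matrix, which scikit-learn kernel objects let you evaluate directly; a minimal sketch:
# Kernel objects are callable: kernel(X) returns the covariance matrix on X.
K = kernel(xgrid)
print("Prior covariance matrix shape:", K.shape)      # (100, 100) for this grid
print("A diagonal element (prior variance):", K[0, 0])  # 1.0 for a unit-amplitude RBF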
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Let's plot these, to see what our prior looks like.
# Start a 4-panel figure:
fig = plt.figure(figsize=(10,10))

# Plot our three prior draws:
ax = fig.add_subplot(221)
ax.plot(xgrid, draws[:,0], '-r')
ax.plot(xgrid, draws[:,1], '-g')
ax.plot(xgrid, draws[:,2], '-b', label='Rescaled prior sample $y(x)$')
ax.set_xlim(0, 399)
ax.set_ylim(-5, 5)
ax.set_xlabel('$x$')
ax.set_ylabel('$y(x)$')
ax.legend(fontsize=8);
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Each predicted $y(x)$ is drawn from a multivariate Gaussian with unit variance on the diagonal and off-diagonal elements determined by the covariance function. Try changing h to see what happens to the smoothness of the predictions: go back up to the cell where h is assigned, re-run that cell, and then re-run the subsequent ones. For our data to be well interpolated by this Gaussian Process, it will need to be rescaled such that it has zero mean and unit variance. There are standard methods for doing this, but we'll do the rescaling here for transparency - and so we know what to add back in later!
class Rescale():
    def __init__(self, y, err):
        self.original_data = y
        self.original_err = err
        self.mean = np.mean(y)
        self.std = np.std(y)
        self.transform()
        return

    def transform(self):
        self.y = (self.original_data - self.mean) / self.std
        self.err = self.original_err / self.std
        return

    def invert(self, scaled_y, scaled_err):
        return (scaled_y * self.std + self.mean, scaled_err * self.std)

rescaled = Rescale(y, sigmay)
print('Mean, variance of original data: ',np.round(np.mean(y)), np.round(np.var(y)))
print('Mean, variance of rescaled data: ',np.round(np.mean(rescaled.y)), np.round(np.var(rescaled.y)))
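For reference, the "standard methods" alluded to above include scikit-learn's StandardScaler; a minimal sketch, equivalent in effect to the Rescale class and not part of the original notebook:
from sklearn.preprocessing import StandardScaler

# StandardScaler removes the mean and scales to unit variance, column-wise.
scaler = StandardScaler()
y_scaled = scaler.fit_transform(y.reshape(-1, 1)).ravel()
err_scaled = sigmay / scaler.scale_[0]   # uncertainties are only divided by the std
y_back = scaler.inverse_transform(y_scaled.reshape(-1, 1)).ravel()  # undo the scaling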
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Check that we can undo the scaling, for any y and sigmay:
y2, sigmay2 = rescaled.invert(rescaled.y, rescaled.err)

print('Mean, variance of inverted, rescaled data: ',np.round(np.mean(y2)), np.round(np.var(y2)))
print('Maximum differences in y, sigmay, after round trip: ',np.max(np.abs(y2 - y)), np.max(np.abs(sigmay2 - sigmay)))
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Constraining the GP Now, using the same covariance function, let's "fit" the GP by constraining each draw from the GP to go through our data points, and by optimizing the length scale hyperparameter h. Let's first look at how this would work for two data points with no uncertainty.
# Choose two of our (rescaled) datapoints:
x1 = np.array([x[10], x[12]])
rescaled_y1 = np.array([rescaled.y[10], rescaled.y[12]])
rescaled_sigmay1 = np.array([rescaled.err[10], rescaled.err[12]])

# Instantiate a GP model, with initial length_scale h=10:
kernel = SquaredExponential(length_scale=10.0, length_scale_bounds=(0.01, 1000.0))
gp1 = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)

# Fit it to our two noiseless datapoints:
gp1.fit(x1[:, None], rescaled_y1)

# We have fit for the length scale parameter: print the result here:
params = gp1.kernel_.get_params()
print('Best-fit kernel length scale =', params['length_scale'],'cf. input',10.0)

# Now predict y(x) everywhere on our xgrid:
rescaled_ygrid1, rescaled_ygrid1_err = gp1.predict(xgrid, return_std=True)

# And undo scaling, of both y(x) on our grid, and our two constraining data points:
ygrid1, ygrid1_err = rescaled.invert(rescaled_ygrid1, rescaled_ygrid1_err)
y1, sigmay1 = rescaled.invert(rescaled_y1, rescaled_sigmay1)

ax = fig.add_subplot(222)

ax.plot(xgrid, ygrid1, '-', color='gray', label='Posterior mean $y(x)$')
ax.fill(np.concatenate([xgrid, xgrid[::-1]]),
        np.concatenate([(ygrid1 - ygrid1_err), (ygrid1 + ygrid1_err)[::-1]]),
        alpha=0.3, fc='gray', ec='None', label='68% confidence interval')
ax.plot(x1, y1, '.k', ms=6, label='Noiseless constraints')
ax.set_xlim(0, 399)
ax.set_ylim(0, 399)
ax.set_xlabel('$x$')

fig
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
In the absence of information, the GP tends to produce $y(x)$ that fluctuate around the prior mean function, which we chose to be a constant. Let's draw some samples from the posterior PDF, and overlay them.
draws = gp1.sample_y(xgrid, n_samples=3)
for k in range(3):
    draws[:,k], dummy = rescaled.invert(draws[:,k], np.zeros(len(xgrid)))

ax.plot(xgrid, draws[:,0], '-r')
ax.plot(xgrid, draws[:,1], '-g')
ax.plot(xgrid, draws[:,2], '-b', label='Posterior sample $y(x)$')
ax.legend(fontsize=8)

fig
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
See how the posterior sample $y(x)$ functions all pass through the constrained points. Including Observational Uncertainties The mechanism for including uncertainties is a little esoteric: scikit-learn wants to be given a "nugget," called alpha, which is added to the diagonal elements of the covariance matrix.
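In other words (our gloss, not the original notebook's), the covariance used in the fit becomes
$$K(X, X) \;\longrightarrow\; K(X, X) + \mathrm{diag}(\boldsymbol{\alpha}),$$
so the per-point values passed as alpha play the role of noise variances; the cell below passes $\alpha_i = (\sigma_{y,i}/y_i)^2$ for the rescaled data.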
# Choose two of our datapoints:
x2 = np.array([x[10], x[12]])
rescaled_y2 = np.array([rescaled.y[10], rescaled.y[12]])
rescaled_sigmay2 = np.array([rescaled.err[10], rescaled.err[12]])

# Instantiate a GP model, including observational errors:
kernel = SquaredExponential(length_scale=10.0, length_scale_bounds=(0.01, 1000.0))
gp2 = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9,
                               alpha=(rescaled_sigmay2 / rescaled_y2) ** 2,
                               random_state=0)

# Fit it to our two noisy datapoints:
gp2.fit(x2[:, None], rescaled_y2)

# We have fit for the length scale parameter: print the result here:
params = gp2.kernel_.get_params()
print('Best-fit kernel length scale =', params['length_scale'],'cf. input',10.0)

# Now predict y(x) everywhere on our xgrid:
rescaled_ygrid2, rescaled_ygrid2_err = gp2.predict(xgrid, return_std=True)

# And undo scaling:
ygrid2, ygrid2_err = rescaled.invert(rescaled_ygrid2, rescaled_ygrid2_err)
y2, sigmay2 = rescaled.invert(rescaled_y2, rescaled_sigmay2)

# Draw three posterior sample y(x):
draws = gp2.sample_y(xgrid, n_samples=3)
for k in range(3):
    draws[:,k], dummy = rescaled.invert(draws[:,k], np.zeros(len(xgrid)))

ax = fig.add_subplot(223)

def gp_plot(ax, xx, yy, ee, datax, datay, datae, samples, legend=True):
    ax.cla()
    ax.plot(xx, yy, '-', color='gray', label='Posterior mean $y(x)$')
    ax.fill(np.concatenate([xx, xx[::-1]]),
            np.concatenate([(yy - ee), (yy + ee)[::-1]]),
            alpha=0.3, fc='gray', ec='None', label='68% confidence interval')
    ax.errorbar(datax, datay, datae, fmt='.k', ms=6, label='Noisy constraints')
    ax.set_xlim(0, 399)
    ax.set_ylim(0, 399)
    ax.set_xlabel('$x$')
    ax.set_ylabel('$y(x)$')
    ax.plot(xgrid, samples[:,0], '-r')
    ax.plot(xgrid, samples[:,1], '-g')
    ax.plot(xgrid, samples[:,2], '-b', label='Posterior sample $y(x)$')
    if legend:
        ax.legend(fontsize=8)
    return

gp_plot(ax, xgrid, ygrid2, ygrid2_err, x2, y2, sigmay2, draws, legend=True)

fig
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Now, the posterior sample $y(x)$ functions pass through the constraints within the errors. Using all the Data Now let's extend the above example to use all of our datapoints. This additional information should pull the predictions further away from the initial mean function. We'll also compute the marginal log likelihood of the best fit hyperparameter, in case we want to compare this choice of kernel with another one (in the exercises, for example).
# Use all of our datapoints:
x3 = x
rescaled_y3 = rescaled.y
rescaled_sigmay3 = rescaled.err

# Instantiate a GP model, including observational errors:
kernel = SquaredExponential(length_scale=10.0, length_scale_bounds=(0.01, 1000.0))
# Could comment this out, and then import and use an
# alternative kernel here.

gp3 = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9,
                               alpha=(rescaled_sigmay3 / rescaled_y3) ** 2,
                               random_state=0)

# Fit it to our noisy datapoints:
gp3.fit(x3[:, None], rescaled_y3)

# Now predict y(x) everywhere on our xgrid:
rescaled_ygrid3, rescaled_ygrid3_err = gp3.predict(xgrid, return_std=True)

# And undo scaling:
ygrid3, ygrid3_err = rescaled.invert(rescaled_ygrid3, rescaled_ygrid3_err)
y3, sigmay3 = rescaled.invert(rescaled_y3, rescaled_sigmay3)

# We have fitted the length scale parameter - print the result here:
params = gp3.kernel_.get_params()
print('Kernel: {}'.format(gp3.kernel_))
print('Best-fit kernel length scale =', params['length_scale'],'cf. input',10.0)
print('Marginal log-Likelihood: {:.3f}'.format(gp3.log_marginal_likelihood(gp3.kernel_.theta)))

# Draw three posterior sample y(x):
draws = gp3.sample_y(xgrid, n_samples=3)
for k in range(3):
    draws[:,k], dummy = rescaled.invert(draws[:,k], np.zeros(len(xgrid)))

ax = fig.add_subplot(224)
gp_plot(ax, xgrid, ygrid3, ygrid3_err, x3, y3, sigmay3, draws, legend=True)

fig

# fig.savefig('../../lessons/graphics/mfm_gp_example_pjm.png')
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
We now see the Gaussian Process model providing a smooth interpolation between the points. The posterior samples show fluctuations, but all are plausible under our assumptions. Exercises Try a different kernel function, from the list given in the scikit-learn docs here. "Matern" could be a good choice. Do you get a higher value of the marginal log likelihood when you fit this model? Under what circumstances would this marginal log likelihood approximate the Bayesian Evidence well? Extend the analysis above to do a posterior predictive model check of your GP inference. You'll need to generate new replica datasets from posterior draws from the fitted GP. Use the discrepancy measure $T(\theta,d) = -2 \log L(\theta;d)$. Does your GP provide an adequate fit to the data? Could it be over-fitting? What could you do to prevent this? There's some starter code for you below. 1. Alternative kernel Go back to the gp3 cell, and try something new... Kernel: RBF Best-fit kernel length scale = Marginal log-Likelihood: Kernel: ??? Best-fit kernel length scale = Marginal log-Likelihood: 2. Posterior Predictive Model Check For this we need to draw models from our GP, and then generate a dataset from each one. We'll do this in the function below.
def generate_replica_data(xgrid, ygrid, seed=None):
    """
    Generate a 30-point data set, with x and sigma_y as standard, but with
    y values given by the "lookup tables" (gridded function) provided.
    """
    Ndata = 30
    xbar = 0.5*(xlimits[0] + xlimits[1])
    xstd = 0.25*(xlimits[1] - xlimits[0])
    if seed is not None:
        np.random.seed(seed=seed)
    x = xbar + xstd * np.random.randn(Ndata)
    meanerr = 0.025*(xlimits[1] - xlimits[0])
    sigmay = meanerr + 0.3 * meanerr * np.abs(np.random.randn(Ndata))
    # Look up values of y given input lookup grid
    y = np.zeros(Ndata)
    for k in range(Ndata):
        y[k] = np.interp(x[k], np.ravel(xgrid), ygrid)
    # Add noise:
    y += sigmay*np.random.randn(len(x))
    return x,y,sigmay

def discrepancy(y_model, y_obs, s_obs):
    """
    Compute discrepancy measure comparing model y and observed/replica y
    (with its uncertainty).

    T = -2 log L
    """
    T = REPLACE_WITH_YOUR_SOLUTION()
    return T

# Draw 1000 sample models:
Nsamples = 1000
draws = gp3.sample_y(xgrid, n_samples=Nsamples)

x_rep, y_rep, sigmay_rep = np.zeros([30,Nsamples]), np.zeros([30,Nsamples]), np.zeros([30,Nsamples])

# Difference in discrepancy measure, for plotting
dT = np.zeros(Nsamples)

# For each sample model, draw a replica dataset and accumulate test statistics:
y_model = np.zeros(30)
for k in range(Nsamples):
    draws[:,k], dummy = rescaled.invert(draws[:,k], np.zeros(len(xgrid)))
    ygrid = draws[:,k]
    x_rep[:,k], y_rep[:,k], sigmay_rep[:,k] = generate_replica_data(xgrid, ygrid, seed=None)
    dT[k] = REPLACE_WITH_YOUR_SOLUTION()

# Plot P(T[y_rep]-T[y_obs]|y_obs) as a histogram:
plt.hist(dT, density=True)
plt.xlabel("$T(y_{rep})-T(y_{obs})$")
plt.ylabel("Posterior predictive probability density");
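For Exercise 1, a hedged sketch of how an alternative kernel might be swapped into the gp3 cell (the Matern kernel and the nu value here are illustrative choices, not the notebook's own solution):
from sklearn.gaussian_process.kernels import Matern

# Illustrative alternative kernel for the gp3 cell; nu controls smoothness.
kernel = Matern(length_scale=10.0, length_scale_bounds=(0.01, 1000.0), nu=1.5)
# ...then re-run the gp3 fitting cell above with this kernel and compare the
# reported marginal log-likelihoods between the two kernel choices.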
tutorials/old/GPRegression.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Bonus 1: Make sure the function returns an empty list if the iterable is empty
multimax([])
Python/Python Morsels/multimax/my_try/multimax.ipynb
nitin-cherian/LifeLongLearning
mit