What issues did you have? The first issue I had was that I was trying to output a single scalar whose value could be thresholded to determine whether the network should return TRUE or FALSE. It turns out loss functions for this are much more complicated than if I had instead treated the XOR problem as a classification task with one output per possible label ('TRUE', 'FALSE'). This is the approach I have implemented here. Another issue I encountered at first was that I was using too few hidden nodes. I originally thought that such a simple problem would need only a couple of nodes in a single hidden layer. However, such small networks were extremely slow to converge, as shown in the Architectures section. Lastly, when I was using small batch sizes (<= 5 examples) and randomly populating the batches, the network would sometimes fail to converge, probably because the batches did not contain all of the possible examples.
Which activation functions did you try? Which loss functions? I tried ReLU, sigmoid, and tanh activation functions. I only successfully used a softmax cross-entropy loss function. The results for the different activation functions can be seen by running the block below. The sigmoid function consistently takes the longest to converge. I am unsure why tanh does significantly better than sigmoid.
batch_size = 100
num_steps = 10000
num_hidden = 7
num_hidden_layers = 2
learning_rate = 0.2

xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'sigmoid')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'tanh')
xor_network.run_network(batch_size, num_steps, num_hidden, num_hidden_layers, learning_rate, False, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
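To make the classification framing described above concrete, here is a minimal sketch (my own illustration, not the homework's xor_network code) of the XOR data with one output per label, which is the encoding a softmax cross-entropy loss expects; the random batch sampling also shows why very small batches can miss some of the four cases.
import numpy as np

# The four XOR patterns with one-hot labels: column 0 = FALSE, column 1 = TRUE.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
labels = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], dtype=np.float32)

def sample_batch(batch_size, rng=np.random):
    """Randomly sample a training batch; with batch_size <= 5 some of the four cases may be absent."""
    idx = rng.randint(0, 4, size=batch_size)
    return inputs[idx], labels[idx]

batch_x, batch_y = sample_batch(100)
print(batch_x.shape, batch_y.shape)  # (100, 2) (100, 2)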
What architectures did you try? What were the different results? How long did it take? The results for several different architectures can be seen by running the code below. Since there is no reading from disk, each iteration takes almost exactly the same amount of time. Therefore, I will report "how long it takes" in number of iterations rather than in time.
# Network with 2 hidden layers of 5 nodes each
xor_network.run_network(batch_size, num_steps, 5, 2, learning_rate, False, 'relu')

# Network with 5 hidden layers of 2 nodes each
num_steps = 3000  # (so it doesn't go on forever)
xor_network.run_network(batch_size, num_steps, 2, 5, learning_rate, False, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
Conclusion from the above: With the number of parameters held constant, a deeper network does not necessarily perform better than a shallower one. I am guessing this is because fewer nodes per layer means the network can carry less information from one layer to the next.
xor_network.run_network(batch_size, num_steps, 3, 5, learning_rate, False, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
Conclusion from the above: Indeed, the problem is not the number of layers, but the number of nodes in each layer.
# This is the minimum number of hidden nodes I can use to consistently get convergence with gradient descent.
xor_network.run_network(batch_size, num_steps, 5, 1, learning_rate, False, 'relu')

# If I switch to using the Adam optimizer, I can get down to 2 hidden nodes and still consistently get convergence.
xor_network.run_network(batch_size, num_steps, 2, 1, learning_rate, True, 'relu')
homeworks/XOR/HW1_report.ipynb
daphnei/nn_chatbot
mit
What is the result of each of the following operations? 18/4 18//4 18%4
18%4
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Order of operations: Parentheses; Exponentiation; Multiplication and Division; Addition and Subtraction (left to right)
2 * (3-1)
(1+1)**(5-2)
2**1+1
3*1**3
2*3-1
5-2*2
6-3+2
6-(3+2)
100/100/2
100/100*2
100/(100*2)
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
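For reference (my own annotation, not part of the original notebook), these are the values the expressions above evaluate to under the precedence rules just listed (in Python 3, / is true division):
2 * (3 - 1)         # 4  (parentheses first)
(1 + 1) ** (5 - 2)  # 8
2 ** 1 + 1          # 3  (exponentiation before addition)
3 * 1 ** 3          # 3  (exponentiation before multiplication)
2 * 3 - 1           # 5
5 - 2 * 2           # 1
6 - 3 + 2           # 5  (left to right)
6 - (3 + 2)         # 1
100 / 100 / 2       # 0.5 (left to right)
100 / 100 * 2       # 2.0
100 / (100 * 2)     # 0.5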
What is the value of the following expression? 16 - 2 * 5 // 3 + 1 (a) 14 (b) 24 (c) 3 (d) 13.667 Variable assignment
x = 15
y = x
x == y
x = 22
x == y
x = x + 1
x
x += 1
x
x -= 20
x
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/03. Numeros y jerarquía de operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Constant Constant simply returns the same, constant value every time.
g = Constant('quux')
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Boolean Boolean returns either True or False, optionally with different probabilities.
g1 = Boolean()
g2 = Boolean(p=0.8)
print_generated_sequence(g1, num=20, seed=12345)
print_generated_sequence(g2, num=20, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Integer Integer returns a random integer between low and high (both inclusive).
g = Integer(low=100, high=200)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Float Float returns a random float between low and high (both inclusive).
g = Float(low=2.3, high=4.2)
print_generated_sequence(g, num=10, sep='\n', fmt='.12f', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest HashDigest returns hex strings representing hash digest values (or alternatively raw bytes). HashDigest hex strings (uppercase)
g = HashDigest(length=6)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest hex strings (lowercase)
g = HashDigest(length=6, uppercase=False)
print_generated_sequence(g, num=10, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
HashDigest byte strings
g = HashDigest(length=10, as_bytes=True)
print_generated_sequence(g, num=5, seed=12345, sep='\n')
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
NumpyRandomGenerator This generator can produce random numbers using any of the random number generators supported by numpy.
g1 = NumpyRandomGenerator(method="normal", loc=3.0, scale=5.0)
g2 = NumpyRandomGenerator(method="poisson", lam=30)
g3 = NumpyRandomGenerator(method="exponential", scale=0.3)

g1.reset(seed=12345); print_generated_sequence(g1, num=4)
g2.reset(seed=12345); print_generated_sequence(g2, num=15)
g3.reset(seed=12345); print_generated_sequence(g3, num=4)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
FakerGenerator FakerGenerator gives access to any of the methods supported by the faker module. Here are a couple of examples. Example: random names
g = FakerGenerator(method='name')
print_generated_sequence(g, num=8, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Example: random addresses
g = FakerGenerator(method='address')
print_generated_sequence(g, num=8, seed=12345, sep='\n---\n')
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
IterateOver IterateOver is a generator which simply iterates over a given sequence. Note that once the generator has been exhausted (by iterating over all its elements), it needs to be reset before it can produce elements again.
seq = ['a', 'b', 'c', 'd', 'e']
g = IterateOver(seq)
g.reset()
print([x for x in g])
print([x for x in g])
g.reset()
print([x for x in g])
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
SelectOne
some_items = ['aa', 'bb', 'cc', 'dd', 'ee']
g = SelectOne(some_items)
print_generated_sequence(g, num=30, seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
By default, all possible values are chosen with equal probability, but this can be changed by passing a distribution as the parameter p.
g = SelectOne(some_items, p=[0.1, 0.05, 0.7, 0.03, 0.12])
print_generated_sequence(g, num=30, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
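As a quick empirical check of these weights (my own addition; it assumes tohu generators can be consumed with next(), as they are later in this notebook):
from collections import Counter

g.reset(seed=99999)
counts = Counter(next(g) for _ in range(10000))
print({item: round(n / 10000, 3) for item, n in sorted(counts.items())})
# Expected to be roughly {'aa': 0.1, 'bb': 0.05, 'cc': 0.7, 'dd': 0.03, 'ee': 0.12}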
We can see that the item 'cc' has the highest chance of being selected (70%), followed by 'ee' and 'aa' (12% and 10%, respectively). Timestamp Timestamp produces random timestamps between a start and end time (both inclusive).
g = Timestamp(start='1998-03-01 00:02:00', end='1998-03-01 00:02:15')
print_generated_sequence(g, num=10, sep='\n', seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
If start or end are dates of the form YYYY-MM-DD (without the exact HH:MM:SS timestamp), they are interpreted as start='YYYY-MM-DD 00:00:00' and end='YYYY-MM-DD 23:59:59', respectively - i.e., as the beginning and the end of the day.
g = Timestamp(start='2018-02-14', end='2018-02-18')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
For convenience, one can also pass a single date, which will produce timestamps during this particular date.
g = Timestamp(date='2018-01-01')
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Note that the generated items are datetime objects (even though they appear as strings when printed above).
g.reset(seed=12345)
[next(g), next(g), next(g)]
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
We can use the .strftime() method to create another generator which returns timestamps as strings instead of datetime objects.
h = Timestamp(date='2018-01-01').strftime('%-d %b %Y, %H:%M (%a)')
h.reset(seed=12345)
[next(h), next(h), next(h)]
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
CharString
g = CharString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
It is possible to explicitly specify the character set.
g = CharString(length=12, charset="ABCDEFG")
print_generated_sequence(g, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
There are also a few pre-defined character sets.
g1 = CharString(length=12, charset="<lowercase>")
g2 = CharString(length=12, charset="<alphanumeric_uppercase>")
print_generated_sequence(g1, num=5, sep='\n', seed=12345); print()
print_generated_sequence(g2, num=5, sep='\n', seed=12345)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
DigitString DigitString is the same as CharString with charset='0123456789'.
g = DigitString(length=15)
print_generated_sequence(g, num=5, seed=12345)
print_generated_sequence(g, num=5, seed=99999)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Sequential Generates a sequence of sequentially numbered strings with a given prefix.
g = Sequential(prefix='Foo_', digits=3)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Calling reset() on the generator makes the numbering start from 1 again.
g.reset()
print_generated_sequence(g, num=5)
print_generated_sequence(g, num=5)
print()
g.reset()
print_generated_sequence(g, num=5)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
Note that the method Sequential.reset() supports the seed argument for consistency with other generators, but its value is ignored - the generator is simply reset to its initial value. This is illustrated here:
g.reset(seed=12345); print_generated_sequence(g, num=5)
g.reset(seed=99999); print_generated_sequence(g, num=5)
notebooks/v4/Primitive_generators.ipynb
maxalbert/tohu
mit
In part 2 we already learned that 'Orçamento de/do Estado' (the state budget) was not used before 1984, and that decree-laws were talked about more before 1983. But honestly we did not find anything interesting. Let's speed up the process and look at more words:
# returns the number of occurrences of palavra (word) in texto (text)
def conta_palavra(texto, palavra):
    return texto.count(palavra)

# returns a vector with one item per session: True if the year equals i, False otherwise
def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# plots the histogram of the number of occurrences of 'palavra' per year
def histograma_palavra(palavra):
    # create a table column containing the counts of palavra for each session
    dados = sessoes['sessao'].map(lambda texto: conta_palavra(texto, palavra.lower()))
    ocorrencias_por_ano = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        # group counts by year
        ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    f = pylab.figure(figsize=(10,6))
    ax = pylab.bar(range(1976,2016), ocorrencias_por_ano)
    pylab.xlabel('Ano')
    pylab.ylabel('Ocorrencias de '+str(palavra))

import time
start = time.time()
histograma_palavra('Paulo Portas')  # we already saw that Paulo and Portas were unusually frequent in 2000; let's see if there are more events like this
print(str(time.time()-start)+' s')  # measure how long histograma_palavra('Paulo Portas') takes to run, for our reference
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
As we had seen before, the year 2000 was a very prominent year for Paulo Portas. It seems that his contributions come in waves.
histograma_palavra('Crise')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
There was always a crisis, but in 2010 it was a super-crisis.
histograma_palavra('aborto')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
The debates about abortion seem to be well localized, in 1982, 1984, 1997/8 and 2005.
histograma_palavra('Euro')
histograma_palavra('Europa')
histograma_palavra('geringonça')
histograma_palavra('corrupção')
histograma_palavra('calúnia')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
It went out of fashion.
histograma_palavra('iraque')
histograma_palavra('china')
histograma_palavra('alemanha')
histograma_palavra('brasil')
histograma_palavra('internet')
histograma_palavra('telemóvel')
histograma_palavra('redes sociais')
histograma_palavra('sócrates')
histograma_palavra('droga')
histograma_palavra('aeroporto')
histograma_palavra('hospital')
histograma_palavra('médicos')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
And what if we want to accumulate several words in the same histogram?
def conta_palavras(texto, palavras):
    l = [texto.count(palavra.lower()) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

def histograma_palavras(palavras):
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras))
    ocorrencias_por_ano = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    f = pylab.figure(figsize=(10,6))
    ax = pylab.bar(range(1976,2016), ocorrencias_por_ano)
    pylab.xlabel('Ano')
    pylab.ylabel('Ocorrencias de '+str(palavras))

histograma_palavras(['escudos','contos','escudo'])
histograma_palavras(['muito bem','aplausos','fantastico','excelente','grandioso'])
histograma_palavras([' ecu ',' ecu.'])
histograma_palavra('União Europeia')
histograma_palavras(['CEE','Comunidade Económica Europeia'])
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
The European Union was founded around 1993 and the EEC was integrated into it (according to Wikipedia), so the chart makes sense. Let's create a function to combine the 2 charts, to let us compare the evolution:
def conta_palavras(texto, palavras):
    l = [texto.count(palavra) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# computes the data for the 2 histograms and plots them in the same chart
def grafico_palavras_vs_palavras(palavras1, palavras2):
    palavras1 = [p.lower() for p in palavras1]
    palavras2 = [p.lower() for p in palavras2]
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras1))
    ocorrencias_por_ano1 = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras2))
    ocorrencias_por_ano2 = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    anos = range(1976,2016)
    f = pylab.figure(figsize=(10,6))
    p1 = pylab.bar(anos, ocorrencias_por_ano1)
    p2 = pylab.bar(anos, ocorrencias_por_ano2, bottom=ocorrencias_por_ano1)
    pylab.legend([palavras1[0], palavras2[0]])
    pylab.xlabel('Ano')
    pylab.ylabel('Ocorrencias totais')

grafico_palavras_vs_palavras(['CEE','Comunidade Económica Europeia'],['União Europeia'])
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Nice, one basically replaces the other.
grafico_palavras_vs_palavras(['contos','escudo'],['euro.','euro ','euros'])
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Again, one replaces the other.
histograma_palavra('Troika')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
OK, this looks like a mystery. The troika was talked about quite a bit more in 1989 than in 2011. Let's investigate this by finding and showing the sentences where the words appear. We want to know what was said when 'Troika' was mentioned in parliament. Let's try to find and print the sentences that account for the >70 occurrences of troika in 1989 and the 25 in 2011.
sessoes_1989 = sessoes[selecciona_ano(sessoes['data'], 1989)]
sessoes_2011 = sessoes[selecciona_ano(sessoes['data'], 2011)]

def divide_em_frases(texto):
    return texto.replace('!','.').replace('?','.').split('.')

def acumula_lista_de_lista(l):
    return [j for x in l for j in x]

def selecciona_frases_com_palavra(sessoes, palavra):
    frases_ = sessoes['sessao'].map(divide_em_frases)
    frases = acumula_lista_de_lista(frases_)
    return list(filter(lambda frase: frase.find(palavra) != -1, frases))

frases_com_troika1989 = selecciona_frases_com_palavra(sessoes_1989, 'troika')
print('Frases com troika em 1989: ' + str(len(frases_com_troika1989)))
frases_com_troika2011 = selecciona_frases_com_palavra(sessoes_2011, 'troika')
print('Frases com troika em 2011: ' + str(len(frases_com_troika2011)))

from IPython.display import Markdown, display

# print_markdown lets us write in bold or as a title
def print_markdown(string):
    display(Markdown(string))

def imprime_frases(lista_de_frases, palavra_negrito):
    for i in range(len(lista_de_frases)):
        string = lista_de_frases[i].replace(palavra_negrito, '**' + palavra_negrito + '**')
        #print_markdown(str(i+1) + ':' + string)
        print(str(i+1) + ':' + string)
        # in Jupyter notebook 4.3.1 markdown output cannot be saved, so it has to be plain text
        # if you are running the notebook (rather than reading it on GitHub), you can uncomment the line above to see the formatted text

#print_markdown('1989:\n====')
print('1989:\n====')
imprime_frases(frases_com_troika1989[1:73:5], 'troika')
#print_markdown('2011:\n====')
print('2011:\n====')
imprime_frases(frases_com_troika2011[1:20:2], 'troika')
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
As we see in the last sentence, the truth is that in parliament the term 'Troica' is used more than 'Troika'! In the media, 'Troika' is used a lot. And for anyone who does not know what the perestroika was: https://pt.wikipedia.org/wiki/Perestroika Ok, now it makes sense:
def conta_palavras(texto, palavras):
    l = [texto.count(palavra) for palavra in palavras]
    return sum(l)

def selecciona_ano(data, i):
    return data.map(lambda d: d.year == i)

# computes the data for the 2 histograms and plots them in the same chart
def grafico_palavras_vs_palavras(palavras1, palavras2):
    palavras1 = [p.lower() for p in palavras1]
    palavras2 = [p.lower() for p in palavras2]
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras1))
    ocorrencias_por_ano1 = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto, palavras2))
    ocorrencias_por_ano2 = numpy.zeros(2016-1976)
    for i in range(0, 2016-1976):
        ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'], i+1976)])
    anos = range(1976,2016)
    f = pylab.figure(figsize=(10,6))
    p1 = pylab.bar(anos, ocorrencias_por_ano1)
    p2 = pylab.bar(anos, ocorrencias_por_ano2, bottom=ocorrencias_por_ano1)
    pylab.legend([palavras1[0], palavras2[0]])
    pylab.xlabel('Ano')
    pylab.ylabel('Ocorrencias totais')

grafico_palavras_vs_palavras(['troica'],['troika'])
notebooks/Deputado-Histogramado-3.ipynb
fsilva/deputado-histogramado
gpl-3.0
Set the operating parameters to the default values:
def set_fpe_defaults(fpe):
    "Set the FPE to the default operating parameters, and outputs a table of the default values"
    defaults = {}
    for k in range(len(fpe.ops.address)):
        if fpe.ops.address[k] is None:
            continue
        fpe.ops.address[k].value = fpe.ops.address[k].default
        defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default
    return defaults
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Get, sort, and print the default operating parameters:
from tessfpe.data.operating_parameters import operating_parameters

for k in sorted(operating_parameters.keys()):
    v = operating_parameters[k]
    print k, ":", v["default"], v["unit"]
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Take a number of sets of housekeeping data, with one operating parameter varying across its control range, then repeat for every operating parameter:
def get_base_name(name):
    import re
    if '_offset' not in name:
        return None
    offset_name = name
    derived_parameter_name = name.replace('_offset', '')
    base_name = None
    if 'low' in derived_parameter_name:
        base_name = derived_parameter_name.replace('low', 'high')
    if 'high' in derived_parameter_name:
        base_name = derived_parameter_name.replace('high', 'low')
    if 'output_drain' in derived_parameter_name:
        base_name = re.sub(r'output_drain_._offset$', 'reset_drain', offset_name)
    return base_name

def get_derived_parameter_name(name):
    if '_offset' not in name:
        return None
    offset_name = name
    return name.replace('_offset', '')

data = {}
base_steps = 15
offset_steps = 5
set_fpe_defaults(fpe1)
for i in range(base_steps, 0, -1):
    for j in range(offset_steps, 0, -1):
        for k in range(len(fpe1.ops.address)):
            # If there's no operating parameter to set, go on to the next one
            if fpe1.ops.address[k] is None:
                continue
            name = fpe1.ops.address[k].name
            base_name = get_base_name(name)
            derived_parameter_name = get_derived_parameter_name(name)
            # If there's no derived parameter reflecting this parameter, go on to the next one
            if derived_parameter_name is None:
                continue
            offset_name = name
            base_low = fpe1.ops[base_name].low
            base_high = fpe1.ops[base_name].high
            offset_low = fpe1.ops[offset_name].low
            offset_high = fpe1.ops[offset_name].high
            base_value = base_low + i / float(base_steps) * (base_high - base_low)
            fpe1.ops[base_name].value = base_value
            fpe1.ops[offset_name].value = offset_low + j / float(offset_steps) * (offset_high - offset_low)
        fpe1.ops.send()
        analogue_house_keeping = fpe1.house_keeping["analogue"]
        for k in range(len(fpe1.ops.address)):
            # If there's no operating parameter to set, go on to the next one
            if fpe1.ops.address[k] is None:
                continue
            name = fpe1.ops.address[k].name
            base_name = get_base_name(name)
            derived_parameter_name = get_derived_parameter_name(name)
            if derived_parameter_name is None:
                continue
            if derived_parameter_name not in data:
                data[derived_parameter_name] = {}
            offset_name = name
            base_low = fpe1.ops[base_name].low
            base_high = fpe1.ops[base_name].high
            offset_low = fpe1.ops[offset_name].low
            offset_high = fpe1.ops[offset_name].high
            base_value = base_low + i / float(base_steps) * (base_high - base_low)
            if base_value not in data[derived_parameter_name]:
                data[derived_parameter_name][base_value] = {"X": [], "Y": []}
            data[derived_parameter_name][base_value]["X"].append(fpe1.ops[base_name].value + fpe1.ops[offset_name].value)
            data[derived_parameter_name][base_value]["Y"].append(analogue_house_keeping[derived_parameter_name])
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Set up to plot:
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import numpy as np
import matplotlib.pyplot as plt
import pylab
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Plot selected data:
def get_range_square(X, Y):
    return [min(X + Y)-1, max(X + Y)+1]

# Plot the set vs. measured values of selected channels:
for nom in sorted(data.keys()):
    print nom
    for base_value in sorted(data[nom].keys()):
        print base_value
        X = data[nom][base_value]["X"]
        Y = data[nom][base_value]["Y"]
        ran = get_range_square(X, Y)
        pylab.ylim(ran)
        pylab.xlim(ran)
        pylab.grid(True)
        plt.axes().set_aspect(1)
        plt.title("{derived_param} with base {base}".format(
            derived_param=nom,
            base=base_value
        ))
        plt.scatter(X, Y, color='red')
        plt.plot(X, Y, color='blue')
        plt.show()
Evaluating Parameter Interdependence.ipynb
TESScience/FPE_Test_Procedures
mit
Sensitivity map of SSP projections This example shows the sources that have a forward field similar to the first SSP vector correcting for ECG.
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause

import matplotlib.pyplot as plt

from mne import read_forward_solution, read_proj, sensitivity_map
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fname = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
ecg_fname = meg_path / 'sample_audvis_ecg-proj.fif'

fwd = read_forward_solution(fname)
projs = read_proj(ecg_fname)
# take only one projection per channel type
projs = projs[::2]

# Compute sensitivity map
ssp_ecg_map = sensitivity_map(fwd, ch_type='grad', projs=projs, mode='angle')
stable/_downloads/82d9c13e00105df6fd0ebed67b862464/ssp_projs_sensitivity_map.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show sensitivity map
plt.hist(ssp_ecg_map.data.ravel())
plt.show()

args = dict(clim=dict(kind='value', lims=(0.2, 0.6, 1.)),
            smoothing_steps=7, hemi='rh', subjects_dir=subjects_dir)
ssp_ecg_map.plot(subject='sample', time_label='ECG SSP sensitivity', **args)
stable/_downloads/82d9c13e00105df6fd0ebed67b862464/ssp_projs_sensitivity_map.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
1. Represent Read Article in terms of Topic Vector
article_topic_distribution = pd.read_csv(PATH_ARTICLE_TOPIC_DISTRIBUTION)
article_topic_distribution.shape
article_topic_distribution.head()
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Generate Article-Topic Distribution matrix
#Pivot the dataframe
article_topic_pivot = article_topic_distribution.pivot(index='Article_Id', columns='Topic_Id', values='Topic_Weight')
#Fill NaN with 0
article_topic_pivot.fillna(value=0, inplace=True)
#Get the values in dataframe as matrix
articles_topic_matrix = article_topic_pivot.values
articles_topic_matrix.shape
article_topic_pivot.head()
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
2. Represent user in terms of Topic Vector of read articles. The user vector is represented as the average of the topic vectors of the read articles.
#Select user in terms of read article topic distribution
row_idx = np.array(ARTICLES_READ)
read_articles_topic_matrix = articles_topic_matrix[row_idx[:, None]]
#Calculate the average of read articles topic vector
user_vector = np.mean(read_articles_topic_matrix, axis=0)
user_vector.shape
user_vector
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
3. Calculate cosine similarity between read and unread articles
def calculate_cosine_similarity(articles_topic_matrix, user_vector):
    articles_similarity_score = cosine_similarity(articles_topic_matrix, user_vector)
    recommended_articles_id = articles_similarity_score.flatten().argsort()[::-1]
    #Remove read articles from recommendations
    final_recommended_articles_id = [article_id for article_id in recommended_articles_id
                                     if article_id not in ARTICLES_READ][:NUM_RECOMMENDED_ARTICLES]
    return final_recommended_articles_id

recommended_articles_id = calculate_cosine_similarity(articles_topic_matrix, user_vector)
recommended_articles_id
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
4. Recommendation Using Topic Model:-
#Recommended Articles and their title
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
print 'Articles Read'
print news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']
print '\n'
print 'Recommender '
print news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Topics + NER Recommender (Topic + NER Based Recommender). Represent the user as (Alpha) * [Topic Vector] + (1 - Alpha) * [NER Vector], where Alpha is in [0, 1], [Topic Vector] is the topic-vector representation of the concatenated read articles, and [NER Vector] is the topic-vector representation of the NERs associated with the concatenated read articles. Then calculate the cosine similarity between the user vector and the articles' topic matrix, and get the recommended articles.
ALPHA = 0.5
DICTIONARY_PATH = "/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/dictionary_of_words.p"
LDA_MODEL_PATH = "/home/phoenix/Documents/HandsOn/Final/python/Topic Model/model/lda.model"

from nltk import word_tokenize, pos_tag, ne_chunk
from nltk.chunk import tree2conlltags
import re
from nltk.corpus import stopwords
from nltk.tokenize import TweetTokenizer
from nltk.stem.snowball import SnowballStemmer
import pickle
import gensim
from gensim import corpora, models
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
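A minimal sketch of the weighted combination defined above (my own illustration; the notebook computes the same blend further down once user_topic_vector and user_ner_vector exist):
import numpy as np

def combine_user_vectors(topic_vec, ner_vec, alpha=0.5):
    """Blend the topic-based and NER-based user representations with weight alpha."""
    return alpha * np.asarray(topic_vec) + (1 - alpha) * np.asarray(ner_vec)

# Toy 3-dimensional example with alpha = 0.5:
print(combine_user_vectors([0.2, 0.5, 0.3], [0.6, 0.1, 0.3]))  # [0.4 0.3 0.3]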
1. Represent User in terms of Topic Distribution and NER: represent the user in terms of the read articles' topic distribution; represent the user in terms of the NERs associated with the read articles (2.1 get the NERs of the read articles, 2.2 load the LDA model, 2.3 get the topic distribution for the concatenated NERs); then generate the user vector. 1.1. Represent user in terms of read article topic distribution
row_idx = np.array(ARTICLES_READ)
read_articles_topic_matrix = articles_topic_matrix[row_idx[:, None]]
#Calculate the average of read articles topic vector
user_topic_vector = np.mean(read_articles_topic_matrix, axis=0)
user_topic_vector.shape
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
1.2. Represent user in terms of NERs associated with read articles
# Get NERs of read articles
def get_ner(article):
    ne_tree = ne_chunk(pos_tag(word_tokenize(article)))
    iob_tagged = tree2conlltags(ne_tree)
    ner_token = ' '.join([token for token, pos, ner_tag in iob_tagged if not ner_tag == u'O'])  #Discarding tokens with 'Other' tag
    return ner_token

articles = news_articles['Content'].tolist()
user_articles_ner = ' '.join([get_ner(articles[i]) for i in ARTICLES_READ])
print "NERs of Read Article =>", user_articles_ner

stop_words = set(stopwords.words('english'))
tknzr = TweetTokenizer()
stemmer = SnowballStemmer("english")

def clean_text(text):
    cleaned_text = re.sub('[^\w_\s-]', ' ', text)  #remove punctuation marks and other symbols
    return cleaned_text

def tokenize(text):
    word = tknzr.tokenize(text)  #tokenization
    filtered_sentence = [w for w in word if not w.lower() in stop_words]  #removing stop words
    stemmed_filtered_tokens = [stemmer.stem(plural) for plural in filtered_sentence]  #stemming
    tokens = [i for i in stemmed_filtered_tokens if i.isalpha() and len(i) not in [0, 1]]
    return tokens

#Cleaning the article
cleaned_text = clean_text(user_articles_ner)
article_vocabulary = tokenize(cleaned_text)

#Load model dictionary
model_dictionary = pickle.load(open(DICTIONARY_PATH, "rb"))
#Generate article mapping using IDs associated with vocab
corpus = [model_dictionary.doc2bow(text) for text in [article_vocabulary]]

#Load LDA Model
lda = models.LdaModel.load(LDA_MODEL_PATH)

# Get topic distribution for the concatenated NERs
article_topic_distribution = lda.get_document_topics(corpus[0])
article_topic_distribution

ner_vector = [0]*NO_OF_TOPICS
for topic_id, topic_weight in article_topic_distribution:
    ner_vector[topic_id] = topic_weight
user_ner_vector = np.asarray(ner_vector).reshape(1, 150)
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
1.3. Generate user vector
alpha_topic_vector = ALPHA*user_topic_vector
alpha_ner_vector = (1-ALPHA) * user_ner_vector
user_vector = np.add(alpha_topic_vector, alpha_ner_vector)
user_vector
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
2. Calculate cosine similarity between user vector and articles Topic matrix
recommended_articles_id = calculate_cosine_similarity(articles_topic_matrix, user_vector)
recommended_articles_id
# [array([ 0.75807146]), array([ 0.74644157]), array([ 0.74440326]), array([ 0.7420562]), array([ 0.73966259])]
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
3. Get recommended articles
#Recommended Articles and their title
news_articles = pd.read_csv(PATH_NEWS_ARTICLES)
print 'Articles Read'
print news_articles.loc[news_articles['Article_Id'].isin(ARTICLES_READ)]['Title']
print '\n'
print 'Recommender '
print news_articles.loc[news_articles['Article_Id'].isin(recommended_articles_id)]['Title']
session-2/python/Topic_Model_Recommender.ipynb
sourabhrohilla/ds-masterclass-hands-on
mit
Then, load the data (takes a few moments):
# Load data
uda = pd.read_csv("./aws-data/user_dist.txt", sep="\t")        # User distribution, all
udf = pd.read_csv("./aws-data/user_dist_fl.txt", sep="\t")     # User distribution, Florence
dra = pd.read_csv("./aws-data/user_duration.txt", sep="\t")    # Duration, all
drf = pd.read_csv("./aws-data/user_duration_fl.txt", sep="\t") # Duration, Florence

dra['min'] = pd.to_datetime(dra['min'], format='%Y-%m-%d%H:%M:%S')
dra['max'] = pd.to_datetime(dra['max'], format='%Y-%m-%d%H:%M:%S')
drf['min'] = pd.to_datetime(drf['min'], format='%Y-%m-%d%H:%M:%S')
drf['max'] = pd.to_datetime(drf['max'], format='%Y-%m-%d%H:%M:%S')
dra['duration'] = dra['max'] - dra['min']
drf['duration'] = drf['max'] - drf['min']
dra['days'] = dra['duration'].dt.days
drf['days'] = drf['duration'].dt.days

cda = pd.read_csv("./aws-data/calls_per_day.txt", sep="\t")    # Calls per day, all
cdf = pd.read_csv("./aws-data/calls_per_day_fl.txt", sep="\t") # Calls per day, Florence
cda['day_'] = pd.to_datetime(cda['day_'], format='%Y-%m-%d%H:%M:%S').dt.date
cdf['day_'] = pd.to_datetime(cdf['day_'], format='%Y-%m-%d%H:%M:%S').dt.date
cda.head()

mcpdf = cdf.groupby('cust_id')['count'].mean().to_frame()      # Mean calls per day, Florence
mcpdf.columns = ['mean_calls_per_day']
mcpdf = mcpdf.sort_values('mean_calls_per_day', ascending=False)
mcpdf.index.name = 'cust_id'
mcpdf.reset_index(inplace=True)
mcpdf.head()

# mcpdf.plot(y='mean_calls_per_day', style='.', logy=True, figsize=(10,10))
mcpdf.plot.hist(y='mean_calls_per_day', logy=True, figsize=(10,10), bins=100)
plt.ylabel('Number of customers with x average calls per day')
# plt.xlabel('Customer rank')
plt.title('Mean number of calls per day during days in Florence by foreign SIM cards')

cvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer')  # Count versus days
cvd.plot.scatter(x='days', y='count', s=.1, figsize=(10, 10))
plt.ylabel('Number of calls')
plt.xlabel('Duration between first and last days active')
plt.title('Calls versus duration of records of foreign SIMs in Florence')

fr = drf['days'].value_counts().to_frame()  # NOTE: FIGURE OUT HOW TO ROUND, NOT TRUNCATE
fr.columns = ['frequency']
fr.index.name = 'days'
fr.reset_index(inplace=True)
fr = fr.sort_values('days')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
fr.plot(x='days', y='frequency', style='o-', logy=True, figsize=(10, 10))
plt.ylabel('Number of people')
plt.axvline(14, ls='dotted')
plt.title('Foreign SIM days between first and last instances in Florence')

cvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer')  # Count versus days
cvd.plot.scatter(x='days', y='count', s=.1, figsize=(10, 10))
plt.ylabel('Number of calls')
plt.xlabel('Duration between first and last days active')
plt.title('Calls versus duration of records of foreign SIMs in Florence')
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
fr = udf['count'].value_counts().to_frame()
fr.columns = ['frequency']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
fr = fr.sort_values('calls')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
fr.head()

fr.plot(x='calls', y='frequency', style='o-', logx=True, figsize=(10, 10))
# plt.axvline(5,ls='dotted')
plt.ylabel('Number of people')
plt.title('Number of people placing or receiving x number of calls over 4 months')
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, or x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 or fewer, 33% have 10 or fewer, 50% have 17 or fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize=(10, 10))
plt.axhline(1.0, ls='dotted', lw=.5)
plt.axhline(.90, ls='dotted', lw=.5)
plt.axhline(.75, ls='dotted', lw=.5)
plt.axhline(.67, ls='dotted', lw=.5)
plt.axhline(.50, ls='dotted', lw=.5)
plt.axhline(.33, ls='dotted', lw=.5)
plt.axhline(.25, ls='dotted', lw=.5)
plt.axhline(.10, ls='dotted', lw=.5)
plt.axhline(0.0, ls='dotted', lw=.5)
plt.axvline(max(fr['calls'][fr['cumulative']<.90]), ls='dotted', lw=.5)
plt.ylabel('Cumulative fraction of people')
plt.title('Cumulative fraction of people placing or receiving x number of calls over 4 months')
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
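The quantiles quoted above can also be read off directly from the cumulative column instead of the plot; a small sketch (my own addition, reusing the fr frame built earlier and the same thresholding the axvline call uses), keeping in mind that the values land on the bin where the cumulative fraction crosses each threshold:
# Largest call count whose cumulative fraction of people is still below each quantile.
for q in [0.10, 0.25, 0.33, 0.50, 0.90]:
    print(q, max(fr['calls'][fr['cumulative'] < q]))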
We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to where the cell phone towers are and/or to the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query \o towers_with_counts.txt select lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m)) as days from optourism.cdr_foreigners group by lat, lon order by calls desc; \q into the file towers_with_counts.txt. This is followed by the bash command cat towers_with_counts.txt | sed s/\ \|\ /'\t'/g | sed s/\ //g | sed 2d > towers_with_counts2.txt to clean up the postgres output format.
df2 = pd.read_table("./aws-data/towers_with_counts2.txt")
df2.head()
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Do the same thing as above.
fr2 = df2['count'].value_counts().to_frame()
fr2.columns = ['frequency']
fr2.index.name = 'count'
fr2.reset_index(inplace=True)
fr2 = fr2.sort_values('count')
fr2['cumulative'] = fr2['frequency'].cumsum()/fr2['frequency'].sum()
fr2.head()

fr2.plot(x='count', y='frequency', style='o-', logx=True, figsize=(10, 10))
# plt.axvline(5,ls='dotted')
plt.ylabel('Number of cell towers')
plt.title('Number of towers with x number of calls placed or received over 4 months')
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
fr2.plot(x='count', y='cumulative', style='o-', logx=True, figsize=(10, 10))
plt.axhline(0.1, ls='dotted', lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.10]), ls='dotted', lw=.5)
plt.axhline(0.5, ls='dotted', lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.50]), ls='dotted', lw=.5)
plt.axhline(0.9, ls='dotted', lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.90]), ls='dotted', lw=.5)
plt.ylabel('Cumulative fraction of cell towers')
plt.title('Cumulative fraction of towers with x number of calls placed or received over 4 months')
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d')  # Faster than df['datetime'].dt.date

df2 = df.groupby(['cust_id','date']).size().to_frame()
df2.columns = ['count']
df2.index.name = 'date'
df2.reset_index(inplace=True)
df2.head(20)

df3 = (df2.groupby('cust_id')['date'].max() - df2.groupby('cust_id')['date'].min()).to_frame()
df3['calls'] = df2.groupby('cust_id')['count'].sum()
df3.columns = ['days','calls']
df3['days'] = df3['days'].dt.days
df3.head()

fr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()

# plt.scatter(np.log(df3['days']), np.log(df3['calls']))
# plt.show()

fr.plot(x='calls', y='freq', style='o', logx=True, logy=True)
x = np.log(fr['calls'])
y = np.log(1-fr['freq'].cumsum()/fr['freq'].sum())
plt.plot(x, y, 'r-')

# How many home_regions
np.count_nonzero(data['home_region'].unique())
# How many customers
np.count_nonzero(data['cust_id'].unique())
# How many Nulls are there in the customer ID column?
df['cust_id'].isnull().sum()
# How many missing data are there in the customer ID?
len(df['cust_id']) - df['cust_id'].count()
df['cust_id'].unique()

data_italians = pd.read_csv("./aws-data/firence_italians_3days_past_future_sample_1K_custs.csv", header=None)
data_italians.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
regions = np.array(data_italians['home_region'].unique())
regions
'Sardegna' in data['home_region']
dev/notebooks/Distributions_MM.ipynb
DSSG2017/florence
mit
3. Read tables from websites pandas is cool - Use pd.read_html(url) - It returns a list of all the tables on the website - It tries to guess the encoding of the website, but without much success.
df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006", encoding="UTF-8")
print(type(df), len(df))
df
df[0].head(10)
df[0].columns

df = pd.read_html("https://piie.com/summary-economic-sanctions-episodes-1914-2006", encoding="UTF-8")
df = df[0]
print(df.columns)
df.columns = ['Year imposed', 'Year ended', 'Principal sender', 'Target country', 'Policy goal',
              'Success score (scale 1 to 16)', 'Cost to target (percent of GNP)']
df = df.replace('negligible', 0)
df = df.replace("–", "-", regex=True)  # the file uses long dashes
df.to_csv("data/economic_sanctions.csv", index=None, sep="\t")

df = pd.read_csv("data/economic_sanctions.csv", sep="\t", na_values=["-", "Ongoing"])
df["Duration"] = df["Year ended"] - df["Year imposed"]
df.head()

sns.lmplot(x="Duration", y="Cost to target (percent of GNP)", data=df, fit_reg=False, hue="Year imposed", legend=False, palette="YlOrBr")
plt.ylim((-2, 10))
plt.legend(loc="center left", bbox_to_anchor=(1, 0.5), ncol=4)
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4. Parse dates pandas is cool - Use parse_dates=[columns] when reading the file - It parses the date 4.1. Use parse_dates when reading the file
df = pd.read_csv("data/exchange-rate-twi-may-1970-aug-1.tsv",sep="\t",parse_dates=["Month"],skipfooter=2) df.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.2. You can now filter by date
#filter by time
df_after1980 = df.loc[df["Month"] > "1980-05-02"]  #year-month-date
df_after1980.columns = ["Date", "Rate"]
df_after1980.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.3. And still extract columns of year and month
#make columns with year and month (useful for models)
df_after1980["Year"] = df_after1980["Date"].apply(lambda x: x.year)
df_after1980["Month"] = df_after1980["Date"].apply(lambda x: x.month)
df_after1980.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.4. You can resample the data with a specific frequency Very similar to groupby. Groups the data with a specific frequency "A" = End of year "B" = Business day others: http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases Then you tell pandas to apply a function to the group (mean/max/median...)
#resample
df_after1980_resampled = df_after1980.resample("A", on="Date").mean()
display(df_after1980_resampled.head())
df_after1980_resampled = df_after1980_resampled.reset_index()
df_after1980_resampled.head()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
4.5 And of course plot it with a line plot
#Let's visualize it
plt.figure(figsize=(6,4))
plt.plot(df_after1980["Date"], df_after1980["Rate"], label="Before resampling")
plt.plot(df_after1980_resampled["Date"], df_after1980_resampled["Rate"], label="After resampling")
plt.xlabel("Time")
plt.ylabel("Rate")
plt.legend()
plt.show()
class4/class4_timeseries.ipynb
jgarciab/wwd2017
gpl-3.0
We can also implement this test with a while loop instead of a for loop. This doesn't make much of a difference in Python 3.x. (In Python 2.x, this would save memory.)
def is_prime(n):
    '''
    Checks whether the argument n is a prime number.
    Uses a brute force search for factors between 1 and n.
    '''
    j = 2
    while j < n:  # j will proceed through the list of numbers 2,3,...,n-1.
        if n%j == 0:  # is n divisible by j?
            print("{} is a factor of {}.".format(j,n))
            return False
        j = j + 1  # There's a Python abbreviation for this: j += 1.
    return True

is_prime(10001)
is_prime(101)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
If $n$ is a prime number, then the is_prime(n) function will iterate through all the numbers between $2$ and $n-1$. But this is overkill! Indeed, if $n$ is not prime, it will have a factor between $2$ and the square root of $n$. This is because factors come in pairs: if $ab = n$, then one of the factors, $a$ or $b$, must be less than or equal to the square root of $n$. So it suffices to search for factors up to (and including) the square root of $n$. We haven't worked with square roots in Python yet. But Python comes with a standard math package which enables square roots, trig functions, logs, and more. Click the previous link for documentation. This package doesn't load automatically when you start Python, so you have to load it with a little Python code.
from math import sqrt
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
This command imports the square root function (sqrt) from the package called math. Now you can find square roots.
sqrt(1000)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
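As a quick sanity check of the factor-pair argument above (my own illustration), listing the factor pairs of 100 shows that the smaller factor in each pair never exceeds sqrt(100) = 10:
n = 100
pairs = [(a, n // a) for a in range(1, n + 1) if n % a == 0 and a <= n // a]
print(pairs)  # [(1, 100), (2, 50), (4, 25), (5, 20), (10, 10)]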
There are a few different ways to import functions from packages. The above syntax is a good starting point, but sometimes problems can arise if different packages have functions with the same name. Here are a few methods of importing the sqrt function and how they differ. from math import sqrt: After this command, sqrt will refer to the function from the math package (overriding any previous definition). import math: After this command, all the functions from the math package will be imported. But to call sqrt, you would type a command like math.sqrt(1000). This is convenient if there are potential conflicts with other packages. from math import *: After this command, all the functions from the math package will be imported. To call them, you can access them directly with a command like sqrt(1000). This can easily cause conflicts with other packages, since packages can have hundreds of functions in them! import math as mth: Some people like abbreviations. This imports all the functions from the math package. To call one, you type a command like mth.sqrt(1000).
import math
math.sqrt(1000)
factorial(10)  # This will cause an error!
math.factorial(10)  # This is ok, since the math package comes with a function called factorial.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now let's improve our is_prime(n) function by searching for factors only up to the square root of the number n. We consider two options.
def is_prime_slow(n):
    '''
    Checks whether the argument n is a prime number.
    Uses a brute force search for factors between 1 and n.
    '''
    j = 2
    while j <= sqrt(n):  # j will proceed through the list of numbers 2,3,... up to sqrt(n).
        if n%j == 0:  # is n divisible by j?
            print("{} is a factor of {}.".format(j,n))
            return False
        j = j + 1  # There's a Python abbreviation for this: j += 1.
    return True

def is_prime_fast(n):
    '''
    Checks whether the argument n is a prime number.
    Uses a brute force search for factors between 1 and n.
    '''
    j = 2
    root_n = sqrt(n)
    while j <= root_n:  # j will proceed through the list of numbers 2,3,... up to sqrt(n).
        if n%j == 0:  # is n divisible by j?
            print("{} is a factor of {}.".format(j,n))
            return False
        j = j + 1  # There's a Python abbreviation for this: j += 1.
    return True

is_prime_fast(1000003)
is_prime_slow(1000003)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
I've chosen function names with "fast" and "slow" in them. But what makes them faster or slower? Are they faster than the original? And how can we tell? Python comes with a great set of tools for these questions. The simplest (for the user) are the time utilities. By placing the magic %timeit before a command, Python does something like the following: Python makes a little container in your computer devoted to the computations, to avoid interference from other running programs if possible. Python executes the command lots and lots of times. Python averages the amount of time taken for each execution. Give it a try below, to compare the speed of the functions is_prime (the original) with the new is_prime_fast and is_prime_slow. Note that the %timeit commands might take a little while.
%timeit is_prime_fast(1000003)
%timeit is_prime_slow(1000003)
%timeit is_prime(1000003)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Time is measured in seconds, milliseconds (1 ms = 1/1000 second), microseconds (1 µs = 1/1,000,000 second), and nanoseconds (1 ns = 1/1,000,000,000 second). So it might appear at first that is_prime is the fastest, or about the same speed. But check the units! The other two approaches are about a thousand times faster! How much faster were they on your computer?
is_prime_fast(10000000000037) # Don't try this with `is_prime` unless you want to wait for a long time!
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Indeed, the is_prime_fast(n) function will go through a loop of length about sqrt(n) when n is prime. But is_prime(n) will go through a loop of length about n. Since sqrt(n) is much less than n, especially when n is large, the is_prime_fast(n) function is much faster. Between is_prime_fast and is_prime_slow, the difference is that the fast version precomputes the square root sqrt(n) before going through the loop, where the slow version repeats the sqrt(n) every time the loop is repeated. Indeed, writing while j <= sqrt(n): suggests that Python might execute sqrt(n) every time to check. This might lead to Python computing the same square root a million times... unnecessarily! A basic principle of programming is to avoid repetition. If you have the memory space, just compute once and store the result. It will probably be faster to pull the result out of memory than to compute it again. Python does tend to be pretty smart, however. It's possible that Python is precomputing sqrt(n) even in the slow loop, just because it's clever enough to tell in advance that the same thing is being computed over and over again. This depends on your Python version and takes place behind the scenes. If you want to figure it out, there's a whole set of tools (for advanced programmers) like the disassembler to figure out what Python is doing.
is_prime_fast(10**14 + 37) # This might get a bit of delay.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
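If you are curious, the disassembler mentioned above can be used to peek at the bytecode of the two loops; a minimal sketch (my own addition, using the standard-library dis module):
import dis

# In is_prime_slow, look for a call to sqrt inside the loop body;
# in is_prime_fast, root_n is computed once before the loop begins.
dis.dis(is_prime_slow)
dis.dis(is_prime_fast)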
Now we have a function is_prime_fast(n) that is speedy for numbers n in the trillions! You'll probably start to hit a delay around $10^{15}$ or so, and the delays will become intolerable if you add too many more digits. In a future lesson, we will see a different primality test that will be essentially instant even for numbers around $10^{1000}$! Exercises To check whether a number n is prime, you can first check whether n is even, and then check whether n has any odd factors. Change the is_prime_fast function by implementing this improvement. How much of a speedup did you get? Use the %timeit tool to study the speed of is_prime_fast for various sizes of n. Using 10-20 data points, make a graph relating the size of n to the time taken by the is_prime_fast function. Write a function is_square(n) to test whether a given integer n is a perfect square (like 0, 1, 4, 9, 16, etc.). How fast can you make it run? Describe the different approaches you try and which are fastest. <a id='lists'></a> List manipulation We have already (briefly) encountered the list type in Python. Recall that the range command produces a range, which can be used to produce a list. For example, list(range(10)) produces the list [0,1,2,3,4,5,6,7,8,9]. You can also create your own list by a writing out its terms, e.g. L = [4,7,10]. Here we work with lists, and a very Pythonic approach to list manipulation. With practice, this can be a powerful tool to write fast algorithms, exploiting the hard-wired capability of your computer to shift and slice large chunks of data. Our application will be to implement the Sieve of Eratosthenes, producing a long list of prime numbers (without using any is_prime test along the way). We begin by creating two lists to play with.
L = [0,'one',2,'three',4,'five',6,'seven',8,'nine',10]
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
List terms and indices Notice that the entries in a list can be of any type. The above list L has some integer entries and some string entries. Lists are ordered in Python, starting at zero. One can access the $n^{th}$ entry in a list with a command like L[n].
L[3]
print(L[3])  # Note that Python has slightly different approaches to the print-function, and the output above.
print(L[4])  # We will use the print function, because it makes our printing intentions clear.
print(L[0])
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The location of an entry is called its index. So at the index 3, the list L stores the entry three. Note that the same entry can occur in many places in a list. E.g. [7,7,7] is a list with 7 at the zeroth, first, and second index.
print(L[-1])
print(L[-2])
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The last bit of code demonstrates a cool Python trick. The "-1st" entry in a list refers to the last entry. The "-2nd entry" refers to the second-to-last entry, and so on. It gives a convenient way to access both sides of the list, even if you don't know how long it is. Of course, you can use Python to find out how long a list is.
len(L)
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
You can also use Python to find the sum of a list of numbers.
sum([1,2,3,4,5])
sum(range(100))  # Be careful. This is the sum of which numbers?
# The sum function can take lists or ranges.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
List slicing Slicing lists allows us to create new lists (or ranges) from old lists (or ranges), by chopping off one end or the other, or even slicing out entries at a fixed interval. The simplest syntax has the form L[a:b] where a denotes the index of the starting entry and index of the final entry is one less than b. It is best to try a few examples to get a feel for it. Slicing a list with a command like L[a:b] doesn't actually change the original list L. It just extracts some terms from the list and outputs those terms. Soon enough, we will change the list L using a list assignment.
L[0:5]
L[5:11]  # Notice that L[0:5] and L[5:11] together recover the whole list.
L[3:7]
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
This continues the strange (for beginners) Python convention of starting at the first number and ending just before the last number. Compare to range(3,7), for example. The command L[0:5] can be replaced by L[:5] to abbreviate. The empty opening index tells Python to start at the beginning. Similarly, the command L[5:11] can be replaced by L[5:]. The empty closing index tells Python to end the slice at the end of the list. This is helpful if one doesn't know where the list ends.
L[:5]
L[3:]
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Just like the range command, list slicing can take an optional third argument to give a step size. To understand this, try the command below.
L[2:10]
L[2:10:3]
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
If, in this three-argument syntax, the first or second argument is absent, then the slice starts at the beginning of the list or ends at the end of the list accordingly.
L  # Just a reminder. We haven't modified the original list!
L[:9:3]  # Start at zero, go up to (but not including) 9, by steps of 3.
L[2: :3]  # Start at two, go up through the end of the list, by steps of 3.
L[::3]  # Start at zero, go up through the end of the list, by steps of 3.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Changing list slices Not only can we extract and study terms or slices of a list, we can change them by assignment. The simplest case would be changing a single term of a list.
print(L)  # Start with the list L.
L[5] = 'Bacon!'
print(L)  # What do you think L is now?
print(L[2::3])  # What do you think this will do?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
We can change an entire slice of a list with a single assignment. Let's change the first two terms of L in one line.
L[:2] = ['Pancakes', 'Ham']  # What was L[:2] before?
print(L)  # Oh... what have we done!
L[0]
L[1]
L[2]
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
We can change a slice of a list with a single assignment, even when that slice does not consist of consecutive terms. Try to predict what the following commands will do.
print(L)  # Let's see what the list looks like before.
L[::2] = ['A','B','C','D','E','F']  # What was L[::2] before this assignment?
print(L)  # What do you predict?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Exercises Create a list L with L = [1,2,3,...,100] (all the numbers from 1 to 100). What is L[50]? Take the same list L, and extract a slice of the form [5,10,15,...,95] with a command of the form L[a:b:c]. Take the same list L, and change all the even numbers to zeros, so that L looks like [1,0,3,0,5,0,...,99,0]. Hint: You might wish to use the list [0]*50. Try the command L[-1::-1] on a list. What does it do? Can you guess before executing it? Can you understand why? In fact, strings are lists too. Try setting L = 'Hello' and the previous command. <a id='sieve'></a> Sieve of Eratosthenes The Sieve of Eratosthenes (hereafter called "the sieve") is a very fast way of producing long lists of primes, without doing repeated primality checking. It is described in more detail in Chapter 2 of An Illustrated Theory of Numbers. The basic idea is to start with all of the natural numbers, and successively filter out, or sieve, the multiples of 2, then the multiples of 3, then the multiples of 5, etc., until only primes are left. Using list slicing, we can carry out this sieving process efficiently. And with a few more tricks we encounter here, we can carry out the Sieve very efficiently. The basic sieve The first approach we introduce is a bit naive, but is a good starting place. We will begin with a list of numbers up to 100, and sieve out the appropriate multiples of 2,3,5,7.
primes = list(range(100)) # Let's start with the numbers 0...99.
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now, to "filter", i.e., to say that a number is not prime, let's just change the number to the value None.
primes[0] = None  # Zero is not prime.
primes[1] = None  # One is not prime.
print(primes)  # What have we done?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now let's filter out the multiples of 2, starting at 4. This is the slice primes[4::2]
primes[4::2] = [None] * len(primes[4::2])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now we filter out the multiples of 3, starting at 9.
primes[9::3] = [None] * len(primes[9::3])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Next the multiples of 5, starting at 25 (the first multiple of 5 greater than 5 that's left!)
primes[25::5] = [None] * len(primes[25::5])  # The right side is a list of Nones, of the necessary length.
print(primes)  # What have we done?
P3wNT Notebook 3.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
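To finish sieving up to 100 as described above, one more pass is needed for the multiples of 7, starting at 49 (the first multiple of 7 greater than 7 that is still left); a sketch following the same pattern, after which every remaining number in the list is prime:
primes[49::7] = [None] * len(primes[49::7])  # The right side is a list of Nones, of the necessary length.
print(primes)  # Only None placeholders and the primes below 100 remain.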