But this leads to incorrect results for computations involving several series expansions:
(cos(x)*sin(x)).series(x, 0, 6)
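One common workaround (a sketch, not part of the original notebook): strip the O() terms with removeO() before combining truncated series, then expand the product.

from sympy import symbols, cos, sin, expand

x = symbols("x")
# Drop the O() terms so the product is not truncated incorrectly
cs = cos(x).series(x, 0, 6).removeO()
ss = sin(x).series(x, 0, 6).removeO()
expand(cs * ss)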
[Source: Calcul symbolique.ipynb, repo regisDe/compagnons, license gpl-2.0]
More on series: https://fr.wikipedia.org/wiki/D%C3%A9veloppement_limit%C3%A9 (Wikipedia article). Linear algebra: matrices. Matrices are defined by the Matrix class:
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12], [m21, m22]])
A
b = Matrix([[b1], [b2]])
b
Instances of the Matrix class support the usual algebraic operations:
A**2
A * b
And we can compute determinants and inverses:
A.det()
A.inv()
Solving equations. To solve equations and systems of equations we use the solve function:
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
expand((x-1)*(x-2)*(x-3)*(x-4)*(x-5))
solve(x**5 - 15*x**4 + 85*x**3 - 225*x**2 + 274*x - 120, x)
A system of equations:
solve([x + y - 1, x - y - 1], [x,y])
In terms of other symbolic expressions:
solve([x + y - a, x - y - c], [x,y])
Solving differential equations. To solve differential equations and systems of differential equations we use the dsolve function:
from sympy import Function, dsolve, Eq, Derivative, sin, cos, symbols
from sympy.abc import x
Example of a second-order differential equation:
from sympy import Symbol, E, solve, diff  # also needed below

f = Function('f')
dsolve(Derivative(f(x), x, x) + 9*f(x), f(x))
dsolve(diff(f(x), x, 2) + 9*f(x), f(x), hint='default', ics={f(0): 0, f(1): 10})

# Attempt to recover the value of the constant C1 when an initial condition is given
g = Function('g')
eqg = dsolve(Derivative(g(x), x) + g(x), g(x), ics={g(2): 50})
eqg
print("g(x) has the form {}".format(eqg.rhs))

# Manual search for the value of c1 that satisfies the initial condition
c1 = Symbol("c1")
c1 = solve(Eq(c1*E**(-2), 50), c1)
print(c1)
SymPy cannot solve this nonlinear differential equation involving $h(x)^2$:
h = Function('h')
try:
    dsolve(Derivative(h(x), x) + 0.001*h(x)**2 - 10, h(x))
except Exception:
    print("an error occurred")
We can solve this differential equation with a numerical method provided by SciPy's odeint function. Numerical method for differential equations (not SymPy):
import numpy
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def dv_dt(vec, t, k, m, g):
    z, v = vec[0], vec[1]
    dz = -v
    dv = -k/m*v**2 + g
    return [dz, dv]

vec0 = [0, 0]                      # initial conditions [altitude, velocity]
t_si = numpy.linspace(0, 30, 150)  # from 0 to 30 s, 150 points
k = 0.1                            # aerodynamic coefficient
m = 80                             # mass (kg)
g = 9.81                           # gravitational acceleration (m/s/s)
v_si = odeint(dv_dt, vec0, t_si, args=(k, m, g))
print("final velocity: {0:.1f} m/s, i.e. {1:.0f} km/h".format(v_si[-1, 1], v_si[-1, 1] * 3.6))

fig_si, ax_si = plt.subplots()
ax_si.set_title("Free-fall velocity")
ax_si.set_xlabel("s")
ax_si.set_ylabel("m/s")
ax_si.plot(t_si, v_si[:, 1], 'b')
Character counting and entropy. Write a function char_probs that takes a string and computes the probabilities of each character in the string: first do a character count and store the result in a dictionary; then divide each character count by the total number of characters to compute the normalized probabilities; return the dictionary of characters (keys) and probabilities (values).
import numpy as np

def char_probs(s):
    """Find the probabilities of the unique characters in the string s.

    Parameters
    ----------
    s : str
        A string of characters.

    Returns
    -------
    probs : dict
        A dictionary whose keys are the unique characters in s and whose
        values are the probabilities of those characters.
    """
    # YOUR CODE HERE
    s = s.replace(' ', '')
    counts = {c: s.count(c) for c in s}
    return {c: counts[c] / len(s) for c in counts}

test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
[Source: assignments/midterm/AlgorithmsEx03.ipynb, repo jpilgram/phys202-2015-work, license mit]
The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as: $$H = - \sum_i P_i \log_2(P_i)$$ In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy. Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities. To compute the entropy, you should: first convert the values (probabilities) of the dict to a Numpy array of probabilities; then use other Numpy functions (np.log2, etc.) to compute the entropy. Don't use any for or while loops in your code.
def entropy(d):
    """Compute the entropy of a dict d whose values are probabilities."""
    # YOUR CODE HERE
    P = np.array(list(d.values()), dtype=float)
    return -np.sum(P * np.log2(P))

entropy(char_probs('haldjfhasdf'))
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
# YOUR CODE HERE
raise NotImplementedError()

assert True  # use this for grading the pi digits histogram
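One possible approach (a sketch, not the graded solution): it assumes the char_probs and entropy functions defined above and the 2015-era widgets import, which newer versions spell as from ipywidgets import interact.

from IPython.html.widgets import interact  # newer: from ipywidgets import interact

def show_entropy(s='type here'):
    # Display the entropy of the character distribution of the typed string
    print(entropy(char_probs(s)))

interact(show_entropy, s='the quick brown fox')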
Regular Expressions
Daniel Rice

Contents: Introduction (Definition, Examples, Exercise 1); Decomposing the syntax (Character classes, Metacharacters, Repetition, Capture groups); Regex's in Python (match, search).

Introduction

Definition

A regular expression (also known as a RE, regex, regex pattern, or regexp) is a sequence of symbols and characters expressing a text pattern. A regular expression allows us to specify a string pattern that we can then search for within a body of text. The idea is to make a pattern template (regex), and then query some text to see if the template is present or not.

Example 1

Let's say we want to determine if a string begins with the word PASS. Our regular expression will simply be:
pass_regex = 'PASS'
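The re_test helper used throughout this notebook is defined elsewhere in the repository; here is a hypothetical minimal sketch of what it presumably does (the exact output format is a guess):

import re

def re_test(pattern, text):
    # Report whether the regex pattern is found in the text
    if re.search(pattern, text):
        print('MATCH:    {!r} in {!r}'.format(pattern, text))
    else:
        print('NO MATCH: {!r} in {!r}'.format(pattern, text))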
[Source: python-club/notebooks/regular-expressions.ipynb, repo wtsi-medical-genomics/team-code, license gpl-2.0]
This pattern will match the occurrence of PASS in the query text. Now let's test it out:
re_test(pass_regex, 'PASS: Data good')
re_test(pass_regex, 'FAIL: Data bad')
Example 2. Let's say we have a text file that contains numerical readings that we need to perform some analysis on. Here are the first few lines of the file:
lines = \
"""
Device-initialized.
Version-19.23
12-12-2014
12
4353
3452
ERROR
498
34598734
345982398
23
ERROR
3434345798
"""
We don't want the header lines, and those ERROR lines are going to ruin our analysis! Let's filter these out with a regex. First we will create the pattern template (or regex) for what we want to find: ^\d+$ This regex can be split into four parts: ^ This indicates the start of the string. \d This specifies we want to match decimal digits (the numbers 0-9). + This symbol means we want to find one or more of the previous symbol (which in this case is a decimal digit). $ This indicates the end of the string. Putting it all together, we want to find patterns that are one or more (+) numbers (\d) from start (^) to finish ($). Let's load the regex into Python's re module:
integer_regex = re.compile(r'^\d+$')
Now let's get our string of lines into a list of strings:
lines = lines.split()
print(lines)
Now we need to run through each of these lines and determine if it matches our regex. Converting to integer would be nice as well.
clean_data = []  # We will accumulate our filtered integers here
for line in lines:
    if integer_regex.match(line):
        clean_data.append(int(line))
print(clean_data)

# If you're into one-liners you could also do one of these:
# clean_data = [int(line) for line in lines if integer_regex.match(line)]
# clean_data = map(int, filter(integer_regex.match, lines))
It worked like a dream. You may be arguing that there are other non-regex solutions to this problem, and indeed there are (for example, integer typecasting with a try/except clause), but this example was given to show you the process of:

1. Creating a regex pattern for what you want to find.
2. Applying it to some text.
3. Extracting the positive hits.

There will be situations where regexes will really be the only viable solution when you want to match some super-complex strings.

Exercise 1. You have a file consisting of DNA bases which you want to perform analysis on:
lines = \
"""
Acme-DNA-Reader
ACTG
AA
-1
CCTC
TTTCG
C
TGCTA
-1
TCCCCCC
"""
The -1 entries represent reading errors and we want these removed. Using the preceding example as a guide, filter out the header and the reading errors. Hint: the bases can be represented with the pattern [ACGT].
bases_regex = re.compile(r'[ACGT]+$')
lines = lines.split()
#print(lines)

clean_data = []  # We will accumulate our filtered reads here
for line in lines:
    print(line)
    if bases_regex.match(line):
        clean_data.append(line)
print(clean_data)
Decomposing the syntax

Regexps can appear cryptic, but they can be decomposed into character classes and metacharacters.

Character classes

These allow us to concisely specify the types or classes of characters to match. In the example above, \d is a character class that represents decimal digits. There are many such character classes and we will go through these below. The square brackets allow us to specify a set of characters to match. We have already seen this with [ACGT]. We can also use the hyphen - to specify ranges.

| Character Class | Description | Match Examples |
|:---------------:| ----------- | -------------- |
| \d | Matches any decimal digit; this is equivalent to the class [0-9]. | 0, 1, 2, ... |
| \D | Matches any non-digit character; this is equivalent to the class [^0-9]. | a, @, ; |
| \s | Matches any whitespace character; this is equivalent to the class [ \t\n\r\f\v]. | space, tab, newline |
| \S | Matches any non-whitespace character; this is equivalent to the class [^ \t\n\r\f\v]. | 1, A, & |
| \w | Matches any alphanumeric (word) character; this is equivalent to the class [a-zA-Z0-9_]. | x, Z, 2 |
| \W | Matches any non-alphanumeric character; this is equivalent to the class [^a-zA-Z0-9_]. | £, (, space |
| . | Matches anything (except newline). | 8, (, a, space |

This can look like a lot to remember, but there are some mnemonics here:

| Character Class | Mnemonic |
|:---------------:| -------- |
| \d | decimal digit |
| \D | uppercase, so not \d |
| \s | whitespace character |
| \S | uppercase, so not \s |
| \w | word character |
| \W | uppercase, so not \w |

Metacharacters

Repetition

A character class matches only a single character. How can we match, say, exactly 3 occurrences of Q? The metacharacters include different symbols to express repetition:

| Repetition Metacharacter | Description |
|:------------------------:| ----------- |
| * | Matches zero or more occurrences of the previous character (class). |
| + | Matches one or more occurrences of the previous character (class). |
| {m,n} | With integers m and n, specifies at least m and at most n occurrences of the previous character (class). Do not put any space after the comma as this prevents the metacharacter from being recognized. |

Examples
re_test('A*', ' ')
re_test('A*', 'A')
re_test('A*', 'AA')
re_test('A*', 'Z12345')
re_test('A+', ' ')
re_test('A+', 'A')
re_test('A+', 'ZZZZ')
re_test('BA{1,3}B', 'BB')
re_test('BA{1,3}B', 'BAB')
re_test('BA{1,3}B', 'BAAAB')
re_test('BA{1,3}B', 'BAAAAAB')
re_test('.*', 'AB12[]9023')
re_test('\d{1,3}B', '123B')
re_test('\w{1,3}\d+', 'aaa2')
re_test('\w{1,3}\d+', 'aaaa2')

# http://path/ssh://dr9@farm3-login:/path
p = re.compile(r'http://(\w+)/ssh://(\w+)@(\w+):/(\w+)')
m = p.match(r'http://path/ssh://dr9@farm3-login:/path')

RE_SSH = re.compile(r'/ssh://(\w+)@(.+):(.+)/(?:chr)?([mxy0-9]{1,2}):(\d+)-(\d+)$', re.IGNORECASE)
RE_SSH = re.compile(r'/ssh://(\w+)@(.+)$', re.IGNORECASE)
t = '/ssh://dr9@farm3-login'
m = RE_SSH.match(t)
# user, server, path, lchr, lmin, lmax = m.groups()
for el in m.groups():
    print(el)
Exercise 2 Determine if a string contains "wazup" or "wazzup" or "wazzzup" where the number of z's must be greater than zero. Use the following list of strings:
L = ['So I said wazzzzzzzup?',
     'And she said wazup back to me',
     "waup isn't a word",
     'what is up',
     'wazzzzzzzzzzzzzzzzzzzzzzzup']

wazup_regex = re.compile(r'.*waz+up.*')
matches = [el for el in L if wazup_regex.match(el)]
print(matches)
Example. We have a list of strings and some of these contain names that we want to extract. The names have the format 0123_FirstName_LastName, where the quantity of numbers at the beginning of the string is variable (e.g. 1_Bob_Smith, 12_Bob_Smith, and 123456_Bob_Smith are all valid).
L = ['123_George_Washington',
     'Blah blah',
     '894542342_Winston_Churchill',
     'More blah blah',
     'String_without_numbers']
Don't worry if the following regex looks cryptic, it will soon be broken down.
p = re.compile(r'\d+_([A-Za-z]+)_([A-Za-z]+)')
for el in L:
    m = p.match(el)
    if m:
        print(m.groups())
Exercise 3. Find all occurrences of AGT within a string of DNA, where contiguous repeated occurrences should be counted only once (e.g. AGTAGTAGT will be counted once and not three times).
dna = 'AGTAGTACTACAAGTAGTCCAGTCCTTGGGAGTAGTAGTAGTAAGGGCCT'
p = re.compile(r'(AGT)+')
m = p.finditer(dna)
for match in m:
    print('(start, stop): {}'.format(match.span()))
    print('matching string: {}'.format(match.group()))

p.finditer?
Exercise 4 A text file contains some important information about a test that has been run. The individual who wrote this file is inconsistent with date formats.
L = ['Test 1-2 commencing 2012-12-12 for multiple reads.',
     'Date of birth of individual 803232435345345 is 1983/06/27.',
     'Test 1-2 complete 20130420.']
Convert all dates to the format YYYYMMDD. Hints:

- Use groups ().
- Use {m,n} where m=n=2 or m=n=4.
- Use ? for the bits between date components.
- You can use either search or match, though with the latter you will need to specify what happens before and after the date (.* maybe)?
- The second element in the list will present you with issues as there is a number there that may accidentally be captured as a date. Use \D to make sure your date is not surrounded by decimal digits.
p = re.compile(r'\D+\d{4,4}[-/]?\d{2,2}[-/]?\d{2,2}\D')  # first attempt, without capture groups
date_regex = re.compile(r'\D(\d{4,4})[-/]?(\d{2,2})[-/]?(\d{2,2})\D')

standard_dates = []
for el in L:
    m = date_regex.search(el)
    if m:
        standard_dates.append(''.join(m.groups()))
print(standard_dates)
Resources

| Resource | Description |
| -------- | ----------- |
| https://docs.python.org/2/howto/regex.html | A great in-depth tutorial from the official Python documentation. |
| https://www.regex101.com/#python | A useful online tool to quickly test regular expressions. |
| http://regexcrossword.com/ | A nice way to practice your regular expression skills. |
text = 'abcd \e'
print(text)
re.compile(r'\\')
Go to the "Cluster" tab of the notebook and start a local cluster with 2 engines. Then come back here. We should now be able to use our cluster from our notebook session (or any other Python process running on localhost):
from IPython.parallel import Client

client = Client()
len(client)
[Source: unit_20/parallel_ml/rendered_notebooks/06 - Distributed Model Selection and Assessment.ipynb, repo janusnic/21v-python, license mit]
The %px and %%px magics All the engines of the client can be accessed imperatively using the %px and %%px IPython cell magics:
%%px
import os
import socket
print("This is running in process with pid {0} on host '{1}'.".format(
    os.getpid(), socket.gethostname()))
The content of the __main__ namespace can also be read and written via the %px magic:
%px a = 1
%px print(a)

%%px
a *= 2
print(a)
It is possible to restrict the %px and %%px magic instructions to specific engines:
%%px --targets=-1
a *= 2
print(a)

%px print(a)
The DirectView objects. Cell magics are very nice for working interactively from the notebook, but it's also possible to replicate their behavior programmatically, with more flexibility, using a DirectView instance. A DirectView can be created by slicing the client object:
all_engines = client[:]
all_engines
The namespace of the __main__ module of each running python engine can be accessed in read and write mode as a python dictionary:
all_engines['a'] = 1
all_engines['a']
Direct views can also execute the same code in parallel on each engine of the view:
def my_sum(a, b):
    return a + b

my_sum_apply_results = all_engines.apply(my_sum, 11, 31)
my_sum_apply_results
The output of the apply method is an asynchronous handle returned immediately without waiting for the end of the computation. To block until the results are ready use:
my_sum_apply_results.get()
Here is a more useful example to fetch the network hostname of each engine in the cluster. Let's study it in more detail:
def hostname():
    """Return the name of the host where the function is being called"""
    import socket
    return socket.gethostname()

hostname_apply_result = all_engines.apply(hostname)
When doing the above, the hostname function is first defined locally (in the client Python process). The DirectView.apply method introspects it, serializes its name and bytecode and ships it to each engine of the cluster, where it is reconstructed as a local function on each engine. This function is then called on each engine of the view with the optionally provided arguments. In return, the client gets a Python object that serves as a handle to asynchronously fetch the list of the results of the calls:
hostname_apply_result
hostname_apply_result.get()
It is also possible to key the results explicitly with the engine ids with the AsyncResult.get_dict method. This is a very simple idiom to fetch metadata on the runtime environment of each engine of the direct view:
hostnames = hostname_apply_result.get_dict()
hostnames
It can be handy to invert this mapping to find one engine id per host in the cluster, so as to execute host-specific operations:
one_engine_by_host = dict((hostname, engine_id)
                          for engine_id, hostname
                          in hostnames.items())
one_engine_by_host

one_engine_by_host_ids = list(one_engine_by_host.values())
one_engine_by_host_ids

one_engine_per_host_view = client[one_engine_by_host_ids]
one_engine_per_host_view
Trick: you can even use those engines ids to execute shell commands in parallel on each host of the cluster:
one_engine_by_host.values()

%%px --targets=[1]
!pip install flask
Note on Importing Modules on Remote Engines. In the previous example we put the import socket statement inside the body of the hostname function to make sure that it is available when the rest of the function is executed in the Python processes of the remote engines. Alternatively, it is possible to import the required modules ahead of time on all the engines of a DirectView using a context manager / with syntax:
with all_engines.sync_imports():
    import numpy
However this method does not support alternative import syntaxes:

>>> import numpy as np
>>> from numpy import linalg

Hence the method of importing in the body of the "applied" functions is more flexible. Additionally, this does not pollute the __main__ namespace of the engines, as it only impacts the local namespace of the function itself.

Exercise:

- Write a function that returns the memory usage of each engine process in the cluster.
- Allocate a largish numpy array of zeros of known size (e.g. 100MB) on each engine of the cluster.

Hints: Use the psutil module to collect the runtime info on a specific process or host. For instance, to fetch the memory usage of the currently running process in MB:

>>> import os
>>> import psutil
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6

To allocate a numpy array with 1000 zeros stored as 64-bit floats you can use:

>>> import numpy as np
>>> z = np.zeros(1000, dtype=np.float64)

The size in bytes of such a numpy array can then be fetched with z.nbytes:

>>> z.nbytes / 1e6
0.008
def get_engines_memory(client):
    def memory_mb():
        import os, psutil
        return psutil.Process(os.getpid()).get_memory_info().rss / 1e6
    return client[:].apply(memory_mb).get_dict()

get_engines_memory(client)
sum(get_engines_memory(client).values())

%%px
import numpy as np
z = np.zeros(int(1e7), dtype=np.float64)
print("Allocated {0}MB on engine.".format(z.nbytes / 1e6))

get_engines_memory(client)
Load Balanced View LoadBalancedView is an alternative to the DirectView to run one function call at a time on a free engine.
lv = client.load_balanced_view()

def slow_square(x):
    import time
    time.sleep(2)
    return x ** 2

result = lv.apply(slow_square, 4)
result
result.ready()
result.get()  # blocking call
It is possible to spread some tasks among the engines of the LB view by passing a callable and an iterable of task arguments to the LoadBalancedView.map method:
results = lv.map(slow_square, [0, 1, 2, 3])
results
results.ready()
results.progress
# results.abort()

# Iteration on AsyncMapResult is blocking
for r in results:
    print(r)
The load balanced view will be used in the following to schedule work on the cluster while being able to monitor progress, and occasionally add new computing nodes to the cluster while computing, to speed up the processing when using EC2 and StarCluster (see later). Sharing Read-only Data between Processes on the Same Host with Memmapping. Let's restart the cluster to kill the existing Python processes, and restart with a new client instance to be able to monitor the memory usage in detail:
!ipcluster stop
!ipcluster start -n=2 --daemon

from IPython.parallel import Client
client = Client()
len(client)
The numpy package makes it possible to memory map large contiguous chunks of binary files as shared memory for all the Python processes running on a given host:
%px import numpy as np
Creating a numpy.memmap instance with the w+ mode creates a file on the filesystem and zeros its content. Let's do it from the first engine process of our current IPython cluster:
%%px --targets=-1
# Cleanup any existing file from past session (necessary for windows)
import os
if os.path.exists('small.mmap'):
    os.unlink('small.mmap')

mm_w = np.memmap('small.mmap', shape=10, dtype=np.float32, mode='w+')
print(mm_w)
Assuming the notebook process was launched with:

cd notebooks
ipython notebook

and the cluster was launched from the IPython notebook UI, the engines will have the same current working directory as the notebook process, hence we can find the small.mmap file in the current folder:
ls -lh small.mmap
This binary file can then be mapped as a new numpy array by all the engines having access to the same filesystem. The mode='r+' opens this shared memory area in read-write mode:
%%px
mm_r = np.memmap('small.mmap', dtype=np.float32, mode='r+')
print(mm_r)

%%px --targets=-1
mm_w[0] = 42
print(mm_w)
print(mm_r)

%px print(mm_r)
Memory mapped arrays created with mode='r+' can be modified and the modifications are shared with all the engines:
%%px --targets=1
mm_r[1] = 43

%%px
print(mm_r)
Be careful though: there is no built-in read or write lock available on such data structures, so it's better to avoid concurrent read and write operations on the same array segments, unless the engine operations are made to cooperate with some synchronization or scheduling orchestrator. Memmap arrays generally behave very much like regular in-memory numpy arrays:
%%px print("sum={:.3}, mean={:.3}, std={:.3}".format( float(mm_r.sum()), np.mean(mm_r), np.std(mm_r)))
Before allocating more data in memory on the cluster let us define a couple of utility functions from the previous exercise (and more) to monitor what is used by which engine and what is still free on the cluster as a whole:
def get_engines_memory(client):
    """Gather the memory allocated by each engine in MB"""
    def memory_mb():
        import os
        import psutil
        return psutil.Process(os.getpid()).get_memory_info().rss / 1e6
    return client[:].apply(memory_mb).get_dict()

def get_host_free_memory(client):
    """Free memory on each host of the cluster in MB."""
    all_engines = client[:]
    def hostname():
        import socket
        return socket.gethostname()
    hostnames = all_engines.apply(hostname).get_dict()
    one_engine_per_host = dict((hostname, engine_id)
                               for engine_id, hostname
                               in hostnames.items())

    def host_free_memory():
        import psutil
        return psutil.virtual_memory().free / 1e6

    one_engine_per_host_ids = list(one_engine_per_host.values())
    host_mem = client[one_engine_per_host_ids].apply(
        host_free_memory).get_dict()
    return dict((hostnames[eid], m) for eid, m in host_mem.items())

get_engines_memory(client)
get_host_free_memory(client)
Let's allocate an 80MB memmap array on the first engine and load it in read-write mode in all the engines:
%%px --targets=-1
# Cleanup any existing file from past session (necessary for windows)
import os
if os.path.exists('big.mmap'):
    os.unlink('big.mmap')

np.memmap('big.mmap', shape=10 * int(1e6), dtype=np.float64, mode='w+')

ls -lh big.mmap
get_host_free_memory(client)
No significant memory was used in this operation, as we just asked the OS to allocate the buffer on the hard drive and maintain a virtual memory area as a cheap reference to this buffer. Let's open new references to the same buffer from all the engines at once:
%px %time big_mmap = np.memmap('big.mmap', dtype=np.float64, mode='r+')
%px big_mmap
get_host_free_memory(client)
No physical memory was allocated in the operation, as it took only a couple of ms. This is also confirmed by the engine process stats:
get_engines_memory(client)
Let's trigger an actual load of the data from the drive into the in-memory disk cache of the OS; this can take some time depending on the speed of the hard drive (on the order of 100MB/s to 300MB/s, hence 3s to 8s for this dataset):
%%px --targets=-1
%time np.sum(big_mmap)

get_engines_memory(client)
get_host_free_memory(client)
We can see that the first engine now has access to the data in memory, and the free memory on the host has decreased by the same amount. We can now access this data from all the engines at once much faster, as the disk will no longer be used: the shared memory buffer will instead be accessed directly by all the engines:
%px %time np.sum(big_mmap)
get_engines_memory(client)
get_host_free_memory(client)
So it seems that the engines have loaded a whole copy of the data, but this is actually not the case, as the total amount of free memory was not impacted by the parallel access to the shared buffer. Furthermore, once the data has been preloaded from the hard drive by one process, all of the other processes on the same host can access it almost instantly, saving a lot of IO wait. This strategy makes it very interesting to load the read-only datasets of machine learning problems, especially when the same data is reused over and over by concurrent processes, as can be the case when doing learning curve analysis or grid search. Memmaping Nested Numpy-based Data Structures with Joblib. joblib is a utility library included in the sklearn package. Among other things it provides tools to serialize objects that comprise large numpy arrays and reload them as memmap-backed data structures. To demonstrate it, let's create an arbitrary Python data structure involving numpy arrays:
import numpy as np

class MyDataStructure(object):

    def __init__(self, shape):
        self.float_zeros = np.zeros(shape, dtype=np.float32)
        self.integer_ones = np.ones(shape, dtype=np.int64)

data_structure = MyDataStructure((3, 4))
data_structure.float_zeros, data_structure.integer_ones
We can now persist this datastructure to disk:
from sklearn.externals import joblib

joblib.dump(data_structure, 'data_structure.pkl')

!ls -l data_structure*
A memmapped copy of this datastructure can then be loaded:
memmaped_data_structure = joblib.load('data_structure.pkl', mmap_mode='r+')
memmaped_data_structure.float_zeros, memmaped_data_structure.integer_ones
Memmaping CV Splits for Multiprocess Dataset Sharing We can leverage the previous tools to build a utility function that extracts Cross Validation splits ahead of time to persist them on the hard drive in a format suitable for memmaping by IPython engine processes.
from sklearn.externals import joblib
from sklearn.cross_validation import ShuffleSplit
import os

def persist_cv_splits(X, y, n_cv_iter=5, name='data',
                      suffix="_cv_%03d.pkl", test_size=0.25, random_state=None):
    """Materialize randomized train test splits of a dataset."""
    cv = ShuffleSplit(X.shape[0], n_iter=n_cv_iter,
                      test_size=test_size, random_state=random_state)
    cv_split_filenames = []
    for i, (train, test) in enumerate(cv):
        cv_fold = (X[train], y[train], X[test], y[test])
        cv_split_filename = name + suffix % i
        cv_split_filename = os.path.abspath(cv_split_filename)
        joblib.dump(cv_fold, cv_split_filename)
        cv_split_filenames.append(cv_split_filename)
    return cv_split_filenames
Let's try it on the digits dataset:
from sklearn.datasets import load_digits

digits = load_digits()
digits_split_filenames = persist_cv_splits(digits.data, digits.target,
                                           name='digits', random_state=42)
digits_split_filenames

ls -lh digits*
Each of the persisted CV splits can then be loaded back again using memmaping:
X_train, y_train, X_test, y_test = joblib.load(
    'digits_cv_002.pkl', mmap_mode='r+')
X_train
y_train
Parallel Model Selection and Grid Search. Let's leverage IPython.parallel and the memory mapping features of joblib to write a custom grid search utility that runs on the cluster in a memory-efficient manner. Assume that we want to reproduce the grid search from the previous session:
import numpy as np
from pprint import pprint

svc_params = {
    'C': np.logspace(-1, 2, 4),
    'gamma': np.logspace(-4, 0, 5),
}
pprint(svc_params)
GridSearchCV internally uses the following ParameterGrid utility iterator class to build the possible combinations of parameters:
from sklearn.grid_search import ParameterGrid

list(ParameterGrid(svc_params))
Let's write a function to load the data from a CV split file and compute the validation score for a given parameter set and model:
def compute_evaluation(cv_split_filename, model, params):
    """Function executed by a worker to evaluate a model on a CV split"""
    # All module imports should be executed in the worker namespace
    from sklearn.externals import joblib

    X_train, y_train, X_validation, y_validation = joblib.load(
        cv_split_filename, mmap_mode='c')

    model.set_params(**params)
    model.fit(X_train, y_train)
    validation_score = model.score(X_validation, y_validation)
    return validation_score

def grid_search(lb_view, model, cv_split_filenames, param_grid):
    """Launch all grid search evaluation tasks."""
    all_tasks = []
    all_parameters = list(ParameterGrid(param_grid))

    for i, params in enumerate(all_parameters):
        task_for_params = []
        for j, cv_split_filename in enumerate(cv_split_filenames):
            t = lb_view.apply(
                compute_evaluation, cv_split_filename, model, params)
            task_for_params.append(t)
        all_tasks.append(task_for_params)

    return all_parameters, all_tasks
Let's try it on the digits dataset that we split previously into memmapable files:
from sklearn.svm import SVC
from IPython.parallel import Client

client = Client()
lb_view = client.load_balanced_view()

model = SVC()
svc_params = {
    'C': np.logspace(-1, 2, 4),
    'gamma': np.logspace(-4, 0, 5),
}

all_parameters, all_tasks = grid_search(
    lb_view, model, digits_split_filenames, svc_params)
The grid_search function is using the asynchronous API of the LoadBalancedView, we can hence monitor the progress:
import time
time.sleep(5)

def progress(tasks):
    return np.mean([task.ready() for task_group in tasks
                    for task in task_group])

print("Tasks completed: {0}%".format(100 * progress(all_tasks)))
Even better, we can introspect the completed tasks to find the best parameter set so far:
def find_bests(all_parameters, all_tasks, n_top=5):
    """Compute the mean score of the completed tasks"""
    mean_scores = []
    for param, task_group in zip(all_parameters, all_tasks):
        scores = [t.get() for t in task_group if t.ready()]
        if len(scores) == 0:
            continue
        mean_scores.append((np.mean(scores), param))
    return sorted(mean_scores, reverse=True, key=lambda x: x[0])[:n_top]

from pprint import pprint

print("Tasks completed: {0}%".format(100 * progress(all_tasks)))
pprint(find_bests(all_parameters, all_tasks))

[t.wait() for tasks in all_tasks for t in tasks]
print("Tasks completed: {0}%".format(100 * progress(all_tasks)))
pprint(find_bests(all_parameters, all_tasks))
Optimization Trick: Truncated Randomized Search. It is often wasteful to search all the possible combinations of parameters as done previously, especially if the number of parameters is large (e.g. more than 3). To speed up the discovery of good parameter combinations, it is often faster to randomize the search order and allocate a budget of evaluations, e.g. 10 or 100 combinations, as sketched below. See this JMLR paper by James Bergstra for an empirical analysis of the problem. The interested reader should also have a look at hyperopt, which further refines this parameter search method using meta-optimizers. Randomized Parameter Search has just been implemented in the master branch of scikit-learn and will be part of the 0.14 release.
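A minimal sketch of the randomized-budget idea (not the scikit-learn implementation), reusing the ParameterGrid iterator and the svc_params grid from above:

import numpy as np
from sklearn.grid_search import ParameterGrid

def sample_parameters(param_grid, n_iter=10, seed=0):
    # Draw a random budget of n_iter parameter combinations from the full grid
    rng = np.random.RandomState(seed)
    all_parameters = list(ParameterGrid(param_grid))
    indices = rng.permutation(len(all_parameters))[:n_iter]
    return [all_parameters[i] for i in indices]

# Evaluate only the sampled combinations instead of the full grid
sample_parameters(svc_params, n_iter=5)

A More Complete Parallel Model Selection and Assessment Example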
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np

# Some nice default configuration for plots
plt.rcParams['figure.figsize'] = 10, 7.5
plt.rcParams['axes.grid'] = True
plt.gray();

lb_view = client.load_balanced_view()
model = SVC()

import sys, imp
from collections import OrderedDict
sys.path.append('..')
import model_selection, mmap_utils
imp.reload(model_selection), imp.reload(mmap_utils)

lb_view.abort()

svc_params = OrderedDict([
    ('gamma', np.logspace(-4, 0, 5)),
    ('C', np.logspace(-1, 2, 4)),
])

search = model_selection.RandomizedGridSeach(lb_view)
search.launch_for_splits(model, svc_params, digits_split_filenames)

time.sleep(5)
print(search.report())

time.sleep(5)
print(search.report())

search.boxplot_parameters(display_train=False)

#search.abort()
Pass !r to get the <strong>string representation</strong>:
print(f'His name is {name!r}')
[Source: nlp/UPDATED_NLP_COURSE/00-Python-Text-Basics/00-Working-with-Text-Files.ipynb, repo rishuatgithub/MLPy, license apache-2.0]
Be careful not to let quotation marks in the replacement fields conflict with the quoting used in the outer string:
d = {'a':123,'b':456}
print(f'Address: {d['a']} Main Street')
Instead, use different styles of quotation marks:
d = {'a':123,'b':456}
print(f"Address: {d['a']} Main Street")
Minimum Widths, Alignment and Padding You can pass arguments inside a nested set of curly braces to set a minimum width for the field, the alignment and even padding characters.
library = [('Author', 'Topic', 'Pages'),
           ('Twain', 'Rafting', 601),
           ('Feynman', 'Physics', 95),
           ('Hamilton', 'Mythology', 144)]

for book in library:
    print(f'{book[0]:{10}} {book[1]:{8}} {book[2]:{7}}')
Here the first three lines align, except Pages follows a default left-alignment while numbers are right-aligned. Also, the fourth line's page number is pushed to the right, as Mythology exceeds the minimum field width of 8. When setting minimum field widths, make sure to take the longest item into account. To set the alignment, use the character < for left-align, ^ for center, > for right. To set padding, precede the alignment character with the padding character (- and . are common choices). Let's make some adjustments:
for book in library:
    print(f'{book[0]:{10}} {book[1]:{10}} {book[2]:.>{7}}')  # here .> was added
Date Formatting
from datetime import datetime

today = datetime(year=2018, month=1, day=27)
print(f'{today:%B %d, %Y}')
For more info on formatted string literals visit https://docs.python.org/3/reference/lexical_analysis.html#f-strings Files Python uses file objects to interact with external files on your computer. These file objects can be any sort of file you have on your computer, whether it be an audio file, a text file, emails, Excel documents, etc. Note: You will probably need to install certain libraries or modules to interact with those various file types, but they are easily available. (We will cover downloading modules later on in the course). Python has a built-in open function that allows us to open and play with basic file types. First we will need a file though. We're going to use some IPython magic to create a text file! Creating a File with IPython This function is specific to jupyter notebooks! Alternatively, quickly create a simple .txt file with Sublime text editor.
%%writefile test.txt
Hello, this is a quick test file.
This is the second line of the file.
Python Opening a File Know Your File's Location It's easy to get an error on this step:
myfile = open('whoops.txt')
To avoid this error, make sure your .txt file is saved in the same location as your notebook. To check your notebook location, use pwd:
pwd
Alternatively, to grab files from any location on your computer, simply pass in the entire file path. For Windows you need to use double \ so python doesn't treat the second \ as an escape character, a file path is in the form: myfile = open("C:\\Users\\YourUserName\\Home\\Folder\\myfile.txt") For MacOS and Linux you use slashes in the opposite direction: myfile = open("/Users/YourUserName/Folder/myfile.txt")
# Open the test.txt file we created earlier
my_file = open('test.txt')
my_file
my_file is now an open file object held in memory. We'll perform some reading and writing exercises, and then we have to close the file to free up memory. .read() and .seek()
# We can now read the file
my_file.read()

# But what happens if we try to read it again?
my_file.read()
This happens because you can imagine the reading "cursor" is at the end of the file after having read it. So there is nothing left to read. We can reset the "cursor" like this:
# Seek to the start of file (index 0)
my_file.seek(0)

# Now read again
my_file.read()
.readlines() You can read a file line by line using the readlines method. Use caution with large files, since everything will be held in memory. We will learn how to iterate over large files later in the course.
# Readlines returns a list of the lines in the file
my_file.seek(0)
my_file.readlines()
When you have finished using a file, it is always good practice to close it.
my_file.close()
Writing to a File By default, the open() function will only allow us to read the file. We need to pass the argument 'w' to write over the file. For example:
# Add a second argument to the function, 'w' which stands for write.
# Passing 'w+' lets us read and write to the file
my_file = open('test.txt','w+')
<div class="alert alert-danger" style="margin: 20px">**Use caution!**<br> Opening a file with 'w' or 'w+' *truncates the original*, meaning that anything that was in the original file **is deleted**!</div>
# Write to the file
my_file.write('This is a new first line')

# Read the file
my_file.seek(0)
my_file.read()

my_file.close()  # always do this when you're done with a file
Appending to a File Passing the argument 'a' opens the file and puts the pointer at the end, so anything written is appended. Like 'w+', 'a+' lets us read and write to a file. If the file does not exist, one will be created.
my_file = open('test.txt','a+')
my_file.write('\nThis line is being appended to test.txt')
my_file.write('\nAnd another line here.')

my_file.seek(0)
print(my_file.read())

my_file.close()
Appending with %%writefile Jupyter notebook users can do the same thing using IPython cell magic:
%%writefile -a test.txt
This is more text being appended to test.txt
And another line here.
Add a blank space if you want the first line to begin on its own line, as Jupyter won't recognize escape sequences like \n Aliases and Context Managers You can assign temporary variable names as aliases, and manage the opening and closing of files automatically using a context manager:
with open('test.txt','r') as txt:
    first_line = txt.readlines()[0]

print(first_line)
Note that the with ... as ...: context manager automatically closed test.txt after assigning the first line of text to first_line:
txt.read()
Iterating through a File
with open('test.txt','r') as txt:
    for line in txt:
        print(line, end='')  # the end='' argument removes extra linebreaks
The command %matplotlib inline is not a Python command, but an IPython command. When using the console or the notebook, it makes the plots appear inline. You do not want to use this in plain Python code.
from math import sin, pi
from matplotlib import pyplot

x = []
y = []
for i in range(201):
    x_point = 0.01*i
    x.append(x_point)
    y.append(sin(pi*x_point)**2)

pyplot.plot(x, y)
pyplot.show()
[Source: 04-basic-plotting.ipynb, repo linglaiyao1314/maths-with-python, license mit]
We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot. This plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:
from math import sin, pi

x = []
y = []
for i in range(201):
    x_point = 0.01*i
    x.append(x_point)
    y.append(sin(pi*x_point)**2)

pyplot.plot(x, y, marker='+', markersize=8, linestyle=':',
            linewidth=3, color='b', label=r'$\sin^2(\pi x)$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic plot')
pyplot.show()
Whilst most of the commands are self-explanatory, a note should be made of the strings like r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be "raw": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \pi and \sin. Most basic symbols can be easily guessed (e.g. \theta or \int), but there are useful lists of symbols, and a reverse search site available. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms. By combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly. Here are some more examples:
from math import sin, pi, exp, log

x = []
y1 = []
y2 = []
for i in range(201):
    x_point = 1.0 + 0.01*i
    x.append(x_point)
    y1.append(exp(sin(pi*x_point)))
    y2.append(log(pi+x_point*sin(x_point)))

pyplot.loglog(x, y1, linestyle='--', linewidth=4, color='k',
              label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.loglog(x, y2, linestyle='-.', linewidth=4, color='r',
              label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic logarithmic plot')
pyplot.show()

from math import sin, pi, exp, log

x = []
y1 = []
y2 = []
for i in range(201):
    x_point = 1.0 + 0.01*i
    x.append(x_point)
    y1.append(exp(sin(pi*x_point)))
    y2.append(log(pi+x_point*sin(x_point)))

pyplot.semilogy(x, y1, linestyle='None', marker='o', color='g',
                label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.semilogy(x, y2, linestyle='None', marker='^', color='r',
                label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A different logarithmic plot')
pyplot.show()
Original image and its histogram
# Original image
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, 'Original image')

h = ia.histogram(f)
plt.plot(h), plt.title('Histogram of the original image')
[Source: master/tutorial_contraste_iterativo_2.ipynb, repo robertoalotufo/ia898, license mit]
Computing and visualizing the Window & Level contrast transform
W = 30
L = 15
Tw = TWL(L, W)

plt.plot(Tw)
#plt.ylabel('Output intensity')
#plt.xlabel('Input intensity')
plt.title('Intensity transform W=%d L=%d' % (W, L))
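The TWL and WL helpers used here are defined earlier in the notebook and are not shown in this excerpt; below is a hypothetical minimal sketch of a window & level intensity transform, assuming 8-bit images (the actual ia898 implementations may differ):

import numpy as np

def TWL(L, W):
    # 256-entry lookup table: linear ramp of width W centered at level L,
    # clipped to the [0, 255] output range
    lut = 255.0 * (np.arange(256) - (L - W/2.0)) / W
    return np.clip(lut, 0, 255).astype(np.uint8)

def WL(f, L, W):
    # Apply the window & level transform to image f through the lookup table
    return TWL(L, W)[f]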
Applying the contrast transform. Note that this transform stretches the contrast around gray level 15, making the details of the cameraman's coat clearly visible:
g = WL(f, L, W)
ia.adshow(g, 'Contrast-adjusted image, L = %d, W = %d' % (L, W))