markdown | code | path | repo_name | license
---|---|---|---|---
Multi-dimension slice indexing
If you are familiar with numpy's slice indexing, then this should be a piece of cake with SimpleITK images. The Python standard slice interface for a 1-D object:
<table>
<tr><td>Operation</td> <td>Result</td></tr>
<tr><td>d[i]</td> <td>i-th item of d, starting index 0</td></tr>
<tr><td>d[i:j]</td> <td>slice of d from i to j</td></tr>
<tr><td>d[i:j:k]</td> <td>slice of d from i to j with step k</td></tr>
</table>
With this convenient syntax many basic tasks can be easily done. | img[24, 24] | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Cropping | myshow(img[16:48, :])
myshow(img[:, 16:-16])
myshow(img[:32, :32]) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Flipping | img_corner = img[:32, :32]
myshow(img_corner)
myshow(img_corner[::-1, :])
myshow(
sitk.Tile(
img_corner,
img_corner[::-1, ::],
img_corner[::, ::-1],
img_corner[::-1, ::-1],
[2, 2],
)
) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Slice Extraction
A 2D image can be extracted from a 3D one. | img = sitk.GaborSource(size=[64] * 3, frequency=0.05)
# Why does this produce an error?
myshow(img)
myshow(img[:, :, 32])
myshow(img[16, :, :]) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Subsampling | myshow(img[:, ::3, 32]) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Mathematical Operators
Most Python mathematical operators are overloaded to call the SimpleITK filter that performs the same operation on a per-pixel basis. They can operate on two images or on an image and a scalar.
If two images are used then both must have the same pixel type. The output image type is usually the same.
As these operators basically call ITK filters, which just use raw C++ operators, care must be taken to prevent overflow, division by zero, etc.
<table>
<tr><td>Operators</td></tr>
<tr><td>+</td></tr>
<tr><td>-</td></tr>
<tr><td>*</td></tr>
<tr><td>/</td></tr>
<tr><td>//</td></tr>
<tr><td>**</td></tr>
</table> | img = sitk.ReadImage(fdata("cthead1.png"))
img = sitk.Cast(img, sitk.sitkFloat32)
myshow(img)
img[150, 150]
timg = img**2
myshow(timg)
timg[150, 150] | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Division Operators
All three Python division operators are implemented: `__floordiv__`, `__truediv__`, and `__div__`.
The true division's output is a double pixel type.
See PEP 238 to see why Python changed the division operator in Python 3.
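As a quick, hedged sketch (reusing `sitk`, `fdata` and `myshow` from the cells above, and assuming the same `cthead1.png` test image), the two division flavours can be compared like this:

```python
img = sitk.ReadImage(fdata("cthead1.png"))
half_true = img / 2    # __truediv__: the result has a double pixel type
half_floor = img // 2  # __floordiv__: per-pixel floor division
myshow(half_true)
myshow(half_floor)
```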
Bitwise Logic Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>&</td></tr>
<tr><td>|</td></tr>
<tr><td>^</td></tr>
<tr><td>~</td></tr>
</table> | img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Comparative Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>></td></tr>
<tr><td>>=</td></tr>
<tr><td><</td></tr>
<tr><td><=</td></tr>
<tr><td>==</td></tr>
</table>
These comparative operators follow the same convention as the rest of SimpleITK for binary images. They have the pixel type of sitkUInt8 with values of 0 and 1. | img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
Amazingly, this makes common trivial tasks really trivial | myshow(img > 90)
myshow(img > 150)
myshow((img > 90) + (img > 150)) | Python/02_Pythonic_Image.ipynb | InsightSoftwareConsortium/SimpleITK-Notebooks | apache-2.0 |
First we define the positions of the 3 sites in the perovskite structure and specify the allowed oxidation states at each site. Note that the A site is defined as an anion (i.e. with a -1 oxidation state). | site_A = lattice.Site([0,0,0],[-1])
site_B = lattice.Site([0.5,0.5,0.5],[+5,+4])
site_C = lattice.Site([0.5,0.5,0.5],[-2,-1])
perovskite = lattice.Lattice([site_A,site_B,site_C],space_group=221) | examples/Inverse_perovskites/Inverse_formate_perovskites.ipynb | WMD-group/SMACT | mit |
Approach 1
We now search through the elements of interest (Li-Fr) and find those that are allowed on each site. In this example, we use the F- anion with an increased Shannon radius to simulate the formate anion. We access the Shannon radii data directly from the smact data directory and are interested in the octahedral (6_n) Shannon radius. | search = smact.ordered_elements(3,87) # Li - Fr
A_list = [] # will be populated with anions
B_list = [] # will be populated with cations
C_list = [['F',-1,4.47]] # is always the "formate anion"
for element in search:
with open(os.path.join(data_directory, 'shannon_radii.csv'),'r') as f:
reader = csv.reader(f)
r_shannon=False
for row in reader:
if row[2] =="6_n" and row[0]==element and int(row[1]) in site_A.oxidation_states:
A_list.append([row[0],int(row[1]),float(row[4])])
if row[2]=="6_n" and row[0]==element and int(row[1]) in site_B.oxidation_states:
B_list.append([row[0],int(row[1]),float(row[4])]) | examples/Inverse_perovskites/Inverse_formate_perovskites.ipynb | WMD-group/SMACT | mit |
NB: We access the data directly from the data directory file here for transparency. However, reading the file multiple times would slow down the code if we were looping over many (perhaps millions to billions) of compositions. As such, reading all the data in once into a dictionary, then accessing that dictionary from within a loop, could be preferable, e.g.:
```python
for element in search:
    ...
    r_shannon = shannon_radii[element][coordination]
    ...
```
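A hedged sketch of that idea, assuming the same `shannon_radii.csv` column layout used above (element, charge, coordination, ..., radius) and the `os`, `csv` and `data_directory` names from the previous cell; only the octahedral "6_n" entries used in this example are kept:

```python
shannon_radii = {}
with open(os.path.join(data_directory, 'shannon_radii.csv'), 'r') as f:
    for row in csv.reader(f):
        if row[2] == "6_n":                       # keep only octahedral entries
            element, charge = row[0], int(row[1])
            shannon_radii.setdefault(element, {})[charge] = float(row[4])

# Inside the loop over compositions the file is no longer re-read, e.g.:
# if charge in shannon_radii.get(element, {}):
#     r_shannon = shannon_radii[element][charge]
```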
We go through and apply the electronegativity order test (pauling_test) to each combo. Then, we use the Goldschmidt tolerance factor to group combinations into crystal structure types. | # We define the different categories of list we will populate
charge_balanced = []
goldschmidt_cubic = []
goldschmidt_ortho = []
a_too_large = []
A_B_similar = []
pauling_perov = []
anion_stats = []
# We exhaustively search all ABC combinations using nested for loops
for C in C_list:
anion_hex = 0
anion_cub = 0
anion_ort = 0
for B in B_list:
for A in A_list:
# We check that we have 3 different elements
if B[0] != A[0]:
# Check for charge neutrality
if int(A[1])+int(B[1])+3*int(C[1]) == 0:
charge_balanced.append([A[0],B[0],C[0]])
# We apply the pauling electronegativity test
paul_a = smact.Element(A[0]).pauling_eneg
paul_b = smact.Element(B[0]).pauling_eneg
paul_c = smact.Element(C[0]).pauling_eneg
electroneg_makes_sense = screening.pauling_test([A[1],B[1],C[1]], [paul_a,paul_b,paul_c])
if electroneg_makes_sense:
pauling_perov.append([A[0],B[0],C[0]])
# We calculate the Goldschmidt tolerance factor
tol = (float(A[2]) + C[2])/(np.sqrt(2)*(float(B[2])+C[2]))
if tol > 1.0:
a_too_large.append([A[0],B[0],C[0]])
anion_hex = anion_hex+1
if tol > 0.9 and tol <= 1.0:
goldschmidt_cubic.append([A[0],B[0],C[0]])
anion_cub = anion_cub + 1
if tol >= 0.71 and tol < 0.9:
goldschmidt_ortho.append([A[0],B[0],C[0]])
anion_ort = anion_ort + 1
if tol < 0.71:
A_B_similar.append([A[0],B[0],C[0]])
anion_stats.append([anion_hex,anion_cub,anion_ort])
print (anion_stats)
colours=['#991D1D','#8D6608','#857070']
matplotlib.rcParams.update({'font.size': 22})
plt.pie(anion_stats[0],labels=['Hex','Cubic','Ortho']
,startangle=90,autopct='%1.1f%%',colors=colours)
plt.axis('equal')
plt.savefig('Form-perovskites.png')
print ('Number of possible charge neutral perovskites from', search[0], 'to', search[len(search)-1], '=', len(charge_balanced))
print ('Number of Pauling sensible perovskites from', search[0], 'to', search[len(search)-1], '=', len(pauling_perov))
print ('Number of possible cubic perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_cubic))
print ('Number of possible ortho perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_ortho))
print ('Number of possible hexagonal perovskites from', search[0], 'to', search[len(search)-1], '=', len(a_too_large))
print ('Number of possible non-perovskites from', search[0], 'to', search[len(search)-1], '=', len(A_B_similar))
#print goldschmidt_cubic
print( "----------------------------------------------------------------")
print( "Structures identified with cubic tolerance factor 0.9 < t < 1.0 ")
print( "----------------------------------------------------------------")
for structure in goldschmidt_cubic:
print( structure[0],structure[1],'(HCOO)3')
| examples/Inverse_perovskites/Inverse_formate_perovskites.ipynb | WMD-group/SMACT | mit |
Approach 2 | # Get list of Element objects
search = [el for el in smact.ordered_elements(3,87) if
Element(el).oxidation_states]
# Covert to list of Species objects
all_species = []
for el in search:
for oxi_state in Element(el).oxidation_states:
all_species.append(Species(el,oxi_state,"6_n"))
# Define lists of interest
A_list = [sp for sp in all_species if
(sp.oxidation == -1) and (sp.ionic_radius)]
B_list = [sp for sp in all_species if
(4 <= sp.oxidation <= 5) and (sp.ionic_radius)]
C_list = [Species('F',-1,4.47)]
# We define the different categories of list we will populate
charge_balanced = []
goldschmidt_cubic = []
goldschmidt_ortho = []
a_too_large = []
A_B_similar = []
pauling_perov = []
anion_stats = []
for combo in product(A_list,B_list,C_list):
A, B, C = combo[0], combo[1], combo[2]
# Check for charge neutrality in 1:1:3 ratio
if (1,1,3) in screening.neutral_ratios(
[A.oxidation, B.oxidation, C.oxidation])[1]:
charge_balanced.append(combo)
# Check for pauling test
if screening.pauling_test([A.oxidation, B.oxidation, C.oxidation],
[A.pauling_eneg, B.pauling_eneg, C.pauling_eneg]):
pauling_perov.append(combo)
# Calculate tolerance factor
tol = (float(A.ionic_radius) + 4.47)/(np.sqrt(2)*(float(B.ionic_radius)+4.47))
if tol > 1.0:
a_too_large.append(combo)
if tol > 0.9 and tol <= 1.0:
goldschmidt_cubic.append([combo])
if tol >= 0.71 and tol < 0.9:
goldschmidt_ortho.append(combo)
if tol < 0.71:
A_B_similar.append(combo)
print ('Number of possible charge neutral perovskites from', search[0], 'to', search[len(search)-1], '=', len(charge_balanced))
print ('Number of Pauling sensible perovskites from', search[0], 'to', search[len(search)-1], '=', len(pauling_perov))
print ('Number of possible cubic perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_cubic))
print ('Number of possible ortho perovskites from', search[0], 'to', search[len(search)-1], '=', len(goldschmidt_ortho))
print ('Number of possible hexagonal perovskites from', search[0], 'to', search[len(search)-1], '=', len(a_too_large))
print ('Number of possible non-perovskites from', search[0], 'to', search[len(search)-1], '=', len(A_B_similar))
#print goldschmidt_cubic
print( "----------------------------------------------------------------")
print( "Structures identified with cubic tolerance factor 0.9 < t < 1.0 ")
print( "----------------------------------------------------------------")
for structure in goldschmidt_cubic:
print( structure[0][0].symbol,structure[0][1].symbol,'(HCOO)3')
| examples/Inverse_perovskites/Inverse_formate_perovskites.ipynb | WMD-group/SMACT | mit |
Import relevant stingray libraries. | from stingray import Lightcurve, Crossspectrum, sampledata
from stingray.simulator import simulator, models | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Initializing
Instantiate a simulator object and define a variability signal. | var = sampledata.sample_data()
# Beware: set tstart here, or nothing will work!
sim = simulator.Simulator(N=1024, mean=0.5, dt=0.125, rms=0.4, tstart=var.tstart)
| Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
For ease of analysis, define a simple delta impulse response with width 1. Here, the start parameter refers to the lag delay, which we will soon see. | delay = 10
s_ir = sim.simple_ir(start=delay, width=1) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Finally, simulate a filtered light curve. Here, filtered means that the initial lag delay portion is cut. | lc = sim.simulate(var.counts, s_ir)
plt.plot(lc.time, lc.counts)
plt.plot(var.time, var.counts) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Analysis
Compute the cross spectrum. | cross = Crossspectrum(var, lc) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Rebin the cross-spectrum for ease of visualization. | cross = cross.rebin(0.0050) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Calculate time lag. | lag = cross.time_lag() | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Plot lag. | plt.figure()
# Plot lag-frequency spectrum.
plt.plot(cross.freq, lag, 'r')
# Find cutoff points
v_cutoff = 1.0/(2*delay)
h_cutoff = lag[int((v_cutoff-0.0050)*1/0.0050)]
plt.axvline(v_cutoff, color='g',linestyle='--')
plt.axhline(h_cutoff, color='g', linestyle='-.')
# Define axis
plt.axis([0,0.2,-20,20])
plt.xlabel('Frequency (Hz)')
plt.ylabel('Lag')
plt.title('Lag-frequency Spectrum')
plt.show() | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
According to Uttley et al (2014), the lag-frequency spectrum shows a constant delay until the frequency (1/2*time_delay) which is represented by the green vertical line in the above figure. After this point, the phase wraps and the lag becomes negative.
Energy Dependent Impulse Responses
In practical situations, different channels may have different impulse responses and hence would react differently to incoming light curves. To account for this, stingray provides an option to simulate light curves and add them to corresponding energy channels.
Below, we analyse the lag-frequency spectrum in such cases.
We define two delta impulse responses with the same intensity but varying positions, each applicable to a different energy channel (say the '3.5-4.5 keV' and '4.5-5.5 keV' energy ranges). | delays = [10,20]
h1 = sim.simple_ir(start=delays[0], width=1)
h2 = sim.simple_ir(start=delays[1], width=1) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Now, we create two energy channels to simulate light curves for these two impulse responses. | sim.simulate_channel('3.5-4.5', var, h1)
sim.simulate_channel('4.5-5.5', var, h2) | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Compute cross-spectrum for each channel. | cross = [Crossspectrum(var, lc).rebin(0.005) for lc in sim.get_channels(['3.5-4.5', '4.5-5.5'])] | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Calculate lags. | lags = [c.time_lag() for c in cross] | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Get cut-off points. | v_cuts = [1.0/(2*d) for d in delays]
h_cuts = [lag[int((v_cut-0.005)*1/0.005)] for lag, v_cut in zip(lags, v_cuts)] | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Plot the lag-frequency spectra. | plt.figure()
plots = []
colors = ['r','g']
energies = ['3.5-4.5 keV', '4.5-5.5 keV']
# Plot lag-frequency spectrum
for i in range(0,len(lags)):
plots += plt.plot(cross[i].freq, lags[i], colors[i], label=energies[i])
plt.axvline(v_cuts[i],color=colors[i],linestyle='--')
plt.axhline(h_cuts[i], color=colors[i], linestyle='-.')
# Define axes and add labels
plt.axis([0,0.2,-20,20])
plt.legend()
plt.xlabel('Frequencies (Hz)')
plt.ylabel('Lags')
plt.title('Energy Dependent Frequency-lag Spectrum')
plt.show() | Simulator/Lag Analysis.ipynb | StingraySoftware/notebooks | mit |
Pre-filter data from other programs (e.g., FreeBayes, GATK)
You can use the program bcftools to pre-filter your data to exclude indels and low quality SNPs. If you ran the conda install commands above then you will have all of the required tools installed. To achieve the format that ipyrad expects you will need to exclude indel containing SNPs (this may change in the future). Further quality filtering is optional.
The example below reduced the size of a VCF data file from 29Gb to 80Mb! VCF contains a lot of information that you do not need to retain through all of your analyses. We will keep only the final genotype calls.
Note that the code below is a bash script. You can run this from a terminal, or in a jupyter notebook by adding the (%%bash) header like below. | %%bash
# compress the VCF file if not already done (creates .vcf.gz)
bgzip data.vcf
# tabix index the compressed VCF (creates .vcf.gz.tbi)
tabix data.vcf.gz
# remove multi-allelic SNPs and INDELs and PIPE to next command
bcftools view -m2 -M2 -i'CIGAR="1X" & QUAL>30' data.vcf.gz -Ou |
# remove extra annotations/formatting info and save to new .vcf
bcftools annotate -x FORMAT,INFO > data.cleaned.vcf
# recompress the final file (create .vcf.gz)
bgzip data.cleaned.vcf | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
A peek at the cleaned VCF file | # load the VCF as a dataframe
dfchunks = pd.read_csv(
"/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz",
sep="\t",
skiprows=1000,
chunksize=1000,
)
# show first few rows of first dataframe chunk
next(dfchunks).head() | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
Converting clean VCF to HDF5
Here I am using a VCF file from whole genome data for 20 monkeys from an unpublished study (in progress). It contains >6M SNPs all from chromosome 1. Because many SNPs are close together and thus tightly linked we will likely wish to take linkage into account in our downstream analyses.
The ipyrad analysis tools can do this by encoding linkage block information into the HDF5 file. Here we encode ld_block_size of 20K bp. This breaks the 1 scaffold (chromosome) into about 10K linkage blocks. See the example below of this information being used in an ipyrad PCA analysis. | # init a conversion tool
converter = ipa.vcf_to_hdf5(
name="Macaque_LD20K",
data="/home/deren/Documents/ipyrad/sandbox/Macaque-Chr1.clean.vcf.gz",
ld_block_size=20000,
)
# run the converter
converter.run() | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
Downstream analyses
The data file now contains 6M SNPs across 20 samples and N linkage blocks. By default the PCA tool subsamples a single SNP per linkage block. To explore variation over multiple random subsamplings we can use the nreplicates argument. | # init a PCA tool and filter to allow no missing data
pca = ipa.pca(
data="./analysis-vcf2hdf5/Macaque_LD20K.snps.hdf5",
mincov=1.0,
) | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
Run a single PCA analysis from subsampled unlinked SNPs | pca.run_and_plot_2D(0, 1, seed=123); | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
Run multiple PCAs over replicates of subsampled SNPs
Here you can see the results for a different set of ~10K subsampled SNPs in each replicate iteration. If the signal in the data is robust then we should expect to see the points clustering at a similar place across replicates. Internally ipyrad will rotate axes to ensure the replicate plots align despite axes swapping (which is arbitrary in PCA space). You can see this provides a better view of uncertainty in our estimates than the plot above (and it looks cool!) | pca.run_and_plot_2D(0, 1, seed=123, nreplicates=25); | newdocs/API-analysis/cookbook-vcf2hdf5.ipynb | dereneaton/ipyrad | gpl-3.0 |
Just some helper code to make things easier. metpy_units_handler plugs into siphon to automatically add units to variables. post_process_data is used to clean up some oddities from the NCSS point feature collection. | units.define('degrees_north = 1 degree')
units.define('degrees_east = 1 degree')
unit_remap = dict(inches='inHg', Celsius='celsius')
def metpy_units_handler(vals, unit):
arr = np.array(vals)
if unit:
unit = unit_remap.get(unit, unit)
arr = arr * units(unit)
return arr
# Fix dates and sorting
def sort_list(list1, list2):
return [l1 for (l1, l2) in sorted(zip(list1, list2), key=lambda i: i[1])]
def post_process_data(data):
data['time'] = [datetime.strptime(d.decode('ascii'), '%Y-%m-%d %H:%M:%SZ') for d in data['time']]
ret = dict()
for key,val in data.items():
try:
val = units.Quantity(sort_list(val.magnitude.tolist(), data['time']), val.units)
except AttributeError:
val = sort_list(val, data['time'])
ret[key] = val
return ret | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
METAR Meteogram
First we need to grab the catalog for the METAR feature collection data from http://thredds.ucar.edu/thredds/catalog.html | cat = TDSCatalog('http://thredds.ucar.edu/thredds/catalog/nws/metar/ncdecoded/catalog.xml?dataset=nws/metar/ncdecoded/Metar_Station_Data_fc.cdmr') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Set up NCSS access to the dataset | ds = list(cat.datasets.values())[0]
ncss = NCSS(ds.access_urls['NetcdfSubset'])
ncss.unit_handler = metpy_units_handler | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Create a query for the last 7 days of data for a specific lon/lat point. We should ask for: air temperature, dewpoint temperature, wind speed, and wind direction. | now = datetime.utcnow()
query = ncss.query().accept('csv')
query.lonlat_point(-97, 35.25).time_range(now - timedelta(days=7), now)
query.variables('air_temperature', 'dew_point_temperature', 'wind_speed', 'wind_from_direction') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Get the data | data = ncss.get_data(query)
data = post_process_data(data) # Fixes for NCSS point | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Heat Index
First, we need relative humidity:
$$RH = e / e_s$$ | e = mpcalc.saturation_vapor_pressure(data['dew_point_temperature'])
e_s = mpcalc.saturation_vapor_pressure(data['air_temperature'])
rh = e / e_s | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Calculate heat index: | # RH should be [0, 100]
hi = mpcalc.heat_index(data['air_temperature'], rh * 100) | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Plot the temperature, dewpoint, and heat index. Bonus points to also plot wind speed and direction. | import matplotlib.pyplot as plt
times = data['time']
fig, axes = plt.subplots(2, 1, figsize=(9, 9))
axes[0].plot(times, data['air_temperature'].to('degF'), 'r', linewidth=2)
axes[0].plot(times, data['dew_point_temperature'].to('degF'), 'g', linewidth=2)
axes[0].plot(times, hi, color='darkred', linestyle='--', linewidth=2)
axes[0].grid(True)
axes[1].plot(times, data['wind_speed'].to('mph'), 'b')
twin = plt.twinx(axes[1])
twin.plot(times, data['wind_from_direction'], 'kx') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Sounding
First grab the catalog for the Best dataset from the GSD HRRR from http://thredds.ucar.edu/thredds/catalog.html | cat = TDSCatalog('http://thredds-jumbo.unidata.ucar.edu/thredds/catalog/grib/HRRR/CONUS_3km/wrfprs/catalog.xml?dataset=grib/HRRR/CONUS_3km/wrfprs/Best') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Set up NCSS access to the dataset | best_ds = list(cat.datasets.values())[0]
ncss = NCSS(best_ds.access_urls['NetcdfSubset'])
ncss.unit_handler = metpy_units_handler | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
What variables do we have? | ncss.variables | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Set up a query for the most recent set of data from a point. We should request temperature, dewpoint, and U and V. | query = ncss.query().accept('csv')
query.lonlat_point(-105, 40).time(datetime.utcnow())
query.variables('Temperature_isobaric', 'Dewpoint_temperature_isobaric',
'u-component_of_wind_isobaric', 'v-component_of_wind_isobaric') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Get the data | data = ncss.get_data(query)
T = data['Temperature_isobaric'].to('degC')
Td = data['Dewpoint_temperature_isobaric'].to('degC')
p = data['vertCoord'].to('mbar') | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Plot a sounding of the data | fig = plt.figure(figsize=(9, 9))
skew = SkewT(fig=fig)
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.ax.set_ylim(1050, 100)
skew.plot_mixing_lines()
skew.plot_dry_adiabats()
skew.plot_moist_adiabats() | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Also calculate the parcel profile and add that to the plot | prof = mpcalc.parcel_profile(p[::-1], T[-1], Td[-1])
skew.plot(p[::-1], prof.to('degC'), 'k', linewidth=2)
fig | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Let's also plot the location of the LCL and the 0°C isotherm: | lcl = mpcalc.lcl(p[-1], T[-1], Td[-1])
lcl_temp = mpcalc.dry_lapse(concatenate((p[-1], lcl)), T[-1])[-1].to('degC')
skew.plot(lcl, lcl_temp, 'bo')
skew.ax.axvline(0, color='blue', linestyle='--', linewidth=2)
fig | talks/MetPy Exercise.ipynb | Unidata/MetPy | bsd-3-clause |
Let's start with the original regular expression and
string to search from Travis'
regex problem. | pattern = re.compile(r"""
(?P<any>any4?) # "any"
# association
| # or
(?P<object_eq>object ([\w-]+) eq (\d+)) # object
# alone
# association
| # or
(?P<object_range>object ([a-z0-9A-Z-]+) range (\d+) (\d+)) # object range
# association
| # or
(?P<object_group>object-group ([a-z0-9A-Z-]+)) # object group
# association
| # or
(?P<object_alone>object ([[a-z0-9A-Z-]+)) # object alone
# association
""", re.VERBOSE)
s = ''' object-group jfi-ip-ranges object DA-TD-WEB01 eq 8850
''' | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
The regex had two bugs.
- Two [[ near the end of the pattern string.
- The significant spaces in the pattern (such as after object-group) were being ignored because of re.VERBOSE.
So those bugs are fixed in the pattern below. | pattern = re.compile(r"""
(?P<any>any4?) # "any"
# association
| # or
(?P<object_eq>object\ ([\w-]+)\ eq\ (\d+)) # object
# alone
# association
| # or
(?P<object_range>object\ ([a-z0-9A-Z-]+)\ range\ (\d+)\ (\d+)) # object range
# association
| # or
(?P<object_group>object-group\ ([a-z0-9A-Z-]+)) # object group
# association
| # or
(?P<object_alone>object\ ([a-z0-9A-Z-]+)) # object alone
# association
""", re.VERBOSE)
re.findall(pattern, s)
for m in re.finditer(pattern, s):
print(repr(m))
print('groups', m.groups())
print('groupdict', m.groupdict()) | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
The above works, but keeping track of the indexes of the unnamed groups drives me crazy. So I add names for all groups. | pattern = re.compile(r"""
(?P<any>any4?) # "any"
# association
| # or
(?P<object_eq>object\ (?P<oe_name>[\w-]+)\ eq\ (?P<oe_i>\d+)) # object
# alone
# association
| # or
(?P<object_range>object\ (?P<or_name>[a-z0-9A-Z-]+)
\ range\ (?P<oe_r_start>\d+)\ (?P<oe_r_end>\d+)) # object range
# association
| # or
(?P<object_group>object-group\ (?P<og_name>[a-z0-9A-Z-]+)) # object group
# association
| # or
(?P<object_alone>object\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone
# association
""", re.VERBOSE)
for m in re.finditer(pattern, s):
print(repr(m))
print('groups', m.groups())
print('groupdict', m.groupdict()) | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
The following shows me just the groups that matched. | for m in re.finditer(pattern, s):
for key, value in m.groupdict().items():
if value is not None:
print(key, repr(value))
print() | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
Looking at the above,
I see that I probably don't care about the big groups,
just the parameters,
so I remove the big groups (except for "any")
from the regular expression. | pattern = re.compile(r"""
(?P<any>any4?) # "any"
# association
| # or
(object\ (?P<oe_name>[\w-]+)\ eq\ (?P<oe_i>\d+)) # object
# alone
# association
| # or
(object\ (?P<or_name>[a-z0-9A-Z-]+)
\ range\ (?P<oe_r_start>\d+)\ (?P<oe_r_end>\d+)) # object range
# association
| # or
(object-group\ (?P<og_name>[a-z0-9A-Z-]+)) # object group
# association
| # or
(object\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone
# association
""", re.VERBOSE) | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
Now it tells me just the meat of what I want to know. | for m in re.finditer(pattern, s):
for key, value in m.groupdict().items():
if value is not None:
print(key, repr(value))
print() | 20160826-dojo-regex-travis.ipynb | james-prior/cohpy | mit |
Since the variable answer here is defined within each function separately, you can reuse the same variable name, as the scope of each variable is different.
Note : Functions, however, can access variables that are defined outside of their scope or in the larger scope, but can only read the value of such a variable, not modify it. This is shown by the UnboundLocalError in the example below. | egg_count = 0
def buy_eggs():
egg_count += 12 # purchase a dozen eggs
# buy_eggs() | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
In such situations it's better to redefine the function as below. | egg_count = 0
def buy_eggs():
return egg_count + 12
egg_count = buy_eggs()
print(egg_count)
egg_count = buy_eggs()
print(egg_count) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
List Basics
In Python, it is possible to create a list of values. Each item in the list is called an element and can be accessed individually using a zero-based index. This avoids the need to create multiple variables to store individual values.
Note: negative indexes help access elements from the end of the list. -1 refers to the last element, -2 refers to the second last element, and so on. | # list of numbers of type Integer
numbers = [1, 2, 3, 4, 5]
print("List :", numbers)
print("Second element :", numbers[1]) ## 2
print("Length of list :",len(numbers)) ## 5
print() # Empty line
# list of strings
colors = ['red', 'blue', 'green']
print("List :", colors)
print ("First color :", colors[0]) ## red
print ("Third color :", colors[2]) ## green
print ("Last color :", colors[-1]) ## green
print ("Second last color :", colors[-2]) ## blue
print ("Length of list :",len(colors)) ## 3
print() # Empty line
# list with multiple variable types
me = ['Shantanu Kamath', 'Computer Science', 20, 1000000]
print("List :", me)
print("Fourth element :", me[3]) ## 1000000
print("Length of list :", len(me)) ## 4 | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Since lists are considered to be sequentially ordered, they support a number of operations that can be applied to any Python sequence.
|Operation Name|Operator|Explanation|
|:-------------|:-------|:----------|
|Indexing|[ ]|Access an element of a sequence|
|Concatenation|+|Combine sequences together|
|Repetition|*|Concatenate a repeated number of times|
|Membership|in|Ask whether an item is in a sequence|
|Length|len|Ask the number of items in the sequence|
|Slicing|[ : ]|Extract a part of a sequence| | myList = [1,2,3,4]
# Indexing
A = myList[2]
print(A)
# Repititoin
A = [A]*3
print(A)
# Concatenation
print(myList + A)
# Membership
print(1 in myList)
# Length
print(len(myList))
# Slicing [inclusive : exclusive]
print(myList[1:3])
# Leaving the exclusive parameter empty
print(myList[-3:]) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Mutability
Strings are immutable and lists are mutable.
For example : | # Creating sentence and list form of sentence
name = "Welcome to coding with Python v3.6"
words = ["Welcome", "to", "coding", "with", "Python", "v3.6"]
print(name[4])
print(words[4])
# This is okay
words[5] = "v2.7"
print(words)
# This is not
# name[5] = "d"
# print(name) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Passed by reference
The list is stored at a memory location, and the variable only holds a reference to that location. So changes applied through one variable are reflected in the other variables as well. | langs = ["Python", "Java", "C++", "C"]
languages = langs
langs.append("C#")
print(langs)
print(languages) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
List Methods
Besides simple accessing of values, lists have a large variety of methods that are used to perform different useful manipulations on them.
Some of them are:
list.append(element): adds a single element to the end of the list. Common error: does not return the new list, just modifies the original. | # list.append example
names = ['Hermione Granger', 'Ronald Weasley']
names.append('Harry Potter')
print("New list :", names) ## ['Hermione Granger', 'Ronald Weasley', 'Harry Potter'] | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.insert(index, element): inserts the element at the given index, shifting elements to the right. | # list.insert example
names = ['Ronald Weasley', 'Hermione Granger']
names.insert(1, 'Harry Potter')
print("New list :", names) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger'] | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.extend(list2): adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend(). | # list.extend example
MainChar = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
SupChar = ['Neville Longbottom', 'Luna Lovegood']
MainChar.extend(SupChar)
print("Full list :", MainChar) ## ['Ronald Weasley', 'Harry Potter', 'Hermione Granger', 'Neville Longbottom', 'Luna Lovegood'] | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.index(element): searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use 'in' to check without a ValueError). | # list.index example
names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
index = names.index('Harry Potter')
print("Index of Harry Potter in list :",index) ## 1
# Throws a ValueError (Uncomment to see error.)
# index = names.index('Albus Dumbledore') | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.remove(element): searches for the first instance of the given element and removes it (throws ValueError if not present) | names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
index = names.remove('Harry Potter') ## ['Ronald Weasley', 'Hermione Granger']
print("Modified list :", names)
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.pop(index): removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()). | names = ['Ronald Weasley', 'Harry Potter', 'Hermione Granger']
index = names.pop(1)
print("Modified list :", names) ## ['Ronald Weasley', 'Hermione Granger'] | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.sort(): sorts the list in place (does not return it). (The sorted() function shown below is preferred.) | alphabets = ['a', 'f','c', 'e','b', 'd']
alphabets.sort();
print ("Sorted list :", alphabets) ## ['a', 'b', 'c', 'd', 'e', 'f']
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
list.reverse(): reverses the list in place (does not return it). | alphabets = ['a', 'b', 'c', 'd', 'e', 'f']
alphabets.reverse()
print("Reversed list :", alphabets) ## ['f', 'e', 'd', 'c', 'b', 'a'] | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Other methods include (a short example follows this list):
- Count : list.count()
- Delete : del list[index]
- Join : "[Separator string]".join(list)
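A small illustrative sketch of these three:

```python
values = ['a', 'b', 'a', 'c']
print(values.count('a'))   # Count: 2
del values[1]              # Delete the element at index 1
print(values)              # ['a', 'a', 'c']
print('-'.join(values))    # Join: 'a-a-c'
```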
List Comprehensions
In Python, List comprehensions provide a concise way to create lists. Common applications are to make new lists where each element is the result of some operations applied to each member of another sequence, or to create a subsequence of those elements that satisfy a certain condition.
They can be used to construct lists in a very natural, easy way, much like a mathematician would.
This is how we can explain sets in maths:
- Squares = {x² : x in {0 ... 9}}
- Exponents = (1, 2, 4, 8, ..., 2¹²)
- EvenSquares = {x | x in S and x even}
Let's try to do this in Python using normal loops and list methods: | # Using loops and list methods
squares = []
for x in range(10):
squares.append(x**2)
print("Squares :", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
exponents = []
for i in range(13):
exponents.append(2**i)
print("Exponents :", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
evenSquares = []
for x in squares:
if x % 2 == 0:
evenSquares.append(x)
print("Even Squares :", evenSquares) ## [0, 4, 16, 36, 64]
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
These take several lines each. But by using list comprehensions you can bring each of them down to just one line. | # Using list comprehensions
squares = [x**2 for x in range(10)]
exponents = [2**i for i in range(13)]
evenSquares = [x for x in squares if x % 2 == 0]
print("Squares :", squares) ## [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
print("Exponents :", exponents) ## [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096]
print("Even Squares :", evenSquares) ## [0, 4, 16, 36, 64]
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Searching
Searching is the process of finding a particular item in a collection of items. It is one of the most common problems that arise in computer programming. A search typically answers either True or False as to whether the item is present.
In Python, there is a very easy way to ask whether an item is in a list of items. We use the in operator. | # Using in to check if number is present in the list.
print(15 in [3,5,2,4,1])
print('Work' in 'Python Advanced Workshop')
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Sometimes it can be important to get the position of the searched value. In that case, we can use the index method for lists and the find method for strings. | # Using index to get position of the number if present in list.
# In case of lists, its important to remember that the index function will throw an error if the value isn't present in the list.
values = [3,5,2,4,1]
if 5 in values:
print("Value present at",values.index(5)) ## 1
else:
print("Value not present in list")
# Using find to get the index of the first occurrence of the word in a sentence.
sentence = "This be a string"
index = sentence.find("is")
if index == -1:
print("There is no 'is' here!")
else:
print("Found 'is' in the sentence at position "+str(index))
# Using index to find words in a list of words
sentence = "This be a string"
words = sentence.split(' ')
if 'is' in words:
print("Found 'is' in the list at position "+str(words.index('is')))
else:
print("There is no 'is' here!") | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
For more efficient Search Algorithms, look through the Algorithm Implementation section of this repository
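As a small taste of what a more efficient algorithm looks like, here is a hedged sketch of binary search, which only works on a list that is already sorted:

```python
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2        # look at the middle element
        if items[mid] == target:
            return mid                 # found: return its index
        if items[mid] < target:
            low = mid + 1              # discard the lower half
        else:
            high = mid - 1             # discard the upper half
    return -1                          # not present

print(binary_search([1, 2, 3, 4, 5], 4))   # 3
print(binary_search([1, 2, 3, 4, 5], 6))   # -1
```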
Sorting
Sorting is the process of placing elements from a collection in some kind of order.
For example, a list of words could be sorted alphabetically or by length.
A list of cities could be sorted by population, by area, or by zip code.
Python lists have a built-in sort() method that modifies the list in-place and a sorted() built-in function that builds a new sorted list from an iterable.
list.sort(): Modifies existing list and can be used only with lists.
sorted(list): Creates a new list when called and can be used with other iterables.
Basic sorting functions
The most basic use of the sorted function can be seen below : | # Using sort() with a list.
values = [7, 4, 3, 6, 1, 2, 5]
print("Unsorted list :", values) ## [7, 4, 3, 6, 1, 2, 5]
newValues = values.sort()
print("New list :", newValues) ## None
print("Old list :", values) ## [1, 2, 3, 4, 5, 6, 7]
print()
# Using sorted() with a list.
values = [7, 4, 3, 6, 1, 2, 5]
print("Unsorted list :", values) ## [7, 4, 3, 6, 1, 2, 5]
newValues = sorted(values)
print("New list :", newValues) ## [1, 2, 3, 4, 5, 6, 7]
print("Old list :", values) ## [7, 4, 3, 6, 1, 2, 5]
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Sorting using additional key
For more complex custom sorting, sorted() takes an optional "key=" specifying a "key" function that transforms each element before comparison.
The key function takes in 1 value and returns 1 value, and the returned "proxy" value is used for the comparisons within the sort. | # Using key in sorted
values = ['ccc', 'aaaa', 'd', 'bb']
print (sorted(values, key=len)) ## ['d', 'bb', 'ccc', 'aaaa']
# Remember case sensitivity : All upper case characters come before lower case character in an ascending sequence.
sentence = "This is a test string from Andrew"
print(sorted(sentence.split(), key=str.lower)) ## ['a', 'Andrew', 'from', 'is', 'string', 'test', 'This']
# Using reverse for ascending and descending
strs = ['aa', 'BB', 'zz', 'CC']
print (sorted(strs)) ## ['BB', 'CC', 'aa', 'zz'] (case sensitive)
print (sorted(strs, reverse=True)) ## ['zz', 'aa', 'CC', 'BB']
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Basics on Class and OOP
This section is built around the fundamentals of Object Oriented Programming (OOP).
It aims at strengthening the basics but doesn't do justice to the broad topic on its own. As OOP is a very important programming concept, you should read further to get a better grip on Python and a deeper understanding of how it is useful and essential to programming.
Below are some essential resources :
- Improve Your Python: Python Classes and Object Oriented Programming
- Learn Python The Hard Way
- Python For Beginners
- A Byte Of Python
OOP
In all the code we wrote till now, we have designed our program around functions i.e. blocks of statements which manipulate data. This is called the procedure-oriented way of programming.
There is another way of organizing your program which is to combine data and functionality and wrap it inside something called an object. This is called the object oriented programming paradigm.
Most of the time you can use procedural programming, but when writing large programs, or when you have a problem that is better suited to this method, you can use object oriented programming techniques.
Classes and Objects
Classes and objects are the two main aspects of object oriented programming. A class creates a new type where objects are instances of the class.
Objects can store data using ordinary variables that belong to the object. Variables that belong to an object or class are referred to as fields or attributes.
Objects can also have functionality by using functions that belong to a class. Such functions are called methods of the class.
The simplest class possible is shown in the following example : | class Person:
pass # An empty block
p = Person()
print(p)
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Methods
Class methods have only one specific difference from ordinary functions - they must have an extra first name that has to be added to the beginning of the parameter list, but you do not give a value for this parameter when you call the method, Python will provide it. This particular variable refers to the object itself, and by convention, it is given the name self. | class Person:
def say_hi(self):
print('Hello, how are you?')
p = Person()
p.say_hi()
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
The `__init__` method
There are many method names which have special significance in Python classes. We will see the significance of the `__init__` method now.
The `__init__` method is run as soon as an object of a class is instantiated. The method is useful to do any initialization you want to do with your object. Notice the double underscores both at the beginning and at the end of the name. | class Person:
def __init__(self, name):
self.name = name
def say_hi(self):
print('Hello, my name is', self.name)
p = Person('Shantanu')
p.say_hi() | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
Object variables
Now let us learn about the data part. The data part, i.e. fields, are nothing but ordinary variables that are bound to the namespaces of the classes and objects. This means that these names are valid within the context of these classes and objects only. That's why they are called name spaces.
There are two types of fields - class variables and object variables which are classified depending on whether the class or the object owns the variables respectively.
Class variables are shared - they can be accessed by all instances of that class. There is only one copy of the class variable and when any one object makes a change to a class variable, that change will be seen by all the other instances.
Object variables are owned by each individual object/instance of the class. In this case, each object has its own copy of the field i.e. they are not shared and are not related in any way to the field by the same name in a different instance.
An example will make this easy to understand. | class Robot:
## Represents a robot, with a name.
# A class variable, counting the number of robots
population = 0
def __init__(self, name):
## Initializes the data.
self.name = name
print("(Initializing {})".format(self.name))
# When this robot is created, it
# adds to the population
Robot.population += 1
def die(self):
## I am dying.
print("{} is being destroyed!".format(self.name))
Robot.population -= 1
if Robot.population == 0:
print("{} was the last one.".format(self.name))
else:
print("There are still {:d} robots working.".format(
Robot.population))
def say_hi(self):
## Greeting by the robot. Yeah, they can do that.
print("Greetings, my masters call me {}.".format(self.name))
@classmethod
def how_many(cls):
## Prints the current population.
print("We have {:d} robots.".format(cls.population))
droid1 = Robot("R2-D2")
droid1.say_hi()
Robot.how_many()
droid2 = Robot("C-3PO")
droid2.say_hi()
Robot.how_many()
print("\nRobots can do some work here.\n")
print("Robots have finished their work. So let's destroy them.")
droid1.die()
droid2.die()
Robot.how_many()
| 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
How It Works
This is a long example but helps demonstrate the nature of class and object variables. Here, population belongs to the Robot class and hence is a class variable. The name variable belongs to the object (it is assigned using self) and hence is an object variable.
Thus, we refer to the population class variable as Robot.population and not as self.population. We refer to the object variable name using self.name notation in the methods of that object. Remember this simple difference between class and object variables. Also note that an object variable with the same name as a class variable will hide the class variable!
Instead of Robot.population, we could have also used `self.__class__.population`, because every object refers to its class via the `self.__class__` attribute.
The how_many is actually a method that belongs to the class and not to the object. This means we can define it as either a classmethod or a staticmethod depending on whether we need to know which class we are part of. Since we refer to a class variable, let's use classmethod.
We have marked the how_many method as a class method using a decorator.
Decorators can be imagined to be a shortcut to calling a wrapper function, so applying the @classmethod decorator is the same as calling:
how_many = classmethod(how_many)
Observe that the `__init__` method is used to initialize the Robot instance with a name. In this method, we increase the population count by 1 since we have one more robot being added. Also observe that the value of self.name is specific to each object, which indicates the nature of object variables.
Remember that you must refer to the variables and methods of the same object using self only. This is called an attribute reference.
All class members are public. One exception: if you use data members with names using the double underscore prefix such as `__privatevar`, Python uses name-mangling to effectively make it a private variable.
Thus, the convention followed is that any variable that is to be used only within the class or object should begin with an underscore and all other names are public and can be used by other classes/objects. Remember that this is only a convention and is not enforced by Python (except for the double underscore prefix).
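A tiny illustrative sketch of that name-mangling behaviour (the class and attribute names here are made up for the example):

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance          # mangled to _Account__balance

acc = Account(100)
# print(acc.__balance)                    # AttributeError: no attribute '__balance'
print(acc._Account__balance)              # 100 -- the mangled name still exists
```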
There are more concepts in OOP such as Inheritance, Abstraction and Polymorphism, which would require a lot more time to cover. You may refer to reference material for explanation on these topics.
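Purely as a teaser (a minimal sketch, not a substitute for the reference material above), inheritance and method overriding look like this:

```python
class Animal:
    def __init__(self, name):
        self.name = name
    def speak(self):
        print(self.name, "makes a sound")

class Dog(Animal):        # Dog inherits everything from Animal...
    def speak(self):      # ...and overrides speak (polymorphism)
        print(self.name, "barks")

Dog("Rex").speak()        # Rex barks
```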
File I/O
File handling is super simplified in Python compared to other programming languages.
The first thing you’ll need to know is Python’s built-in open function to get a file object.
The open function opens a file. When you use the open function, it returns something called a file object. File objects contain methods and attributes that can be used to collect information about the file you opened. They can also be used to manipulate said file.
For example, the mode attribute of a file object tells you which mode a file was opened in. And the name attribute tells you the name of the file that the file object has opened.
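For example, a short hedged sketch (assuming a file called testfile.txt exists, as created later in this section):

```python
file = open("testfile.txt", "r")
print(file.mode)   # 'r'  -- the mode the file was opened in
print(file.name)   # 'testfile.txt'
file.close()
```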
File Types
In Python, a file is categorized as either text or binary, and the difference between the two file types is important.
Text files are structured as a sequence of lines, where each line includes a sequence of characters. This is what you know as code or syntax.
Each line is terminated with a special character, called the EOL or End of Line character. There are several types, but the most common is the comma {,} or newline character. It ends the current line and tells the interpreter a new one has begun.
A backslash character can also be used, and it tells the interpreter that the next character – following the slash – should be treated as a new line. This character is useful when you don’t want to start a new line in the text itself but in the code.
A binary file is any type of file that is not a text file. Because of their nature, binary files can only be processed by an application that knows or understands the file's structure. In other words, they must be applications that can read and interpret binary.
Open ( ) Function
The syntax to open a file object in Python is:
```python
file_object = open("filename", "mode")  ## where file_object is the variable that holds the file object
```
The second argument you see – mode – tells the interpreter and developer which way the file will be used.
Mode
Including a mode argument is optional because a default value of r will be assumed if it is omitted.
The modes are:
r – Read mode which is used when the file is only being read
w – Write mode which is used to edit and write new information to the file (any existing files with the same name will be erased when this mode is activated)
a – Appending mode, which is used to add new data to the end of the file; that is new information is automatically amended to the end
r+ – Special read and write mode, which is used to handle both actions when working with a file
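For instance, the append mode from the list above adds to a file without erasing its current contents (a hedged sketch, assuming the testfile.txt created in the next subsection):

```python
file = open("testfile.txt", "a")   # "a": append instead of overwrite
file.write("This line is added to whatever is already in the file.\n")
file.close()
```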
Create a text file
Using a simple text editor, let’s create a file. You can name it anything you like, and it’s better to use something you’ll identify with.
For the purpose of this workshop, however, we are going to call it "testfile.txt".
Just create the file and leave it blank.
To manipulate the file :
```python
file = open("testfile.txt","w")
file.write("Hello World")
file.write("This is our new text file")
file.write("and this is another line.")
file.write("Why? Because we can.")
file.close()
```
Reading a text file
The following methods allow reading a file:
- file.read(): extracts a string that contains all characters in the file.
  ```python
  file = open("testfile.txt", "r")
  print(file.read())
  ```
- file.read(numberOfCharacters): extracts only a certain number of characters.
  ```python
  file = open("testfile.txt", "r")
  print(file.read(5))
  ```
- file.readline(): reads a file line by line – as opposed to pulling the content of the entire file at once.
  ```python
  file = open("testfile.txt", "r")
  print(file.readline())
  ```
- file.readline(size): reads at most size characters of the current line (it does not jump to a specific line number).
  ```python
  file = open("testfile.txt", "r")
  print(file.readline(3))
  ```
- file.readlines(): returns every line in the file, properly separated in a list.
  ```python
  file = open("testfile.txt", "r")
  print(file.readlines())
  ```
Looping over file
When you want to read – or return – all the lines from a file in a more memory-efficient and fast manner, you can use the loop-over method. The advantage of using this method is that the related code is both simple and easy to read.
```python
file = open("testfile.txt", "r")
for line in file:
    print(line)
```
Using the File Write Method
This method is used to add information or content to an existing file. To start a new line after you write data to the file, you can add an EOL ("\n") character.
```python
file = open("testfile.txt", "w")
file.write("This is a test")
file.write("To add more lines.")
file.close()
```
Closing a file
When you're done working, you can use the file.close() method to end things. What this does is close the file completely, terminating resources in use, in turn freeing them up for the system to deploy elsewhere.
It's important to understand that once you call the file.close() method, any further attempts to use the file object will fail.
```python
file.close()
```
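To see what such a failure looks like, a small hedged sketch (again assuming testfile.txt exists):

```python
file = open("testfile.txt", "r")
file.close()
try:
    file.read()
except ValueError as err:
    print(err)   # I/O operation on closed file
```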
Exception Handling
There are two types of errors that typically occur when writing programs. The first, known as a syntax error, simply means that the programmer has made a mistake in the structure of a statement or expression. For example, it is incorrect to write a for statement and forget the colon. | # ( Uncomment to see Syntax error. )
# for i in range(10) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
The other type of error, known as a logic error, denotes a situation where the program executes but gives the wrong result. This can be due to an error in the underlying algorithm or an error in your translation of that algorithm. In some cases, logic errors lead to very bad situations such as trying to dividing by zero or trying to access an item in a list where the index of the item is outside the bounds of the list. In this case, the logic error leads to a runtime error that causes the program to terminate. These types of runtime errors are typically called exceptions.
When an exception occurs, we say that it has been raised. You can handle the exception that has been raised by using a try statement. For example, consider the following session that asks the user for an integer and then calls the square root function from the math library. If the user enters a value that is greater than or equal to 0, the print will show the square root. However, if the user enters a negative value, the square root function will report a ValueError exception. | import math
anumber = int(input("Please enter an integer "))
# Give input as negative number and also see output from next code snippet
print(math.sqrt(anumber)) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
We can handle this exception by calling the print function from within a try block. A corresponding except block catches the exception and prints a message back to the user in the event that an exception occurs. For example: | try:
print(math.sqrt(anumber))
except:
print("Bad Value for square root")
print("Using absolute value instead")
print(math.sqrt(abs(anumber))) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
It is also possible for a programmer to cause a runtime exception by using the raise statement. For example, instead of calling the square root function with a negative number, we could have checked the value first and then raised our own exception. The code fragment below shows the result of creating a new RuntimeError exception. Note that the program would still terminate but now the exception that caused the termination is something explicitly created by the programmer. | if anumber < 0:
raise RuntimeError("You can't use a negative number")
else:
print(math.sqrt(anumber)) | 2. Python Advanced.ipynb | ShantanuKamath/PythonWorkshop | mit |
General information on the Gapminder data | display(Markdown("Number of countries: {}".format(len(data))))
display(Markdown("Number of variables: {}".format(len(data.columns))))
# Convert interesting variables in numeric format
for variable in ('internetuserate', 'suicideper100th', 'employrate'):
data[variable] = pd.to_numeric(data[variable], errors='coerce') | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
But the unemployment rate is not provided directly. In the database, the employment rate (% of the population) is available. So the unemployment rate will be computed as 100 - employment rate: | data['unemployrate'] = 100. - data['employrate'] | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
The first records of the data restricted to the three analyzed variables are: | subdata = data[['internetuserate', 'suicideper100th', 'unemployrate']]
subdata.head(10) | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
Data analysis
As all three variables are continuous, we will now look at their frequencies after grouping the values into intervals with the cut function.
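As a quick illustration of the cut function (a toy example, unrelated to the Gapminder data), pd.cut assigns each value to the interval it falls into, and the result can be counted like any categorical series:

```python
import numpy as np
import pandas as pd

toy = pd.Series([3., 42., 57., 98.])
toy_bins = pd.cut(toy, bins=np.linspace(0., 100., num=5))  # four intervals of width 25
print(toy_bins.value_counts(sort=False))
```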
Internet use rate frequencies | display(Markdown("Internet Use Rate (min, max) = ({0:.2f}, {1:.2f})".format(subdata['internetuserate'].min(), subdata['internetuserate'].max())))
internetuserate_bins = pd.cut(subdata['internetuserate'],
bins=np.linspace(0, 100., num=21))
counts1 = internetuserate_bins.value_counts(sort=False, dropna=False)
percentage1 = internetuserate_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts1,
'Cumulative counts' : counts1.cumsum(),
'Percentages' : percentage1,
'Cumulative percentages' : percentage1.cumsum()
}
internetrate_summary = pd.DataFrame(data_struct)
internetrate_summary.index.name = 'Internet use rate (per 100 people)'
(internetrate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'})) | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
Suicide per 100,000 people frequencies | display(Markdown("Suicide per 100,000 people (min, max) = ({:.2f}, {:.2f})".format(subdata['suicideper100th'].min(), subdata['suicideper100th'].max())))
suiciderate_bins = pd.cut(subdata['suicideper100th'],
bins=np.linspace(0, 40., num=21))
counts2 = suiciderate_bins.value_counts(sort=False, dropna=False)
percentage2 = suiciderate_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts2,
'Cumulative counts' : counts2.cumsum(),
'Percentages' : percentage2,
'Cumulative percentages' : percentage2.cumsum()
}
suiciderate_summary = pd.DataFrame(data_struct)
suiciderate_summary.index.name = 'Suicide (per 100 000 people)'
(suiciderate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'})) | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
Unemployment rate frequencies | display(Markdown("Unemployment rate (min, max) = ({0:.2f}, {1:.2f})".format(subdata['unemployrate'].min(), subdata['unemployrate'].max())))
unemployment_bins = pd.cut(subdata['unemployrate'],
bins=np.linspace(0, 100., num=21))
counts3 = unemployment_bins.value_counts(sort=False, dropna=False)
percentage3 = unemployment_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts3,
'Cumulative counts' : counts3.cumsum(),
'Percentages' : percentage3,
'Cumulative percentages' : percentage3.cumsum()
}
unemployment_summary = pd.DataFrame(data_struct)
unemployment_summary.index.name = 'Unemployment rate (% population age 15+)'
(unemployment_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'})) | Making_Data_Management.ipynb | fcollonval/coursera_data_visualization | mit |
Decision Tree Classification | from plots import plot_tree_interactive
plot_tree_interactive() | 05.1 Trees and Forests.ipynb | amueller/advanced_training | bsd-2-clause |
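For reference, here is a minimal non-interactive sketch of fitting a single decision tree with scikit-learn on the two-moons toy data (the same data the random forest below uses); it is an illustration, not part of the original notebook:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("train accuracy: {:.2f}".format(tree.score(X_train, y_train)))
print("test accuracy: {:.2f}".format(tree.score(X_test, y_test)))
```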
Random Forests | from plots import plot_forest_interactive
plot_forest_interactive()
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
X, y = make_moons(n_samples=100, noise=0.25, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
forest = RandomForestClassifier(n_estimators=5, random_state=2)
forest.fit(X_train, y_train)
fig, axes = plt.subplots(2, 3, figsize=(20, 10))
for i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):
ax.set_title("tree %d" % i)
mglearn.plots.plot_tree_partition(X_train, y_train, tree, ax=ax)
mglearn.plots.plot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)
axes[-1, -1].set_title("random forest")
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train, s=60, cmap=mglearn.cm2) | 05.1 Trees and Forests.ipynb | amueller/advanced_training | bsd-2-clause |
Selecting the Optimal Estimator via Cross-Validation | from sklearn.model_selection import GridSearchCV
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
boston = load_boston()
X, y = boston.data, boston.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, n_jobs=-1)
parameters = {'max_features':['sqrt', 'log2'],
'max_depth':[5, 7, 9]}
grid = GridSearchCV(rf, parameters, cv=5)
grid.fit(X_train, y_train)
grid.score(X_train, y_train)
grid.score(X_test, y_test)
grid.best_estimator_.feature_importances_ | 05.1 Trees and Forests.ipynb | amueller/advanced_training | bsd-2-clause |
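To see which hyper-parameter combination the grid search selected, the fitted GridSearchCV object exposes best_params_ and best_score_ (standard scikit-learn attributes):

```python
print(grid.best_params_)
print("best cross-validation score: {:.2f}".format(grid.best_score_))
```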
Prepare the pipeline
- (str) filepath: path to the csv file
- (str) y_col: the column to predict
- (bool) regression: regression or classification?
- (bool) process: (WARNING) apply some preprocessing to your data (tuned with the parameters below)
- (char) sep: delimiter
- (list) col_to_drop: columns you don't want to use in the prediction
- (bool) derivate: for every pair of features, build derived features such as n1 * n2, n1 / n2, ...
- (bool) transform: for every feature, apply log(n), sqrt(n), square(n)
- (bool) scaled: scale the data?
- (bool) infer_datetime: check the type of every column and, for date columns, build new columns from them (day, month, year, time)
- (str) encoding: data encoding
- (bool) dummify: create dummy variables from your categorical variables
The data files have been generated with sklearn.datasets.make_classification | cls = Baboulinet(filepath="toto.csv", y_col="predict", regression=False) | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit
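For illustration only, here is a hypothetical call that enables more of the optional flags listed above; the keyword names are assumed from that parameter list, so check the Mozinor documentation before relying on them:

```python
# Keyword names assumed from the parameter list above (not verified against the API).
cls_full = Baboulinet(filepath="toto.csv", y_col="predict", regression=False,
                      sep=",", col_to_drop=["id"], process=True,
                      derivate=False, transform=True, scaled=True, dummify=True)
```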
Now run the pipeline
This may take some time. | res = cls.babouline() | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit
The class instance now contains two objects: the best model for this data and the best stacking for this data.
To auto-generate the code of the model
Generate the code for the best model | cls.bestModelScript() | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit |
Generate the code for the best stacking | cls.bestStackModelScript() | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit |
To check which model is the best
Best model | res.best_model
show = """
Model: {},
Score: {}
"""
print(show.format(res.best_model["Estimator"], res.best_model["Score"])) | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit |
Best stacking | res.best_stack_models
show = """
FirstModel: {},
SecondModel: {},
Score: {}
"""
print(show.format(res.best_stack_models["Fit1stLevelEstimator"], res.best_stack_models["Fit2ndLevelEstimator"], res.best_stack_models["Score"])) | mozinor/example/Mozinor example Class.ipynb | Jwuthri/Mozinor | mit |
Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution", which is the name you'll find in the TensorFlow API, as tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al. claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. | learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='input')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) | autoencoder/Convolutional_Autoencoder.ipynb | danresende/deep-learning | mit |
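Once this network has been trained (with a loop analogous to the denoising one further down), reconstructions can be obtained by running the decoded tensor on held-out images. A minimal sketch, assuming a live session sess and the mnist dataset object loaded earlier in the notebook:

```python
# Assumes `sess` is a tf.Session whose variables have been trained,
# and `mnist` is the MNIST dataset object loaded earlier in the notebook.
test_imgs = mnist.test.images[:10].reshape((-1, 28, 28, 1))
reconstructed = sess.run(decoded, feed_dict={inputs_: test_imgs})
print(reconstructed.shape)  # (10, 28, 28, 1)
```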
Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, that is, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. | learning_rate = 0.001
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), (2, 2), padding='SAME')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), (2, 2), padding='SAME')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2, 2), (2, 2), padding='SAME')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7, 7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14, 14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28, 28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3, 3), padding='SAME', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3, 3), padding='SAME', activation=None)
#Now 28x28x1
# Pass logits through sigmoid to get reconstructed image
decoded = tf.nn.sigmoid(logits)
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost)) | autoencoder/Convolutional_Autoencoder.ipynb | danresende/deep-learning | mit |