Columns: markdown | code | path | repo_name | license
The function trimCatalog is a convenience function that returns only those sources well enough isolated for PSF generation. It rejects any source within 30 pixels of another source, any source with a peak pixel above 70,000, and any source that SExtractor has flagged for whatever reason. We may fold this into psfStarChooser in the future.
def trimCatalog(cat):
    good = []
    for i in range(len(cat['XWIN_IMAGE'])):
        try:
            a = int(cat['XWIN_IMAGE'][i])
            b = int(cat['YWIN_IMAGE'][i])
            m = num.max(data[b-4:b+5, a-4:a+5])
        except:
            # peak could not be measured (e.g. source too close to the image edge); skip it
            continue
        dist = num.sort(((cat['XWIN_IMAGE']-cat['XWIN_IMAGE'][i])**2 +
                         (cat['YWIN_IMAGE']-cat['YWIN_IMAGE'][i])**2)**0.5)
        d = dist[1]
        if cat['FLAGS'][i] == 0 and d > 30 and m < 70000:
            good.append(i)
    good = num.array(good)
    outcat = {}
    for i in cat:
        outcat[i] = cat[i][good]
    return outcat
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Get the image this tutorial assumes you have. If wget fails, you are likely on a Mac and should just download the file manually.
inputFile = 'Polonskaya.fits'
if not path.isfile(inputFile):
    os.system('wget -O Polonskaya.fits http://www.canfar.phys.uvic.ca/vospace/nodes/fraserw/Polonskaya.fits?view=data')
else:
    print("We already have the file.")
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
First load the FITS image and get out the header, data, and exposure time.
with pyf.open(inputFile) as han:
    data = han[0].data
    header = han[0].header
    EXPTIME = header['EXPTIME']
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Next run SExtractor on the image, and use trimCatalog to create a trimmed-down list of isolated sources. makeParFiles handles the creation of all the SExtractor files, including the .sex file (which we call example.sex), default.conv, and the parameter file, which is saved as def.param. runSex creates example.cat, which is read by getCatalog. getCatalog takes as input the catalog name and the parameter file "def.param". The parameters actually used by psfStarChooser and psf.genLookupTable are XWIN_IMAGE, YWIN_IMAGE, FLUX_AUTO, and FLUXERR_AUTO, which are the x,y coordinates, the flux, and the flux uncertainty estimate, respectively. The latter two are used in the SNR cut that psfStarChooser makes.
scamp.makeParFiles.writeSex('example.sex',
                            minArea=3.,
                            threshold=5.,
                            zpt=27.8,
                            aperture=20.,
                            min_radius=2.0,
                            catalogType='FITS_LDAC',
                            saturate=55000)
scamp.makeParFiles.writeConv()
scamp.makeParFiles.writeParam(numAps=1)  # numAps is the number of apertures that you want to use. Here we use 1

scamp.runSex('example.sex', inputFile, options={'CATALOG_NAME': 'example.cat'}, verbose=False)
catalog = trimCatalog(scamp.getCatalog('example.cat', paramFile='def.param'))
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Finally, find the source closest to (811, 4005), which is the bright asteroid 2006 Polonskaya, and set its rate and angle of motion. The rate and angle were found from JPL Horizons; the small increase in angle accounts for the slight rotation of the image. We apply a distance sort to the catalog to find the correct entry and store the source centroid in (xt, yt). rate and angle are the rate and angle of trailing, in "/hr and degrees.
dist = ((catalog['XWIN_IMAGE']-811)**2 + (catalog['YWIN_IMAGE']-4005)**2)**0.5
args = num.argsort(dist)
xt = catalog['XWIN_IMAGE'][args][0]
yt = catalog['YWIN_IMAGE'][args][0]

rate = 18.4588       # "/hr
angle = 31.11 + 1.1  # degrees counter clockwise from horizontal, towards the right
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now use psfStarChooser to select the PSF stars. The first and second parameters to starChooser are the fitting box width in pixels and the minimum SNR required for a star to be considered as a potential PSF star. Optional but important inputs are autoTrim and noVisualSelection. The former, when True, uses bgFinder.fraserMode to attempt to determine which FWHM corresponds to actual stars, and rejects all sources with FWHM outside +-0.5 pixels of the modal value. noVisualSelection determines whether manual input is required; when set to False, all stars are considered. Until you know the software, I suggest you use noVisualSelection=True for manual selection, and autoTrim=False to see all sources in the plot window.

For each star provided to psfStarChooser, it will print a line to screen with x, y and the best-fit alpha, beta, and FWHM of the Moffat profile fit. Then psfStarChooser will pop up a multipanel window. Top left: histogram of fit chi values. Top right: chi vs. FWHM for each fitted source. Middle right: histogram of FWHM. Bottom right: image display of the currently selected source. Bottom left: radial profiles of all sources displayed in the top-right scatter plot.

The point of this window is to select only good stars for PSF generation, done by zooming to the good sources and rejecting those that are bad. Use the zoom tool to select the region containing the stars; in this image, that's a cluster at FWHM~3.5 pixels. Left and right clicks will select a source, which is then surrounded by a diamond, with its radial profile displayed bottom left and the actual image bottom right. Right click will toggle between accepted and rejected source (blue and red respectively). Keyboard functionality is now also implemented: use the left/right arrow keys (or a/d) to cycle through each source, and the up/down keys (or w/d) to mark a source as rejected (red) or accepted (blue). This is probably the fastest way to cycle through sources. Note that for some Mac Python installs, key presses won't be recognized inside a pylab window; to solve this, invoke your TRIPPy script with pythonw instead of python.

When the window is closed, only those sources shown as blue points and within the zoom of the top-right plot will be used to generate the PSF. The array goodFits is returned for convenience and contains the Moffat fit details of each accepted source; each entry is [FWHM, chi, alpha, beta, x, y, local background value]. The array goodMeds is just the median of goodFits, and provides the median Moffat alpha and beta of the selected stars.

Notes on a few starChooser options:
- bgRadius is the radius outside of which the image background level is sampled. The fitting is relatively insensitive to this value; however, if you happen to know the approximate FWHM, the best fitting results are had with bgRadius >~ 3x the FWHM in pixels.
- ftol is the tolerance parameter passed to the scipy least-squares fitter. Increasing this number can result in dramatic performance improvements. The default is 1.4e-8 to provide an extremely accurate fit; good-enough fits can be had with 1.e-7 or even 1.e-6 if one has a need for speed.
- repFact defaults to 5. If you want to run faster but still preserve most accuracy in the fitting procedure, use repFact=3.
- quickFit=True provides the fastest Moffat fitting. The speed improvement over quickFit=False is dramatic, but results in slightly less accurate Moffat fit parameters. For the majority of use cases, where the number of good PSF stars is more than a few, the degradation in PSF accuracy will not be appreciable because a lookup table is used, but the user should confirm this by comparing PSFs generated in both circumstances.
- printStarInfo=True displays an inset in the starChooser plot showing the parameters of the selected source, such as alpha, beta, and FWHM, among others.
starChooser = psfStarChooser.starChooser(data,
                                         catalog['XWIN_IMAGE'], catalog['YWIN_IMAGE'],
                                         catalog['FLUX_AUTO'], catalog['FLUXERR_AUTO'])
(goodFits, goodMeds, goodSTDs) = starChooser(30, 200,
                                             noVisualSelection=False, autoTrim=True,
                                             bgRadius=15, quickFit=False,
                                             printStarInfo=True,
                                             repFact=5, ftol=1.49012e-08)
print(goodFits)
print(goodMeds)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Generate the PSF. We want a 61-pixel-wide PSF, adopt a repFact of 10, and use the median star fits chosen above. Always use odd values for the dimensions; even values (e.g. 60 instead of 61) result in off-centred lookup tables. repFacts of 5 and 10 have been tested thoroughly: larger is pointless, smaller is inaccurate; 5 is faster than 10, 10 is more accurate than 5. The PSF has to be wide/tall enough to handle both the trailing length and the seeing disk. For Polonskaya the trailing dominates, at ~19"/hr * 480 s / 3600 / 0.185 "/pix = 14 pixels, so choose something a few times larger. The full PSF is created by instantiating the class and then running both genLookupTable and genPSF.
goodPSF = psf.modelPSF(num.arange(61), num.arange(61),
                       alpha=goodMeds[2], beta=goodMeds[3], repFact=10)
goodPSF.genLookupTable(data, goodFits[:, 4], goodFits[:, 5], verbose=False)
fwhm = goodPSF.FWHM()                         # this is the FWHM with lookup table included
fwhm = goodPSF.FWHM(fromMoffatProfile=True)   # this is the pure moffat FWHM
print("Full width at half maximum {:5.3f} (in pix).".format(fwhm))

zscale = ZScaleInterval()
(z1, z2) = zscale.get_limits(goodPSF.lookupTable)
normer = interval.ManualInterval(z1, z2)
pyl.imshow(normer(goodPSF.lookupTable))
pyl.show()
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
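As a sanity check on the PSF width choice, here is a minimal sketch of the trailing-length arithmetic quoted above. The 480 s exposure time is an assumption for illustration only; in the tutorial EXPTIME is read from the image header.

rate = 18.4588          # rate of motion in "/hr (from above)
pixScale = 0.185        # plate scale in "/pix
EXPTIME = 480.0         # exposure time in seconds (assumed here)
trailLength = rate * (EXPTIME / 3600.0) / pixScale
print('Trail length ~ {:.1f} pix'.format(trailLength))  # ~13-14 pix, so a 61-pixel-wide PSF is comfortable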
Now generate the TSF, which we call the line/long PSF interchangeably throughout the code. Rate is in units of length/time and pixScale is in units of length/pixel; time and length are in units of your choice, but sanity suggests arcseconds and hours, i.e. rate in "/hr and pixScale in "/pix. Angle is in degrees counter clockwise from horizontal, between +-90 degrees. This can be rerun to create a TSF with a different rate/angle of motion, though keep in mind that the psf class only holds one long PSF (one rate/angle) at any given time.
goodPSF.line(rate,angle,EXPTIME/3600.,pixScale=0.185,useLookupTable=True)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now calculate aperture corrections for the PSF and TSF, and store the values at r=1.4*FWHM. Note that the precision of the aperture correction depends lightly on the sampling used by the compute functions; 10 samples is generally enough to preserve 1% precision in the roundAperCorr() and lineAperCorr() functions, which use linear interpolation to get the value one actually desires. NOTE: set useLookupTable=False if one wants to calculate from the Moffat profile alone; this is generally not accurate for small apertures, however.
goodPSF.computeRoundAperCorrFromPSF(psf.extent(0.8*fwhm, 4*fwhm, 10),
                                    display=False, displayAperture=False,
                                    useLookupTable=True)
roundAperCorr = goodPSF.roundAperCorr(1.4*fwhm)

goodPSF.computeLineAperCorrFromTSF(psf.extent(0.1*fwhm, 4*fwhm, 10),
                                   l=(EXPTIME/3600.)*rate/0.185, a=angle,
                                   display=False, displayAperture=False)
lineAperCorr = goodPSF.lineAperCorr(1.4*fwhm)

print(lineAperCorr, roundAperCorr)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Store the PSF. In TRIPPy v1.0 we introduced a new PSF save format which decreases the storage requirements by roughly half, at the cost of increased CPU time when restoring the stored PSF. The difference is that the Moffat component of the PSF was originally saved in the FITS file's first extension; this is no longer saved, as it is quick to recalculate. The default behaviour is the old PSF format, but the new format can be selected with psfV2=True as shown below.
goodPSF.psfStore('psf.fits', psfV2=True)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
If we've already done the above once, we can skip doing it again by restoring the previously constructed PSF with the following (commented-out) code.
#goodPSF = psf.modelPSF(restore='psf.fits')
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
And we could generate a new line PSF by calling .line again with a new rate and angle.
#goodPSF.line(new_rate,new_angle,EXPTIME/3600.,pixScale=0.185,useLookupTable=True)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now let's do some pill aperture photometry. Instantiate the class, then call the object you created to get photometry of Polonskaya. Again assume repFact=10. pillPhot takes as input the same coordinates as output by SExtractor. The first example is of a round star whose coordinates I have manually taken from above; the second example is for the asteroid itself.

New feature! The input radii can either be singletons, as in the example below, or a numpy array of radii. If photometry of the same source using multiple radii is needed, the numpy array is much, much faster than passing individual singletons.

enableBGselection=True will cause a pop-up display of the source, in which one can zoom to a section with no background source. The default background selection technique is "smart"; see the bgFinder documentation for what that means. If you want to change this away from 'fraserMode', take a look at the options in bgFinder. Use display=True to see the image subsection.

r is the radius of the pill, l is the length, and a is the angle. skyRadius is the radius of a larger pill aperture: pixels inside this larger aperture but outside the smaller aperture are ignored, and anything outside the larger pill but inside +-width is used for background estimation. trimBGhighPix is mostly unimportant when mode=smart, but if you want to use a mean or median for some reason, this value is used to reject pixels more than trimBGhighPix standard deviations above the mean of the cutout.
# initiate the pillPhot object
phot = pill.pillPhot(data, repFact=10)

# get photometry, assume ZPT=26.0
# enableBGselection=True allows you to zoom in on a good background region in the aperture display window
# trimBGhighPix is a sigma cut to get rid of the cosmic rays. They get marked as blue in the display window
# background is selected inside the box and outside the skyRadius value
# mode is the background mode selection. Options are median, mean, histMode (JJ's jjkmode technique),
# fraserMode (ask me about it), gaussFit, and "smart". Smart does a gaussian fit first, and if the gaussian
# fit value is discrepant compared to the expectation from the background std, it resorts to the fraserMode.
# "smart" seems quite robust to nearby bright sources

# example of a round source
phot(goodFits[0][4], goodFits[0][5], radius=3.09*1.1, l=0.0, a=0.0,
     skyRadius=4*3.09, width=6*3.09,
     zpt=26.0, exptime=EXPTIME, enableBGSelection=True, display=True,
     backupMode="fraserMode", trimBGHighPix=3.)

# example of a trailed source
phot(xt, yt, radius=fwhm*1.4, l=(EXPTIME/3600.)*rate/0.185, a=angle,
     skyRadius=4*fwhm, width=6*fwhm,
     zpt=26.0, exptime=EXPTIME, enableBGSelection=True, display=True,
     backupMode="smart", trimBGHighPix=3.)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
The SNR function calculates the SNR of the aperture, as well as providing an estimate of the magnitude/flux uncertainties. Select useBGstd=True if you wish to use the background noise level instead of the square root of the background level in your uncertainty estimate. Note: currently this uncertainty estimate is approximate, good to a few percent; future improvements will make it a bit more accurate. If the photometry radius was an array, then so are the products created by the SNR function. verbose=True puts some nice terminal output in your face. These values can be accessed with their internal names.
phot.SNR(verbose=True)

# get those values
print(phot.magnitude)
print(phot.dmagnitude)
print(phot.sourceFlux)
print(phot.snr)
print(phot.bg)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Let's get aperture corrections measured directly from a star.
phot.computeRoundAperCorrFromSource(goodFits[0, 4], goodFits[0, 5],
                                    num.linspace(1*fwhm, 4*fwhm, 10),
                                    skyRadius=5*fwhm, width=6*fwhm,
                                    displayAperture=False, display=True)
print('Round aperture correction for a 1.4xFWHM aperture is {:.3f}.'.format(phot.roundAperCorr(1.4*fwhm)))
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Finally, let's do some PSF source subtraction. This is only possible with emcee and SExtractor installed. First get the cutout, which makes everything faster later, and remove the background. This also provides an example of how to use the zscale functionality now built into TRIPPy and astropy.visualization to display an astronomical image with zscale scaling.
Data = data[int(yt)-200:int(yt)+200, int(xt)-200:int(xt)+200] - phot.bg

zscale = ZScaleInterval()
(z1, z2) = zscale.get_limits(Data)
normer = interval.ManualInterval(z1, z2)

pyl.imshow(normer(Data))
pyl.show()
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now instantiate the MCMCfitter class and perform the fit. verbose=False will not print anything to the terminal; setting it to True will dump the result of each step, which is only a good idea if you insist on seeing what's happening (do you trust black boxes?). Set useLinePSF to True if you are fitting a trailed source, False for a point source. Set useErrorMap to True if you want to use an estimate of the Poisson noise in each pixel during the fit; this produces honest confidence ranges. I personally like nWalkers=nBurn=nStep=40. To get a reasonable fit, that's overkill, but to get the best, your mileage will vary. This will take a while: ~1 minute on a modern i5 processor, much longer if your computer is a few years old. You can reduce nWalkers, nBurn, and nStep to ~10 each if you are impatient, which will drop the run time by ~4x.
fitter = MCMCfit.MCMCfitter(goodPSF, Data)
fitter.fitWithModelPSF(200+xt-int(xt)-1, 200+yt-int(yt)-1,
                       m_in=1000., fitWidth=10,
                       nWalkers=20, nBurn=20, nStep=20,
                       bg=phot.bg, useLinePSF=True,
                       verbose=False, useErrorMap=False)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now get the fit results, including the best fit and the confidence region for the input confidence level; 0.67 (roughly 1-sigma) is shown.
(fitPars, fitRange) = fitter.fitResults(0.67)
print(fitPars)
print(fitRange)
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Finally, let's produce the best-fit model image and perform a subtraction. plant will plant a fake source with the given input x, y, amplitude into the input data. If returnModel=True, no source is planted; instead, the model image that would have been planted is returned. remove does the opposite of plant given input data (it actually just calls plant).
modelImage = goodPSF.plant(fitPars[0], fitPars[1], fitPars[2], Data,
                           addNoise=False, useLinePSF=True, returnModel=True)
pyl.imshow(normer(modelImage))
pyl.show()
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
Now show the image with the model removed, for comparison with the original displayed above.
removed = goodPSF.remove(fitPars[0], fitPars[1], fitPars[2], Data, useLinePSF=True)
pyl.imshow(normer(removed))
pyl.show()
tutorial/trippytutorial.ipynb
fraserw/PyMOP
gpl-2.0
What is a shapefile? A shapefile contains spatial information in a particular format and is used commonly in GIS applications. It typically contains information like the polygons describing counties, countries, or other political boundaries; lakes, rivers, or bays; or land and coastline. A shapefile record has a geometry, which contains the points that make up the objects, and attributes, which store information like the name of the record. Shapefiles are commonly available online through local or federal agencies for geometric data on public lands and waterways.

Read and examine records from Natural Earth

We saw in the maps notebook how easy it is to access shapefiles through Natural Earth and cartopy. Here we go into more detail. We can read in a dataset from Natural Earth with the following lines.

Note: if we didn't re-read this each time this cell was run, we could only run through the records once. Once the states have been iterated over, the pointer is at the end of them and there are none left to show. This is like reading all of the lines of a file and reaching the end.
# how we tell cartopy which data we want, from the list at the end of the maps notebook
shapename = 'admin_1_states_provinces_lakes_shp'

# Set up reader for this file
states_shp = shpreader.natural_earth(category='cultural', resolution='110m', name=shapename)
reader = shpreader.Reader(states_shp)

# Read in the data from the file into the "states" generator which we can iterate/loop over
states = reader.records()
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Information about the states is in variable states and is a generator. Without going into too much detail about generators, they are used in loops and we can see two ways to access the individual records (or states in this case) in the next few cells. Let's look at a few of the states by looking at the generator as a list:
list(states)[:2]
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Note Each time you access the states, you will need to rerun the cell above that reads in the records in the first place. Or in its natural state, we can step through the records of the generator using next after rereading in the records. The following cell shows the first record, which contains a single state.
next(states)
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Now the next.
next(states)
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
We can save one to a variable name so that we can examine it more carefully:
state = next(states)
state
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
We are seeing the attributes of the record, unique to this file, which we can access more specifically as follows:
state.attributes
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
... and then each attribute individually as in a dictionary:
state.attributes['name']
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
We can also access the geometry of the record:
state.geometry
state.geometry.centroid.xy  # this is in lon/lat
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
and properties of the geometry like the area and centroid location:
state.geometry.area # what are the units of this area?
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
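The area above is in square degrees, since the geometry is in lon/lat. As a rough illustration (not part of the original notebook), one could project the geometry to an equal-area projection before measuring, so the result comes out in square metres; project_geometry is the same cartopy method used later in this notebook.

# Hypothetical illustration: project to an equal-area CRS before measuring area,
# so the result is in m**2 rather than square degrees.
import cartopy

pc = cartopy.crs.PlateCarree()
aea = cartopy.crs.AlbersEqualArea()           # an equal-area projection
state_aea = aea.project_geometry(state.geometry, pc)
print(state_aea.area)                         # approximate area in m**2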
Pull out specific records

Find states that start with "A":
pc = cartopy.crs.PlateCarree()

# how we tell cartopy which data we want, from the list at the end of the maps notebook
shapename = 'admin_1_states_provinces_lakes_shp'

# Set up reader for this file
states_shp = shpreader.natural_earth(category='cultural', resolution='110m', name=shapename)
reader = shpreader.Reader(states_shp)

# Read in the data from the file into the "states" generator which we can iterate/loop over
states = reader.records()

Astates = []  # initialize list to save states that start with "A"

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=cartopy.crs.Mercator())
ax.set_extent([-170, -80, 20, 75], pc)

for state in states:
    if state.attributes['name'][0] == 'A':
        print(state.attributes['name'])
        ax.add_geometries([state.geometry], pc, facecolor='k', alpha=0.4)
        # save state
        Astates.append(state)
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
How could you change this loop to check for states in a specific region of the country? (One possible approach is sketched after the next cell.)

Transforming geometry between projections

Shapefiles are often in geographic coordinates (lon/lat), and they come out of Natural Earth as lon/lat. Here we change a state's projection from PlateCarree (pc) to LambertConformal. We use the project_geometry method of the projection we want to transform to (lc in this case), and pass the current projection of the shape into the method (pc in this case).
state.geometry  # we can see the shape in PlateCarree

lc = cartopy.crs.LambertConformal()
statelc = lc.project_geometry(state.geometry, cartopy.crs.PlateCarree())
statelc  # this is now the geometry of the record only, without attributes
         # the shape has changed in the new projection
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
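Returning to the question posed above about restricting the loop to one region of the country, here is a minimal sketch. The longitude/latitude bounds below are illustrative, not from the notebook; the idea is simply to test each record's centroid against a bounding box.

# Illustrative only: keep states whose centroid falls in a rough "western" box.
lon_min, lon_max = -125, -100   # hypothetical bounds
lat_min, lat_max = 30, 50

states = shpreader.Reader(states_shp).records()   # re-read, since the generator was consumed above
western = []
for state in states:
    lon, lat = state.geometry.centroid.x, state.geometry.centroid.y
    if lon_min < lon < lon_max and lat_min < lat < lat_max:
        western.append(state.attributes['name'])
print(western)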
Reading your own shapes and using cartopy

You can read in shapefiles outside of the Natural Earth dataset and use them on maps with cartopy. Here we look at shipping lanes in the northwest Gulf of Mexico. You can get to the shapes or polygons themselves two different ways using cartopy. The first uses the feature interface that we've been using (with add_feature), but limits our ability to access attributes of the files. The second gives more access.

1st approach for using a generic shapefile:

We start with a map:
proj = cartopy.crs.LambertConformal()
pc = cartopy.crs.PlateCarree()

land_10m = cartopy.feature.NaturalEarthFeature('physical', 'land', '10m', edgecolor='face')

fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=proj)
ax.set_extent([-98, -87, 25, 31], pc)
ax.add_feature(land_10m, facecolor='0.8')
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
We then set up to read in shipping lane data, which is in the data directory:
fname = '../data/fairway/fairway.shp'
shipping_lanes = cartopy.feature.ShapelyFeature(shpreader.Reader(fname).geometries(),
                                                cartopy.crs.PlateCarree(), facecolor='none')
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Now we can just add the shipping lanes onto our map!
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=proj)
ax.set_extent([-98, -87, 25, 31], cartopy.crs.PlateCarree())
ax.add_feature(land_10m, facecolor='0.8')

# shipping lanes
ax.add_feature(shipping_lanes, edgecolor='r', linewidth=0.5)
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
2nd approach for using a generic shapefile
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=proj)
ax.set_extent([-98, -87, 25, 31], cartopy.crs.PlateCarree())
ax.add_feature(land_10m, facecolor='0.8')

fname = '../data/fairway/fairway.shp'
ax.add_geometries(cartopy.io.shapereader.Reader(fname).geometries(), pc, edgecolor='darkcyan')
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Great Circle Distance

How do you find an airplane's flight path? The shortest line between two places on earth is not necessarily a straight line in the projection you are using. The shortest distance is called the Great Circle distance, and it is the shortest distance between two places on a sphere. For example, here is the shortest path between Boston and Tokyo: it is a straight line in this rather globe-like projection because the projection preserves this property. However, this link shows the flight path in a different projection. Not so straight anymore.

Here are previously-saved latitude and longitude points along the great circle line between the LA and Newark airports (calculated using the pyproj package, which is great but beyond the scope of this notebook). In particular, the LA and Newark airports have the following coordinates and are in the first and last elements of the two arrays.

LAX: 33.9425° N, 118.4081° W
EWR: 40.6925° N, 74.1686° W
lons = [-118.4081, -116.53656281803954, -114.63494404602989, -112.70342143546311,
        -110.74234511851722, -108.75224911337924, -106.73386144433508, -104.6881124356053,
        -102.6161407277617, -100.51929657411526, -98.3991420049751, -96.25744750245255,
        -94.09618490844686, -91.91751639275596, -89.72377943401308, -87.51746790832203,
        -85.30120953200326, -83.07774005710772, -80.84987476165341, -78.62047790110475,
        -76.39243088444343, -74.1686]
lats = [33.9425, 34.62185468395183, 35.27195983702588, 35.89163680795418,
        36.47971217805657, 37.03502459436787, 37.5564322473648, 38.042820934293715,
        38.493112624072936, 38.9062744137114, 39.281327740305926, 39.61735768834621,
        39.9135222108212, 40.169061066104604, 40.38330426236194, 40.55567979862256,
        40.68572049769913, 40.773069741323866, 40.81748594212188, 40.818845619619054,
        40.77714498701483, 40.6925]

fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=cartopy.crs.Mercator())
ax.set_extent([-128, -60, 24, 50], cartopy.crs.PlateCarree())
ax.add_feature(cartopy.feature.LAND, facecolor='0.9')
ax.add_feature(cartopy.feature.OCEAN, facecolor='w')

# add the great circle path
ax.plot(lons, lats, transform=cartopy.crs.PlateCarree())
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
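The waypoints above were described as having been computed with pyproj. As an aside, here is a minimal sketch of how such great-circle points could be generated, assuming pyproj is installed; the variable names (gc_lons, gc_lats) are illustrative and are not used elsewhere in this notebook.

# Hypothetical sketch: sample points along the great circle between LAX and EWR.
from pyproj import Geod

geod = Geod(ellps='WGS84')
lax = (-118.4081, 33.9425)   # lon, lat
ewr = (-74.1686, 40.6925)

# npts returns intermediate (lon, lat) tuples, excluding the two endpoints themselves
mid = geod.npts(lax[0], lax[1], ewr[0], ewr[1], 20)
gc_lons = [lax[0]] + [p[0] for p in mid] + [ewr[0]]
gc_lats = [lax[1]] + [p[1] for p in mid] + [ewr[1]]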
Make your own Shape from points

You can create your own Shape geometry from coordinate locations or x,y points, so that you can interact with it in a similar manner as from a shapefile. Once you have a Shape, you can change projections and look at geometric properties of the Shape, as we did above for a single state.
# use lons and lats of the great circle path from above
line = shapely.geometry.LineString(zip(lons, lats))
line
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
We can look at properties like the length of the line, though keep in mind that any properties will be calculated in the coordinate system being used. In this case the line is in geographic coordinates, so the length is in degrees of longitude/latitude, not in meters.
line.length
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
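For comparison with the projection exercise that follows, here is a minimal sketch of getting a geodesic length in metres directly with pyproj (assuming a reasonably recent pyproj is available); this is an aside, not part of the original notebook.

# Hypothetical sketch: geodesic length of the LAX-EWR line in metres.
from pyproj import Geod

geod = Geod(ellps='WGS84')
length_m = geod.geometry_length(line)   # 'line' is the shapely LineString built above
print(length_m / 1000.0, 'km')          # roughly the ~4,000 km LAX-EWR great-circle distance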
Exercise

Convert the line between these two cities to another projection, calculate the length, and compare with the actual distance. Which projection should you use for this calculation and why?

Other shape options include: Polygon, LineString, MultiLineString, MultiPoint, MultiPolygon, and Point. Some basic information about working with shapes separately from maps and shapefiles is available in notebook ST_shapes.ipynb.

States Flown Over

Consider the following: what states do you travel over when you fly from LA (airport code LAX) to NYC (airport code EWR)? First, a plot of the problem:
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=cartopy.crs.Mercator())
ax.set_extent([-128, -60, 24, 50], cartopy.crs.PlateCarree())
ax.add_feature(cartopy.feature.LAND, facecolor='0.9')
ax.add_feature(cartopy.feature.OCEAN, facecolor='w')

# add states
# can plot states like this, but doesn't allow access to metadata
shapename = 'admin_1_states_provinces_lakes_shp'
states = cartopy.feature.NaturalEarthFeature(category='cultural', scale='110m',
                                             facecolor='none', name=shapename)
ax.add_feature(states, edgecolor='gray')

# add end points
ax.plot([lons[0], lons[-1]], [lats[0], lats[-1]], 'ro', transform=pc)

# add the flight path as a shape
ax.add_geometries([line], pc, facecolor='none', edgecolor='k')
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Shape intersections

An easy way to find which states the flight path crosses is to look for intersections of the Shapes.
# Set up reader for this file
states_shp = shpreader.natural_earth(category='cultural', resolution='110m', name=shapename)
reader = shpreader.Reader(states_shp)

# Read in the data from the file into the "states" generator which we can iterate over
states = reader.records()
# Note that if we didn't re-read this each time this cell was run, we could only run it once.
# Once the states have been iterated over, the pointer is at the end of them and there are
# none left to show. This is like reading all of the lines of a file and reaching the end.

# Remake map here
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection=cartopy.crs.Mercator())
ax.set_extent([-128, -60, 24, 50], cartopy.crs.PlateCarree())
ax.add_feature(cartopy.feature.LAND, facecolor='0.9')
ax.add_feature(cartopy.feature.OCEAN, facecolor='w')

# add end points
ax.plot([lons[0], lons[-1]], [lats[0], lats[-1]], 'ro', transform=pc)

# add the flight path as a shape
ax.add_geometries([line], pc, facecolor='none', edgecolor='k')

# Loop through states and see if they intersect flight path
# deal with shapes differently if want to dig into them more
visible_states = []  # initialize for storing states
for state in states:
    # pick a default color for the land with a black outline,
    # this will change if the flight intersects with a state
    facecolor = '0.9'
    edgecolor = 'black'

    if state.geometry.intersects(line):
        facecolor = 'red'
        # also save to list if intersects
        visible_states.append(state.attributes['name'])

    ax.add_geometries([state.geometry], pc, facecolor=facecolor, edgecolor=edgecolor, alpha=0.4)

print(visible_states)
materials/7_shapefiles.ipynb
hetland/python4geosciences
mit
Basic interact

At the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that returns its only argument x.
def f(x): return x
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function parameter.
interact(f, x=10);
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
When you move the slider, the function is called, which prints the current value of x. If you pass True or False, interact will generate a checkbox:
interact(f, x=True);
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
If you pass a string, interact will generate a text area.
interact(f, x='Hi there!');
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
interact can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, interact also works with functions that have multiple arguments.
@interact(x=True, y=1.0)
def g(x, y):
    return (x, y)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Fixing arguments using fixed

There are times when you may want to explore a function using interact, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the fixed function.
def h(p, q): return (p, q)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
When we call interact, we pass fixed(20) for q to hold it fixed at a value of 20.
interact(h, p=5, q=fixed(20));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Notice that a slider is only produced for p, as the value of q is fixed.

Widget abbreviations

When you pass an integer-valued keyword argument of 10 (x=10) to interact, it generates an integer-valued slider control with a range of [-10, +3*10]. In this case, 10 is an abbreviation for an actual slider widget:

IntSlider(min=-10, max=30, step=1, value=10)

In fact, we can get the same result if we pass this IntSlider as the keyword argument for x:
interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
This example clarifies how interact processes its keyword arguments:

If the keyword argument is a Widget instance with a value attribute, that widget is used. Any widget with a value attribute can be used, even custom ones.
Otherwise, the value is treated as a widget abbreviation that is converted to a widget before it is used.

The following table gives an overview of different widget abbreviations:

<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>Checkbox</td></tr>
<tr><td>`'Hi there'`</td><td>Text</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
<tr><td>`['orange','apple']` or `{'one':1,'two':2}`</td><td>Dropdown</td></tr>
</table>

Note that a dropdown is used if a list or a dict is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range). You have seen how the checkbox and textarea widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.

If a 2-tuple of integers is passed (min,max), an integer-valued slider is produced with those minimum and maximum values (inclusive). In this case, the default step size of 1 is used.
interact(f, x=(0,4));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
If a 3-tuple of integers is passed (min,max,step), the step size can also be set.
interact(f, x=(0,8,2));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
A float-valued slider is produced if the elements of the tuples are floats. Here the minimum is 0.0, the maximum is 10.0 and step size is 0.1 (the default).
interact(f, x=(0.0,10.0));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
The step size can be changed by passing a third element in the tuple.
interact(f, x=(0.0,10.0,0.01));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.
@interact(x=(0.0, 20.0, 0.5))
def h(x=5.5):
    return x
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.
interact(f, x=['apples','oranges']);
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of (label, value) pairs.
interact(f, x=[('one', 10), ('two', 20)]);
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
interactive

In addition to interact, IPython provides another function, interactive, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls. Note that unlike interact, the return value of the function will not be displayed automatically, but you can display a value inside the function with IPython.display.display.

Here is a function that returns the sum of its two arguments and displays them. The display line may be omitted if you don't want to show the result of the function.
from IPython.display import display

def f(a, b):
    display(a + b)
    return a + b
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Unlike interact, interactive returns a Widget instance rather than immediately displaying the widget.
w = interactive(f, a=10, b=20)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
The widget is an interactive, a subclass of VBox, which is a container for other widgets.
type(w)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
The children of the interactive are two integer-valued sliders and an output widget, produced by the widget abbreviations above.
w.children
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
To actually display the widgets, you can use IPython's display function.
display(w)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
At this point, the UI controls work just like they would if interact had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by interactive also gives you access to the current keyword arguments and return value of the underlying Python function. Here are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.
w.kwargs
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Here is the current return value of the function.
w.result
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Disabling continuous updates

When interacting with long-running functions, realtime feedback is a burden instead of being helpful. See the following example:
def slow_function(i):
    print(int(i), list(x for x in range(int(i))
                       if str(x) == str(x)[::-1] and str(x**2) == str(x**2)[::-1]))
    return

%%time
slow_function(1e6)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:
from ipywidgets import FloatSlider

interact(slow_function, i=FloatSlider(min=1e5, max=1e7, step=1e5));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.

interact_manual

The interact_manual function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.
interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
continuous_update

If you are using slider widgets, you can set the continuous_update kwarg to False. continuous_update is a kwarg of slider widgets that restricts executions to mouse release events.
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
interactive_output

interactive_output provides additional flexibility: you can control how the UI elements are laid out. Unlike interact, interactive, and interact_manual, interactive_output does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, pass the widget to interactive_output, and have control over the widget and its layout.
a = widgets.IntSlider()
b = widgets.IntSlider()
c = widgets.IntSlider()
ui = widgets.HBox([a, b, c])

def f(a, b, c):
    print((a, b, c))

out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
display(ui, out)
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Arguments that are dependent on each other

Arguments that are dependent on each other can be expressed manually using observe. See the following example, where one variable is used to describe the bounds of another. For more information, please see the widget events example notebook.
x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)

def update_x_range(*args):
    x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')

def printer(x, y):
    print(x, y)

interact(printer, x=x_widget, y=y_widget);
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Flickering and jumping output

On occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated.
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np

def f(m, b):
    plt.figure(2)
    x = np.linspace(-10, 10, num=1000)
    plt.plot(x, m * x + b)
    plt.ylim(-5, 5)
    plt.show()

interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
notebooks/Using_Interact.ipynb
SamLau95/nbinteract
bsd-3-clause
Change of the initial variable that gives the survey year.
base.V0101 = base.V0101.astype("int")
base9.V0101 = base9.V0101.astype("int")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Definition of the regions and conversion into a category.
base.loc[(base.UF < 18), "REGIAO"] = "NORTE"
base.loc[(base.UF > 20) & (base.UF < 30), "REGIAO"] = "NORDESTE"
base.loc[(base.UF > 30) & (base.UF < 36), "REGIAO"] = "SUDESTE"
base.loc[(base.UF > 35) & (base.UF < 44), "REGIAO"] = "SUL"
base.loc[(base.UF > 43) & (base.UF < 54), "REGIAO"] = "CENTRO-OESTE"
base.REGIAO = base.REGIAO.astype("category")

base9.loc[(base9.UF < 18), "REGIAO"] = "NORTE"
base9.loc[(base9.UF > 20) & (base9.UF < 30), "REGIAO"] = "NORDESTE"
base9.loc[(base9.UF > 30) & (base9.UF < 36), "REGIAO"] = "SUDESTE"
base9.loc[(base9.UF > 35) & (base9.UF < 44), "REGIAO"] = "SUL"
base9.loc[(base9.UF > 43) & (base9.UF < 54), "REGIAO"] = "CENTRO-OESTE"
base9.REGIAO = base9.REGIAO.astype("category")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Split into rural and urban zones, the second variable of the analysis.
base.loc[(base.V4105 < 4), "ZONA"] = "Urbana"
base.loc[(base.V4105 > 3), "ZONA"] = "Rural"
base.ZONA = base.ZONA.astype("category")

base9.loc[(base9.V4105 < 4), "ZONA"] = "Urbana"
base9.loc[(base9.V4105 > 3), "ZONA"] = "Rural"
base9.ZONA = base9.ZONA.astype("category")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Creation of the food-insecurity variable: below, the variables (questions about food insecurity) are combined into a single one called "Insegurança_Alimentar" (food insecurity). The reason is that the 4 questions represent situations of difficulty in getting food, so a person is considered to have gone through food insecurity if at least one of the questions was answered "yes". The 4 question columns are also cast to the category type.
base.loc[(base.V2103 == 1) | (base.V2105 == 1) | (base.V2107 == 1) | (base.V2109 == 1), 'Insegurança_Alimentar'] = 'Sim'
base.loc[(base.V2103 == 3) & (base.V2105 == 3) & (base.V2107 == 3) & (base.V2109 == 3), 'Insegurança_Alimentar'] = 'Não'
base.V2103 = base.V2103.astype("category")
base.V2105 = base.V2105.astype("category")
base.V2107 = base.V2107.astype("category")
base.V2109 = base.V2109.astype("category")

base9.loc[(base9.V2103 == 1) | (base9.V2105 == 1) | (base9.V2107 == 1) | (base9.V2109 == 1), 'Insegurança_Alimentar'] = 'Sim'
base9.loc[(base9.V2103 == 3) & (base9.V2105 == 3) & (base9.V2107 == 3) & (base9.V2109 == 3), 'Insegurança_Alimentar'] = 'Não'
base9.V2103 = base9.V2103.astype("category")
base9.V2105 = base9.V2105.astype("category")
base9.V2107 = base9.V2107.astype("category")
base9.V2109 = base9.V2109.astype("category")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
CRIAÇÃO DO "PROBLEMA ALIMENTAR": EM SEQUÊNCIA HÁ MAIS 4 PERGUNTAS DESTINADAS APENAS ÀQUELES QUE APRESENTARAM INSEGURANÇA ALIMENTAR. PORTANTO UTILIZOU-SE O MESMO PROCESSO DO QUADRO ACIMA. ESSAS PERGUNTAS REFLETEM ALGUNS PROBLEMAS PELOS QUAIS AS PESSOAS PODERIAM TER PASSADO CASO RESPONDESSEM PELO MENOS UM SIM NAS 4 PERGUNTAS INICIAIS.
base.loc[(base.V2113 == 1) | (base.V2115 == 1) | (base.V2117 == 1) | (base.V2121 == 1), 'Problema_Alimentar'] = 'Sim'
base.loc[(base.V2113 == 3) & (base.V2115 == 3) & (base.V2117 == 3) & (base.V2121 == 3), 'Problema_Alimentar'] = 'Não'
base.V2113 = base.V2113.astype("category")
base.V2115 = base.V2115.astype("category")
base.V2117 = base.V2117.astype("category")
base.V2121 = base.V2121.astype("category")

base9.loc[(base9.V2111 == 1) | (base9.V2113 == 1) | (base9.V2115 == 1) | (base9.V2117 == 1) |
          (base9.V2119 == 1) | (base9.V2120 == 1) | (base9.V2121 == 1), 'Problema_Alimentar'] = 'Sim'
base9.loc[(base9.V2111 == 3) & (base9.V2113 == 3) & (base9.V2115 == 3) & (base9.V2117 == 3) &
          (base9.V2119 == 3) & (base9.V2120 == 3) & (base9.V2121 == 3), 'Problema_Alimentar'] = 'Não'
base9.V2113 = base9.V2113.astype("category")
base9.V2115 = base9.V2115.astype("category")
base9.V2117 = base9.V2117.astype("category")
base9.V2119 = base9.V2119.astype("category")  # fixed: the original assigned V2119 data to V2117
base9.V2120 = base9.V2120.astype("category")  # fixed: the original assigned V2120 data to V2121
base9.V2121 = base9.V2121.astype("category")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Initial filtering: rename the variable codes to the names of the variables of interest, then filter out people who did not answer (NaN) the 4 initial questions or income. Note that this filter is not applied to the "Problema_Alimentar" variable, because those without food insecurity were never asked those questions and data would be lost.
base = base.loc[:, ["V0101", "REGIAO", "ZONA", "V4614", 'Insegurança_Alimentar', "Problema_Alimentar"]]
base.columns = ["ANO", "REGIAO", "ZONA", "RENDA", 'Insegurança_Alimentar', "Problema_Alimentar"]
base = base.dropna(subset=["RENDA", "Insegurança_Alimentar"])
base
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
TABLE 1 - 2013
writer = pd.ExcelWriter('Tabela1-2013.xlsx', engine='xlsxwriter')
base.to_excel(writer, sheet_name="Projeto_1")
writer.save()

base9 = base9.loc[:, ["V0101", "REGIAO", "ZONA", "V4614", 'Insegurança_Alimentar', "Problema_Alimentar"]]
base9.columns = ["ANO", "REGIAO", "ZONA", "RENDA", 'Insegurança_Alimentar', "Problema_Alimentar"]
base9 = base9.dropna(subset=["RENDA", "Insegurança_Alimentar"])
base9
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
TABLE 1 - 2009
writer = pd.ExcelWriter('Tabela1-2009.xlsx', engine='xlsxwriter')
base9.to_excel(writer, sheet_name="Projeto_1")
writer.save()
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
First observation: the overall frequency of people who have gone through situations of food insecurity ("Sim"), to be followed by an analysis of the differences between regions and zones.
g1 = (base.Insegurança_Alimentar.value_counts(sort=False, normalize=True) * 100).round(decimals=1)
plot = g1.plot(kind='bar', title='DIFICULDADE ALIMENTAR 2013 (G1)', figsize=(5, 5), color=('b', 'g'))
print(g1, "\n")

g2 = (base9.Insegurança_Alimentar.value_counts(sort=False, normalize=True) * 100).round(decimals=1)
plot = g2.plot(kind='bar', title='DIFICULDADE ALIMENTAR 2009 (G2)', figsize=(5, 5), color=('b', 'g'))
print(g2, "\n")
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
A closer look at the regions: a frequency chart followed by a table of absolute values, which strengthens the analysis and helps check the coherence of the percentages.
tb1 = (pd.crosstab(base.REGIAO, base.Insegurança_Alimentar, margins=True,
                   rownames=["REGIÃO"], colnames=["Insegurança Alimentar"],
                   normalize='index') * 100).round(decimals=1)
plot = tb1.plot(kind="bar", title="Distribuição Regional de Insegurança Alimentar 2013 (G3)")

abs1 = pd.crosstab(base.REGIAO, base.Insegurança_Alimentar, margins=True,
                   rownames=['REGIÃO'], colnames=['INSEGURANÇA ALIMENTAR'])
abs1 = abs1.loc[['NORTE', 'NORDESTE', 'SUDESTE', 'SUL', 'CENTRO-OESTE']]
abs1
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
In this case there is clear agreement between the percentage and absolute figures: the North and Northeast regions show both the highest frequency and the largest number of people who have gone through food insecurity.
tb19 = (pd.crosstab(base9.REGIAO, base9.Insegurança_Alimentar, margins=True,
                    rownames=["REGIÃO"], colnames=["Insegurança Alimentar"],
                    normalize='index') * 100).round(decimals=1)
plot = tb19.plot(kind="bar", title="Distribuição Regional de Insegurança Alimentar 2009 (G4)")

abs19 = pd.crosstab(base9.REGIAO, base9.Insegurança_Alimentar, margins=True,
                    rownames=['REGIÃO'], colnames=['INSEGURANÇA ALIMENTAR'])
abs19 = abs19.loc[['NORTE', 'NORDESTE', 'SUDESTE', 'SUL', 'CENTRO-OESTE']]
abs19
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Observation of the situation in urban and rural zones: as in the cell above, an initial percentage chart followed by a table of absolute values that allows comparing the two zones.
tb2 = (pd.crosstab(base.ZONA, base.Insegurança_Alimentar, margins=True,
                   rownames=["ZONA"], colnames=["Insegurança Alimentar"],
                   normalize='index') * 100).round(decimals=1)
plot = tb2.plot(kind="bar", title="Distribuição em Zonas de Insegurança Alimentar 2013 (G5)")

abs2 = pd.crosstab(base.ZONA, base.Insegurança_Alimentar, margins=True,
                   rownames=['ZONA'], colnames=['INSEGURANÇA ALIMENTAR'])
abs2 = abs2.loc[['Rural', 'Urbana']]
abs2

tb29 = (pd.crosstab(base9.ZONA, base9.Insegurança_Alimentar, margins=True,
                    rownames=["ZONA"], colnames=["Insegurança Alimentar"],
                    normalize='index') * 100).round(decimals=1)
plot = tb29.plot(kind="bar", title="Distribuição em Zonas de Insegurança Alimentar 2009 (G6)")

abs29 = pd.crosstab(base9.ZONA, base9.Insegurança_Alimentar, margins=True,
                    rownames=['ZONA'], colnames=['INSEGURANÇA ALIMENTAR'])
abs29 = abs29.loc[['Rural', 'Urbana']]
abs29
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Cross-tabulation: a more detailed breakdown, with each region split by zone and the frequency of each. The goal of this chart is to show, in a single image, the notable differences between the territorial factors analysed, and thus to focus directly on the regions needed to answer the guiding question.
ct1 = (pd.crosstab([base.REGIAO, base.ZONA], base.Insegurança_Alimentar, normalize='index') * 100).round(decimals=1)
print(ct1, '\n')
plot = ct1.plot(kind='bar', title="Análise de Insegurança Alimentar 2013 (G7)")
ax = plt.subplot(111)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('Freq.Relativa (em %)')
plt.show()

ct2 = (pd.crosstab([base9.REGIAO, base9.ZONA], base9.Insegurança_Alimentar, normalize='index') * 100).round(decimals=1)
print(ct2, '\n')
plot = ct2.plot(kind='bar', title="Análise de Insegurança Alimentar 2009 (G8)")
ax = plt.subplot(111)
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.ylabel('Freq.Relativa (em %)')
plt.show()
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Analysis sequence for each year: looking at the last two charts, one can identify precisely the two regions with the largest disparity between urban and rural zones. For 2013 (first chart) the North and Northeast are the two regions that will be analysed to answer the project's guiding question; for 2009 they are the Centre-West and the Northeast. Quantitative analysis: observe how food insecurity behaves as a function of household income. The first histograms show the income distribution among those who answered at least one "yes" to the 4 initial questions and are therefore considered food insecure.
faixa = np.arange(0, 7350, 350)

# 2013 - NORTE
renda_N13 = base.RENDA[(base.Insegurança_Alimentar == 'Sim') & (base.REGIAO == "NORTE")]
t1 = (pd.cut(renda_N13, bins=faixa, right=False).value_counts(sort=False, normalize=True) * 100).round(decimals=1)
print(t1, "\n")
plot = renda_N13.plot.hist(bins=faixa, title="Histograma - Insegurança Alimentar - NORTE - 2013 (H1)",
                           weights=zeros_like(renda_N13) + 1. / renda_N13.size * 100,
                           figsize=(6, 6), alpha=0.5)
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# 2013 - NORDESTE
renda_NE13 = base.RENDA[(base.Insegurança_Alimentar == 'Sim') & (base.REGIAO == "NORDESTE")]
t2 = (pd.cut(renda_NE13, bins=faixa, right=False).value_counts(sort=False, normalize=True) * 100).round(decimals=1)
print(t2, "\n")
plot = renda_NE13.plot.hist(bins=faixa, title="Histograma - Insegurança Alimentar - NORDESTE - 2013 (H2)",
                            weights=zeros_like(renda_NE13) + 1. / renda_NE13.size * 100,
                            figsize=(6, 6), alpha=0.5, color="red")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# 2009 - CENTRO-OESTE (the original mixed base and base9 in the selection; base9 is used consistently here)
renda_CO09 = base9.RENDA[(base9.Insegurança_Alimentar == 'Sim') & (base9.REGIAO == "CENTRO-OESTE")]
t19 = (pd.cut(renda_CO09, bins=faixa, right=False).value_counts(sort=False, normalize=True) * 100).round(decimals=1)
print(t19, "\n")
plot = renda_CO09.plot.hist(bins=faixa, title="Histograma - Insegurança Alimentar - CENTRO-OESTE - 2009 (H3)",
                            weights=zeros_like(renda_CO09) + 1. / renda_CO09.size * 100,
                            figsize=(6, 6), alpha=0.5, color="chocolate")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# 2009 - NORDESTE
renda_NE09 = base9.RENDA[(base9.Insegurança_Alimentar == 'Sim') & (base9.REGIAO == "NORDESTE")]
t29 = (pd.cut(renda_NE09, bins=faixa, right=False).value_counts(sort=False, normalize=True) * 100).round(decimals=1)
print(t29, "\n")
plot = renda_NE09.plot.hist(bins=faixa, title="Histograma - Insegurança Alimentar - NORDESTE - 2009 (H4)",
                            weights=zeros_like(renda_NE09) + 1. / renda_NE09.size * 100,
                            figsize=(6, 6), alpha=0.5, color="darkslategray")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Initial analysis and new filtering: with the precise values shown above, one can see where income is most concentrated in each of the regions of interest, following the disparity seen earlier in the charts. From here on the analysis focuses only on those who went through food insecurity, opening up a new variable called "Problema_Alimentar" (food problem), based on questions about lack of food or restricted eating due to lack of money.
base = base[(base.Insegurança_Alimentar == "Sim")]
base
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
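As a quick sanity check on the filter above, a cross-tabulation shows how Problema_Alimentar splits by region among the food-insecure households that remain. This is a sketch using the column names already in this notebook; normalize='index' requires a reasonably recent pandas.

# Share of food-insecure households reporting a food problem, by region
print(pd.crosstab(base.REGIAO, base.Problema_Alimentar, normalize='index').round(3))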
TABLE 2 - 2013
writer = pd.ExcelWriter('Tabela2-2013.xlsx', engine='xlsxwriter')
base.to_excel(writer, sheet_name="Projeto_1")
writer.save()

base9 = base9[(base9.Insegurança_Alimentar=="Sim")]
base9
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
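Depending on the pandas version, the explicit writer.save() call above can be replaced by using the writer as a context manager, which closes the file automatically; a sketch:

with pd.ExcelWriter('Tabela2-2013.xlsx', engine='xlsxwriter') as writer:
    base.to_excel(writer, sheet_name="Projeto_1")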
TABLE 2 - 2009
writer = pd.ExcelWriter('Tabela2-2009.xlsx', engine='xlsxwriter')
base9.to_excel(writer, sheet_name="Projeto_1")
writer.save()
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
Characterisation of food problems: the next plots assess how the "food problem" variable behaves as a function of monthly household income and compare it with the distribution of "food insecurity", i.e. whether the distribution analysed earlier is broadly preserved in this new variable, which by construction depends on the initial one, "food insecurity".
# Histogram H5: food problem vs. income, North region, 2013
frenda3 = pd.cut(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")], bins=faixa, right=False)
t3 = (frenda3.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t3,"\n")
plot = base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")].plot.hist(
    bins=faixa, title="Problema Alimentar - NORTE - 2013 (H5)",
    weights=np.zeros_like(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")])+1./base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORTE")].size*100,
    figsize=(6, 6), alpha=0.5, color="purple")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# Histogram H6: food problem vs. income, Northeast region, 2013
frenda4 = pd.cut(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")], bins=faixa, right=False)
t4 = (frenda4.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t4,"\n")
plot = base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].plot.hist(
    bins=faixa, title="Problema Alimentar - NORDESTE - 2013 (H6)",
    weights=np.zeros_like(base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")])+1./base.RENDA[(base.Problema_Alimentar=='Sim')&(base.REGIAO=="NORDESTE")].size*100,
    figsize=(6, 6), alpha=0.5, color="darkgreen")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# Histogram H7: food problem vs. income, Center-West region, 2009
# (bug fix: the original filtered on base.REGIAO instead of base9.REGIAO)
frenda39 = pd.cut(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")], bins=faixa, right=False)
t39 = (frenda39.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t39,"\n")
plot = base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].plot.hist(
    bins=faixa, title="Problema Alimentar - CENTRO-OESTE - 2009 (H7)",
    weights=np.zeros_like(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")])+1./base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="CENTRO-OESTE")].size*100,
    figsize=(6, 6), alpha=0.5, color="black")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()

# Histogram H8: food problem vs. income, Northeast region, 2009
# (bug fix: the original table used CENTRO-OESTE on base while the plot below uses NORDESTE on base9)
frenda49 = pd.cut(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")], bins=faixa, right=False)
t49 = (frenda49.value_counts(sort=False, normalize=True)*100).round(decimals=1)
print(t49,"\n")
plot = base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].plot.hist(
    bins=faixa, title="Problema Alimentar - NORDESTE - 2009 (H8)",
    weights=np.zeros_like(base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")])+1./base9.RENDA[(base9.Problema_Alimentar=='Sim')&(base9.REGIAO=="NORDESTE")].size*100,
    figsize=(6, 6), alpha=0.5, color="orange")
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()
Projeto 1 - CD.ipynb
gabrielhpbc/CD
mit
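To make the comparison described above more direct, the relative-frequency tables for the two variables can be placed side by side for one region. This is a sketch that assumes t1 (food insecurity, North, 2013) and t3 (food problem, North, 2013) from the cells above are still in memory.

# Side-by-side comparison of the two relative-frequency distributions (North, 2013)
comparacao = pd.DataFrame({'Insegurança Alimentar (t1)': t1, 'Problema Alimentar (t3)': t3})
print(comparacao)
comparacao.plot.bar(figsize=(10, 5), alpha=0.6)
plt.ylabel('Frequência relativa (em %)')
plt.xlabel('Renda (em reais)')
plt.show()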
The outcome variable here is binary, so this might be treated in several ways. First, it might be possible to apply the normal approximation to the binomial distribution. In this case, the binomial distribution of counts is approximately $\mathcal{N}(np,\ np(1-p))$. There are a number of guidelines as to whether this is a suitable approximation (see Wikipedia for a list of such conditions), some of which include:

- n > 20 (or 30)
- np > 5 and n(1-p) > 5 (or 10)

But these conditions can be roughly summed up as not too small of a sample and an estimated proportion far enough from 0 and 1 that the distribution isn't overly skewed. If the normal approximation is reasonable, a z-test can be used, with the following standard error calculation: $$SE = \sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$$ where $$\hat{p}=\frac{n_1 p_1+n_2 p_2}{n_1+n_2}$$ giving $$z = \frac{p_1-p_2}{SE}$$
xb = sum(data[data.race=='b'].call)
nb = len(data[data.race=='b'])
xw = sum(data[data.race=='w'].call)
nw = len(data[data.race=='w'])

# Pooled proportion and standard error for the two-sample z-test
# (1./nb and 1./nw guard against integer division under Python 2)
pHat = (nb*(xb/nb) + nw*(xw/nw))/(nb+nw)
se = np.sqrt(pHat*(1-pHat)*(1./nb + 1./nw))
z = (xb/nb - xw/nw)/se
print "z-score:", round(z,3), "p =", round(stats.norm.sf(abs(z))*2,6)
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
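The rule-of-thumb conditions listed above can be checked directly from quantities already computed; a small sketch reusing nb, nw and pHat from the cell above:

# Expected counts under the pooled proportion; all should be comfortably above 5-10
print "black-sounding names:", round(nb*pHat,1), round(nb*(1-pHat),1)
print "white-sounding names:", round(nw*pHat,1), round(nw*(1-pHat),1)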
So, the difference in probability of a call-back is statistically significant here. Plotting the distribution for call-backs with black-sounding names, it looks fairly symmetrical and well-behaved, so it's quite likely that the normal approximation is fairly reasonable here.
pb = xb/nb
x = np.arange(110,210)
matplotlib.pyplot.vlines(x, 0, stats.binom.pmf(x, nb, pb))
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
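To back up the "fairly symmetrical and well-behaved" observation, the normal approximation can be drawn over the binomial pmf; a sketch reusing x, nb and pb from the cell above:

# Normal curve with the matching binomial mean and standard deviation
mu = nb*pb
sd = np.sqrt(nb*pb*(1-pb))
matplotlib.pyplot.vlines(x, 0, stats.binom.pmf(x, nb, pb))
matplotlib.pyplot.plot(x, stats.norm.pdf(x, mu, sd), color='red')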
Alternatives
Because the normal distribution is only an approximation, the assumptions don't always work out for a particular data set. There are several methods for calculating confidence intervals around the estimated proportion. For example, with a significance level of $\alpha$, the Jeffreys interval is defined as the $\frac{\alpha}{2}$ and $1-\frac{\alpha}{2}$ quantiles of a beta$(x+\frac{1}{2},\ n-x+\frac{1}{2})$ distribution. Using scipy:
intervalB = (stats.beta.ppf(0.025, xb+0.5, nb-xb+0.5), stats.beta.ppf(0.975, xb+0.5, nb-xb+0.5))
intervalW = (stats.beta.ppf(0.025, xw+0.5, nw-xw+0.5), stats.beta.ppf(0.975, xw+0.5, nw-xw+0.5))
print "Interval for black-sounding names: ", map(lambda x: round(x,3), intervalB)
print "Interval for white-sounding names: ", map(lambda x: round(x,3), intervalW)
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
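If statsmodels is installed, the same Jeffreys intervals can also be obtained from its proportion_confint helper; the results should closely match the scipy-based calculation above (a sketch):

from statsmodels.stats.proportion import proportion_confint
print "black-sounding:", proportion_confint(xb, nb, alpha=0.05, method='jeffreys')
print "white-sounding:", proportion_confint(xw, nw, alpha=0.05, method='jeffreys')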
The complete lack of overlap in the intervals here implies a significant difference with $p\lt 0.05$ (Cumming & Finch, 2005). Given that this particular interval can be interpreted as a Bayesian credible interval, this is a fairly comfortable conclusion. Calculating credible intervals using Markov Chain Monte Carlo: a slightly different method of calculating approximately the same thing (the beta distribution used above is the posterior distribution given the observations with a Jeffreys prior):
import pystan

modelCode = '''
data {
    int<lower=0> N;
    int<lower=1,upper=2> G[N];
    int<lower=0,upper=1> y[N];
}
parameters {
    real<lower=0,upper=1> theta[2];
}
model {
    # beta(0.5,0.5) prior
    theta ~ beta(0.5,0.5);
    # bernoulli likelihood
    # This could be modified to use a binomial with successes and counts instead
    for (i in 1:N)
        y[i] ~ bernoulli(theta[G[i]]);
}
generated quantities {
    real diff;
    // difference in proportions:
    diff <- theta[1]-theta[2];
}
'''

model = pystan.StanModel(model_code=modelCode)
dataDict = dict(N=len(data), G=np.where(data.race=='b',1,2), y=map(int,data.call))
fit = model.sampling(data=dataDict)
print fit

samples = fit.extract(permuted=True)
MCMCIntervalB = np.percentile(samples['theta'].transpose()[0], [2.5,97.5])
MCMCIntervalW = np.percentile(samples['theta'].transpose()[1], [2.5,97.5])
fit.plot().show()
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
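Because the beta prior is conjugate for the bernoulli likelihood, the same posterior can also be sampled directly with numpy, without compiling a Stan model; a lightweight sketch for comparison, reusing xb, nb, xw and nw from earlier cells:

# Direct draws from the conjugate beta posteriors
np.random.seed(42)
thetaB = np.random.beta(xb+0.5, nb-xb+0.5, size=4000)
thetaW = np.random.beta(xw+0.5, nw-xw+0.5, size=4000)
print map(lambda v: round(v,3), np.percentile(thetaB-thetaW, [2.5, 97.5]))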
Estimating rough 95% credible intervals:
print map(lambda x: round(x,3), MCMCIntervalB)
print map(lambda x: round(x,3), MCMCIntervalW)
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
So, this method gives a result that fits quite nicely with previous results, while allowing more flexible specification of priors. Interval for sampled differences in proportions:
print map(lambda x: round(x,3),np.percentile(samples['diff'],[2.5,97.5]))
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
And this interval does not include 0, so we're left fairly confident that black-sounding names get fewer call-backs, although the estimated differences in proportions are fairly small ("significant" in the everyday sense of "large" isn't really the right word here). Accounting for additional factors: a next step would be to check whether other factors influence the proportion of call-backs. This can be done using logistic regression, although there will be a limit to the complexity of the model to be fit, given that the proportion of call-backs is quite small, potentially leading to small cell counts and unstable estimates (one rule of thumb being that n > 30 per cell is reasonably safe).
data.columns

# The data is balanced by design, and this mostly isn't a problem for relatively simple models.
# For example:
pd.crosstab(data.computerskills, data.race)

import statsmodels.formula.api as smf
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
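The n > 30 per cell rule of thumb mentioned above can be checked with a slightly finer cross-tabulation before fitting anything; a sketch:

# Cell counts for race x computerskills, split by call-back outcome
print pd.crosstab([data.race, data.computerskills], data.call)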
Checking to see if computer skills have a significant effect on call-backs:
glm = smf.Logit.from_formula(formula="call~race+computerskills", data=data).fit()
glm.summary()
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
The effect might be described as marginal, but probably best not to over-interpret. But maybe the combination of race and computer skills makes a difference? Apparently not in this data (not even an improvement to the model log-likelihood or other measures of model fit):
glm2 = smf.Logit.from_formula(formula="call~race*computerskills", data=data).fit()
glm2.summary()
exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_2.ipynb
phasedchirp/Assorted-Data-Analysis
gpl-2.0
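The "no improvement in log-likelihood" claim can be quantified with a likelihood-ratio comparison of the two fitted models; a sketch reusing glm and glm2 from the cells above:

# Likelihood-ratio test for the interaction term; a large p-value means it adds nothing useful
lr = 2*(glm2.llf - glm.llf)
df_diff = glm2.df_model - glm.df_model
print "LR statistic:", round(lr,3), "p =", round(stats.chi2.sf(lr, df_diff),3)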
Corrupt known signal with point spread
The aim of this tutorial is to demonstrate how to put a known signal at desired locations in a :class:`mne.SourceEstimate` and then corrupt the signal with point-spread by applying a forward and inverse solution.
import os.path as op

import numpy as np
from mayavi import mlab

import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
from mne.simulation import simulate_stc, simulate_evoked
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
First, we set some parameters.
seed = 42

# parameters for inverse method
method = 'sLORETA'
snr = 3.
lambda2 = 1.0 / snr ** 2

# signal simulation parameters
# do not add extra noise to the known signals
nave = np.inf
T = 100
times = np.linspace(0, 1, T)
dt = times[1] - times[0]

# Paths to MEG data
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_fwd = op.join(data_path, 'MEG', 'sample',
                    'sample_audvis-meg-oct-6-fwd.fif')
fname_inv = op.join(data_path, 'MEG', 'sample',
                    'sample_audvis-meg-oct-6-meg-fixed-inv.fif')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load the MEG data
fwd = mne.read_forward_solution(fname_fwd)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True,
                                   use_cps=False)
fwd['info']['bads'] = []
inv_op = read_inverse_operator(fname_inv)

raw = mne.io.RawFIF(op.join(data_path, 'MEG', 'sample',
                            'sample_audvis_raw.fif'))
events = mne.find_events(raw)
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
epochs = mne.Epochs(raw, events, event_id, baseline=(None, 0), preload=True)
epochs.info['bads'] = []
evoked = epochs.average()

labels = mne.read_labels_from_annot('sample', subjects_dir=subjects_dir)
label_names = [l.name for l in labels]
n_labels = len(labels)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Estimate the background noise covariance from the baseline period
cov = mne.compute_covariance(epochs, tmin=None, tmax=0.)
0.17/_downloads/f44d9c0360e7806c2f8988ccd7a3b432/plot_point_spread.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
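As an optional visual check (a sketch; not part of the original tutorial), the estimated covariance can be inspected with the Covariance.plot method:

# Shows the covariance matrix and its eigenvalue spectrum
cov.plot(epochs.info)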