Now we create a 5-by-5 grid with a spacing (dx and dy) of 1. We also create an elevation field with a value of 1.0 everywhere, except at the outlet, where the elevation is 0. In this case the outlet is in the middle of the bottom row, at location (0, 2), and has a node id of 2.
mg1 = RasterModelGrid((5, 5), 1.0)
z1 = mg1.add_ones("topographic__elevation", at="node")
mg1.at_node["topographic__elevation"][2] = 0.0
mg1.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
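The status check that follows assumes the watershed boundary condition has already been set from the elevation field. A minimal sketch of that step, assuming landlab's set_watershed_boundary_condition method (this call is not shown in the cells above):

# Hypothetical step assumed by the status check below: close all perimeter
# nodes except the lowest one (the outlet), which becomes a fixed-value node.
mg1.set_watershed_boundary_condition(z1)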
Check that the node statuses were set correctly. By default, imshow does not plot the value of BC_NODE_IS_CLOSED nodes, which is why we override that below with the color_for_closed option.
mg1.imshow(mg1.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The second example uses set_watershed_boundary_condition_outlet_coords. In this case the user knows the coordinates of the outlet node. First instantiate a new grid with new data values.
mg2 = RasterModelGrid((5, 5), 10.0)
z2 = mg2.add_ones("topographic__elevation", at="node")
mg2.at_node["topographic__elevation"][1] = 0.0
mg2.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
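The boundary-status plot that follows assumes the outlet has been set by its coordinates. A minimal sketch, assuming landlab's set_watershed_boundary_condition_outlet_coords method; whether the outlet is given as (row, column) or as x-y coordinates depends on the landlab version, so check the docstring:

# Hypothetical call assumed by the plot below: set the outlet (node 1 of this
# grid) by its coordinates; argument order here assumes (row, column).
mg2.set_watershed_boundary_condition_outlet_coords((0, 1), z2, -9999.0)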
Plot grid of boundary status information
mg2.imshow(mg2.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The third example uses set_watershed_boundary_condition_outlet_id. In this case the user knows the node id of the outlet node. First instantiate a new grid with new data values.
mg3 = RasterModelGrid((5, 5), 5.0)
z3 = mg3.add_ones("topographic__elevation", at="node")
mg3.at_node["topographic__elevation"][5] = 0.0
mg3.at_node["topographic__elevation"]
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
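As above, the plot that follows assumes the outlet has been set, here by its node id. A minimal sketch, assuming landlab's set_watershed_boundary_condition_outlet_id method:

# Hypothetical call assumed by the plot below: node 5 becomes the fixed-value
# outlet and the remaining perimeter nodes are closed.
mg3.set_watershed_boundary_condition_outlet_id(5, z3, -9999.0)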
Another plot to illustrate the results.
mg3.imshow(mg3.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
The final example uses set_watershed_boundary_condition on a watershed that was exported from Arc. First import read_esri_ascii and then read in the DEM data. An optional value of halo=1 is used with read_esri_ascii; this puts a perimeter of nodata values around the DEM. This is done in case there are data values on the edge of the raster, which would otherwise have to be closed in order to set watershed boundary conditions; adding the perimeter avoids that.
from landlab.io import read_esri_ascii

(grid_bijou, z_bijou) = read_esri_ascii("west_bijou_gully.asc", halo=1)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Let's plot the data to see what the topography looks like.
grid_bijou.imshow(z_bijou)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
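The boundary-status plot that follows assumes the watershed boundary condition has been set on the imported DEM using the nodata value from the Arc export. A minimal sketch, assuming a nodata value of -9999:

# Hypothetical call assumed by the status plot below: nodata cells are closed
# and the lowest perimeter data node is found and set as the outlet.
grid_bijou.set_watershed_boundary_condition(z_bijou, nodata_value=-9999)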
Now we can look at the boundary status of the nodes to see where the outlet was found.
grid_bijou.imshow(grid_bijou.status_at_node, color_for_closed="blue")
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
This looks sensible. Now that the boundary conditions are set, we can also look at the topography. By default, imshow shows boundary nodes as black, as illustrated below, but that can be overridden as we have been doing all along.
grid_bijou.imshow(z_bijou)
notebooks/tutorials/boundary_conds/set_watershed_BCs_raster.ipynb
landlab/landlab
mit
Now we look at the probability that the various configurations of significant and non-significant results will be obtained.
plt.figure(figsize=(7, 6))
for k in [0, 1, 2, 3, 6, 7]:
    plt.plot(Ns / 4, cs[:, k])
plt.legend(['nothing', 'X', 'B', 'BX', 'AB', 'ABX'], loc=2)
plt.xlabel('N per cell')
plt.ylabel('pattern frequency');
_ipynb/No Way Anova - Interactions need more power.ipynb
simkovic/simkovic.github.io
mit
Load and Augment the Data. Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. Augmentation: in this cell, we perform some simple data augmentation by randomly flipping and rotating the given image data. We do this by defining a torchvision transform; you can learn about all the transforms used to pre-process and augment data in the torchvision documentation. TODO: Look at the transformation documentation; add more augmentation transforms, and see how your model performs. This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship no matter which direction it is facing). It's recommended that you choose one or two transforms.
from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 20 # percentage of training set to use as validation valid_size = 0.2 # convert data to a normalized torch.FloatTensor transform = transforms.Compose([ transforms.RandomHorizontalFlip(), # randomly flip and rotate transforms.RandomRotation(10), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) # choose the training and test datasets train_data = datasets.CIFAR10('data', train=True, download=True, transform=transform) test_data = datasets.CIFAR10('data', train=False, download=True, transform=transform) # obtain training indices that will be used for validation num_train = len(train_data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders (combine dataset and sampler) train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # specify the image classes classes = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
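As a sketch of the TODO above, a couple of extra torchvision transforms that could be added to the training pipeline (random cropping with padding and slight color jitter). The exact parameter values here are illustrative assumptions, not the notebook's choices:

import torchvision.transforms as transforms

# Illustrative augmentation pipeline for the training data only; padding and
# jitter strengths are assumptions.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),               # CIFAR-10 images are 32x32
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])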
Train the Network Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
# number of epochs to train the model n_epochs = 30 valid_loss_min = np.Inf # track change in validation loss for epoch in range(1, n_epochs+1): # keep track of training and validation loss train_loss = 0.0 valid_loss = 0.0 ################### # train the model # ################### model.train() for batch_idx, (data, target) in enumerate(train_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # update training loss train_loss += loss.item()*data.size(0) ###################### # validate the model # ###################### model.eval() for batch_idx, (data, target) in enumerate(valid_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # update average validation loss valid_loss += loss.item()*data.size(0) # calculate average losses train_loss = train_loss/len(train_loader.dataset) valid_loss = valid_loss/len(valid_loader.dataset) # print training/validation statistics print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format( epoch, train_loss, valid_loss)) # save model if validation loss has decreased if valid_loss <= valid_loss_min: print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format( valid_loss_min, valid_loss)) torch.save(model.state_dict(), 'model_augmented.pt') valid_loss_min = valid_loss
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
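To actually look at how the losses evolve (rather than only printing them), one option is to collect them in lists inside the loop above and plot them afterwards. A minimal sketch, assuming train_losses and valid_losses are appended to once per epoch:

import matplotlib.pyplot as plt

train_losses = []   # inside the epoch loop: train_losses.append(train_loss)
valid_losses = []   # inside the epoch loop: valid_losses.append(valid_loss)

# after training:
plt.plot(train_losses, label='training loss')
plt.plot(valid_losses, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()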
Load the Model with the Lowest Validation Loss
model.load_state_dict(torch.load('model_augmented.pt'))
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Test the Trained Network Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
# track test loss test_loss = 0.0 class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) model.eval() # iterate over test data for batch_idx, (data, target) in enumerate(test_loader): # move tensors to GPU if CUDA is available if train_on_gpu: data, target = data.cuda(), target.cuda() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # update test loss test_loss += loss.item()*data.size(0) # convert output probabilities to predicted class _, pred = torch.max(output, 1) # compare predictions to true label correct_tensor = pred.eq(target.data.view_as(pred)) correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy()) # calculate test accuracy for each object class for i in range(batch_size): label = target.data[i] class_correct[label] += correct[i].item() class_total[label] += 1 # average test loss test_loss = test_loss/len(test_loader.dataset) print('Test Loss: {:.6f}\n'.format(test_loss)) for i in range(10): if class_total[i] > 0: print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % ( classes[i], 100 * class_correct[i] / class_total[i], np.sum(class_correct[i]), np.sum(class_total[i]))) else: print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i])) print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % ( 100. * np.sum(class_correct) / np.sum(class_total), np.sum(class_correct), np.sum(class_total)))
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Visualize Sample Test Results
# obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() images.numpy() # move model inputs to cuda, if GPU available if train_on_gpu: images = images.cuda() # get sample outputs output = model(images) # convert output probabilities to predicted class _, preds_tensor = torch.max(output, 1) preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy()) # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) imshow(images[idx]) ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]), color=("green" if preds[idx]==labels[idx].item() else "red"))
DEEP LEARNING/Pytorch from scratch/CNN/cifar10_cnn_augm.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
First, we load the image and show it.
#Load the image path_to_image = 'images/graffiti.jpg' img = cv2.imread(path_to_image) sr.show_image(img)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
Now we create a SalientDetector object, with some parameters.
det = sr.SalientDetector(SE_size_factor=0.20, lam_factor=4)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We ask the SalientDetector to detect all types of regions:
regions = det.detect(img,
                     find_holes=True,
                     find_islands=True,
                     find_indentations=True,
                     find_protrusions=True,
                     visualize=True)
print(regions.keys())
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We can also output the regions as ellipses
num_regions, features_standard, features_poly = sr.binary_mask2ellipse_features(regions, min_square=False)
print("number of features per saliency type: ", num_regions)
sr.visualize_ellipses(regions["holes"], features_standard["holes"])
sr.visualize_ellipses(regions["islands"], features_standard["islands"])
sr.visualize_ellipses(regions["indentations"], features_standard["indentations"])
sr.visualize_ellipses(regions["protrusions"], features_standard["protrusions"])
# print("Elliptic polynomial features:", features_poly)
sr.visualize_elements_ellipses(img, features_standard);
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
We can also save the elliptic parameters in text files. Below is an example of saving the polynomial coefficients of all regions represented as ellipses.
total_num_regions = sr.save_ellipse_features2file(num_regions, features_poly, 'poly_features.txt')
print("total_num_regions", total_num_regions)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
To load the saved features from file, use the loading function:
import sys, os
sys.path.insert(0, os.path.abspath('..'))
import salientregions as sr

total_num_regions, num_regions, features = sr.load_ellipse_features_from_file('poly_features.txt')
print("total_num_regions: ", total_num_regions)
print("number of features per saliency type: ", num_regions)
# print("features: ", features)
Notebooks/DetectorExample.ipynb
NLeSC/SalientDetector-python
apache-2.0
Process an observation. Set up the processing.
# Default input parameters (replaced in next cell)
sequence_id = ''  # e.g. PAN012_358d0f_20191005T112325

# Unused option for now. See below.
# vmag_min = 6
# vmag_max = 14

position_column_x = 'catalog_wcs_x'
position_column_y = 'catalog_wcs_y'

input_bucket = 'panoptes-images-processed'

# JSON string of additional settings.
observation_settings = '{}'

output_dir = tempfile.TemporaryDirectory().name
image_status = 'MATCHED'
base_url = 'https://storage.googleapis.com'

# Set up output directory and filenames.
output_dir = Path(output_dir)
output_dir.mkdir(parents=True, exist_ok=True)

observation_store_path = output_dir / 'observation.h5'

observation_settings = from_json(observation_settings)
observation_settings['output_dir'] = output_dir
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Fetch all the image documents from the metadata store. We then filter based on image status and measured properties.
unit_id, camera_id, sequence_time = sequence_id.split('_') # Get sequence information sequence_doc_path = f'units/{unit_id}/observations/{sequence_id}' sequence_doc_ref = firestore_db.document(sequence_doc_path) sequence_info = sequence_doc_ref.get().to_dict() exptime = sequence_info['total_exptime'] / sequence_info['num_images'] sequence_info['exptime'] = int(exptime) pd.json_normalize(sequence_info, sep='_').T # Get and show the metadata about the observation. matched_query = sequence_doc_ref.collection('images').where('status', '==', image_status) matched_docs = [d.to_dict() for d in matched_query.stream()] images_df = pd.json_normalize(matched_docs, sep='_') # Set a time index. images_df.time = pd.to_datetime(images_df.time) images_df = images_df.set_index(['time']).sort_index() num_frames = len(images_df) print(f'Found {num_frames} images in observation')
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Filter frames. Filter some of the frames based on the image properties as a whole.
# Sigma filtering of certain stats mask_columns = [ 'camera_colortemp', 'sources_num_detected', 'sources_photutils_fwhm_mean' ] for mask_col in mask_columns: images_df[f'mask_{mask_col}'] = stats.sigma_clip(images_df[mask_col]).mask display(plot.filter_plot(images_df, mask_col, sequence_id)) images_df['is_masked'] = False images_df['is_masked'] = images_df.filter(regex='mask_*').any(1) pg = sb.pairplot(images_df[['is_masked', *mask_columns]], hue='is_masked') pg.fig.suptitle(f'Masked image properties for {sequence_id}', y=1.01) pg.fig.set_size_inches(9, 8); # Get the unfiltered frames images_df = images_df.query('is_masked==False') num_frames = len(images_df) print(f'Frames after filtering: {num_frames}') if num_frames < 10: raise RuntimeError(f'Cannot process with less than 10 frames,have {num_frames}') pg = sb.pairplot(images_df[mask_columns]) pg.fig.suptitle(f'Image properties w/ clipping for {sequence_id}', y=1.01) pg.fig.set_size_inches(9, 8); # Save (most of) the images info to the observation store. images_df.select_dtypes(exclude='object').to_hdf(observation_store_path, key='images', format='table', errors='ignore')
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Load metadata for images
# Build the joined metadata file. sources = list() for image_id in images_df.uid: blob_path = f'gcs://{input_bucket}/{image_id.replace("_", "/")}/sources.parquet' try: sources.append(pd.read_parquet(blob_path)) except FileNotFoundError: print(f'Error finding {blob_path}, skipping') sources_df = pd.concat(sources).sort_index() del sources
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Filter stars. Now that we have images of sufficient quality, filter the star detections themselves. We take the mean metadata values for each star and use them to filter out stellar outliers based on a few properties of the observation as a whole.
# Use the mean value for the observation for each source. sample_source_df = sources_df.groupby('picid').mean() num_sources = len(sample_source_df) print(f'Sources before filtering: {num_sources}') frame_count = sources_df.groupby('picid').catalog_vmag.count() exptime = images_df.camera_exptime.mean() # Mask sources that don't appear in all (filtered) frames. sample_source_df['frame_count'] = frame_count sample_source_df.eval('mask_frame_count = frame_count!=frame_count.max()', inplace=True) fig = Figure() fig.set_dpi(100) ax = fig.subplots() sb.histplot(data=sample_source_df, x='frame_count', hue=f'mask_frame_count', ax=ax, legend=False) ax.set_title(f'{sequence_id} {num_frames=}') fig.suptitle(f'Frame star detection') fig
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
See the Gini coefficient documentation (e.g. in photutils, which provides the photutils_gini column used below) for background.
# Sigma clip columns. clip_columns = [ 'catalog_vmag', 'photutils_gini', 'photutils_fwhm', ] # Display in pair plot columns. pair_columns = [ 'catalog_sep', 'photutils_eccentricity', 'photutils_background_mean', 'catalog_wcs_x_int', 'catalog_wcs_y_int', 'is_masked', ] for mask_col in clip_columns: sample_source_df[f'mask_{mask_col}'] = stats.sigma_clip(sample_source_df[mask_col]).mask # sample_source_df.eval('mask_catalog_vmag = catalog_vmag > @vmag_max or catalog_vmag < @vmag_min', inplace=True) sample_source_df['is_masked'] = False sample_source_df['is_masked'] = sample_source_df.filter(regex='mask_*').any(1) display(Markdown('Number of stars filtered by type (with overlap):')) display(Markdown(sample_source_df.filter(regex='mask_').sum(0).sort_values(ascending=False).to_markdown())) fig = Figure() fig.set_dpi(100) fig.set_size_inches(10, 3) axes = fig.subplots(ncols=len(clip_columns), sharey=True) for i, col in enumerate(clip_columns): sb.histplot(data=sample_source_df, x=col, hue=f'mask_{col}', ax=axes[i], legend=False) fig.suptitle(f'Filter properties for {sequence_id}') fig pp = sb.pairplot(sample_source_df[clip_columns + ['is_masked']], hue='is_masked', plot_kws=dict(alpha=0.5)) pp.fig.suptitle(f'Filter properties for {sequence_id}', y=1.01) pp.fig.set_dpi(100); pp = sb.pairplot(sample_source_df[clip_columns + pair_columns], hue='is_masked', plot_kws=dict(alpha=0.5)) pp.fig.suptitle(f'Catalog vs detected properties for {sequence_id}', y=1.01); pp = sb.pairplot(sample_source_df.query('is_masked==False')[clip_columns + pair_columns], hue='is_masked', plot_kws=dict(alpha=0.5)) pp.fig.suptitle(f'Catalog vs detected for filtered sources of {sequence_id}', y=1.01); fig = Figure() fig.set_dpi(100) ax = fig.add_subplot() plot_data = sample_source_df.query('is_masked == True') sb.scatterplot(data=plot_data, x='catalog_wcs_x_int', y='catalog_wcs_y_int', marker='*', hue='photutils_fwhm', palette='Reds', edgecolor='k', linewidth=0.2, size='catalog_vmag_bin', sizes=(100, 5), ax=ax ) ax.set_title(f'Location of {len(plot_data)} outlier stars in {exptime:.0f}s for {sequence_id}') fig.set_size_inches(12, 8) fig fig = Figure() fig.set_dpi(100) ax = fig.add_subplot() plot_data = sample_source_df.query('is_masked == False') sb.scatterplot(data=plot_data, x='catalog_wcs_x_int', y='catalog_wcs_y_int', marker='*', hue='photutils_fwhm', palette='Blues', edgecolor='k', linewidth=0.2, size='catalog_vmag_bin', sizes=(100, 5), ax=ax ) ax.set_title(f'Location of {len(plot_data)} detected stars in {exptime:.0f}s for {sequence_id}') fig.set_size_inches(12, 8) fig # Get the sources that aren't filtered. sources_df = sources_df.loc[sample_source_df.query('is_masked == False').index] num_sources = len(sources_df.index.get_level_values('picid').unique()) print(f'Detected stars after filtering: {num_sources}') # Filter based on mean x and y movement of stars. 
position_diffs = sources_df[['catalog_wcs_x_int', 'catalog_wcs_y_int']].groupby('picid').apply(lambda grp: grp - grp.mean()) pixel_diff_mask = stats.sigma_clip(position_diffs.groupby('time').mean()).mask x_mask = pixel_diff_mask[:, 0] y_mask = pixel_diff_mask[:, 1] print(f'Filtering {sum(x_mask | y_mask)} of {num_frames} frames based on pixel movement.') filtered_time_index = sources_df.index.get_level_values('time').unique()[~(x_mask | y_mask)] # Filter sources sources_df = sources_df.reset_index('picid').loc[filtered_time_index].reset_index().set_index(['picid', 'time']).sort_index() # Filter images images_df = images_df.loc[filtered_time_index] num_frames = len(filtered_time_index) print(f'Now have {num_frames}') fig = Figure() fig.set_dpi(100) fig.set_size_inches(8, 4) ax = fig.add_subplot() position_diffs.groupby('time').mean().plot(marker='.', ax=ax) # Mark outliers time_mean = position_diffs.groupby('time').mean() pd.DataFrame(time_mean[x_mask]['catalog_wcs_x_int']).plot(marker='o', c='r', ls='', ax=ax, legend=False) pd.DataFrame(time_mean[y_mask]['catalog_wcs_y_int']).plot(marker='o', c='r', ls='', ax=ax, legend=False) ax.hlines(1, time_mean.index[0], time_mean.index[-1], ls='--', color='grey', alpha=0.5) ax.hlines(-1, time_mean.index[0], time_mean.index[-1], ls='--', color='grey', alpha=0.5) if time_mean.max().max() < 6: ax.set_ylim([-6, 6]) ax.set_title(f'Mean xy pixel movement for {num_sources} stars {sequence_id}') ax.set_xlabel('Time [utc]') ax.set_ylabel('Difference from mean [pixel]') fig # Save sources to observation hdf5 file. sources_df.to_hdf(observation_store_path, key='sources', format='table') del sources_df
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
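For reference, a small sketch of how a Gini coefficient over pixel values can be computed. This follows the standard sorted-value formula; it is background for the photutils_gini column mentioned above, not code from the notebook:

import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (standard sorted-value formula)."""
    x = np.sort(np.ravel(values).astype(float))
    n = x.size
    i = np.arange(1, n + 1)
    # G = sum((2i - n - 1) * x_i) / (n * sum(x_i)); 0 = evenly spread, ~1 = concentrated
    return np.sum((2 * i - n - 1) * x) / (n * np.sum(x))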
Make stamp locations
xy_catalog = pd.read_hdf(observation_store_path, key='sources', columns=[position_column_x, position_column_y]).reset_index().groupby('picid') # Get max diff in xy positions. x_catalog_diff = (xy_catalog.catalog_wcs_x.max() - xy_catalog.catalog_wcs_x.min()).max() y_catalog_diff = (xy_catalog.catalog_wcs_y.max() - xy_catalog.catalog_wcs_y.min()).max() if x_catalog_diff >= 18 or y_catalog_diff >= 18: raise RuntimeError(f'Too much drift! {x_catalog_diff=} {y_catalog_diff}') stamp_width = 10 if x_catalog_diff < 10 else 18 stamp_height = 10 if y_catalog_diff < 10 else 18 # Determine stamp size stamp_size = (stamp_width, stamp_height) print(f'Using {stamp_size=}.') # Get the mean positions xy_mean = xy_catalog.mean() xy_std = xy_catalog.std() xy_mean = xy_mean.rename(columns=dict( catalog_wcs_x=f'{position_column_x}_mean', catalog_wcs_y=f'{position_column_y}_mean') ) xy_std = xy_std.rename(columns=dict( catalog_wcs_x=f'{position_column_x}_std', catalog_wcs_y=f'{position_column_y}_std') ) xy_mean = xy_mean.join(xy_std) stamp_positions = xy_mean.apply( lambda row: bayer.get_stamp_slice(row[f'{position_column_x}_mean'], row[f'{position_column_y}_mean'], stamp_size=stamp_size, as_slices=False, ), axis=1, result_type='expand') stamp_positions[f'{position_column_x}_mean'] = xy_mean[f'{position_column_x}_mean'] stamp_positions[f'{position_column_y}_mean'] = xy_mean[f'{position_column_y}_mean'] stamp_positions[f'{position_column_x}_std'] = xy_mean[f'{position_column_x}_std'] stamp_positions[f'{position_column_y}_std'] = xy_mean[f'{position_column_y}_std'] stamp_positions.rename(columns={0: 'stamp_y_min', 1: 'stamp_y_max', 2: 'stamp_x_min', 3: 'stamp_x_max'}, inplace=True) stamp_positions.to_hdf(observation_store_path, key='positions', format='table')
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Extract stamps
# Get list of FITS file urls fits_urls = [f'{base_url}/{input_bucket}/{image_id.replace("_", "/")}/image.fits.fz' for image_id in images_df.uid] # Build the joined metadata file. reference_image = None diff_image = None stack_image = None for image_time, fits_url in zip(images_df.index, fits_urls): try: data = fits_utils.getdata(fits_url) if reference_image is None: reference_image = data diff_image = np.zeros_like(data) stack_image = np.zeros_like(data) # Get the diff and stack images. diff_image = diff_image + (data - reference_image) stack_image = stack_image + data # Get stamps data from positions. stamps = make_stamps(stamp_positions, data) # Add the time stamp to this index. time_index = [image_time] * num_sources stamps.index = pd.MultiIndex.from_arrays([stamps.index, time_index], names=('picid', 'time')) # Append directly to the observation store. stamps.to_hdf(observation_store_path, key='stamps', format='table', append=True) except Exception as e: print(f'Problem with {fits_url}: {e!r}') fits.HDUList([ fits.PrimaryHDU(diff_image), fits.ImageHDU(stack_image / num_frames), ]).writeto(str(output_dir / f'stack-and-diff.fits'), overwrite=True) image_title = sequence_id if 'field_name' in sequence_info: image_title = f'{sequence_id} \"{sequence_info["field_name"]}\"' plot.image_simple(stack_image, title=f'Stack image for {image_title}') plot.image_simple(diff_image, title=f'Diff image for {image_title}') # Results JSON(sequence_info, expanded=True)
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Notebook environment info
!jupyter --version
current_time()
notebooks/ProcessObservation.ipynb
panoptes/PIAA
mit
Cluster analysis
nperm = 1000

T_obs_bin, clusters_bin, clusters_pb_bin, H0_bin = mne.stats.spatio_temporal_cluster_test(
    X_bin, threshold=None, n_permutations=nperm, out_type='mask')
T_obs_ste, clusters_ste, clusters_pb_ste, H0_ste = mne.stats.spatio_temporal_cluster_test(
    X_ste, threshold=None, n_permutations=nperm, out_type='mask')
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
We retrieve the channels found by the cluster analysis.
def extract_electrodes_times(clusters,clusters_pb,tmin_ind=500,tmax_ind=640,alpha=0.005,evoked = ev_bin_dev): ch_list_temp = [] time_list_temp = [] for clust,pval in zip(clusters,clusters_pb): if pval < alpha: for j,curline in enumerate(clust[tmin_ind:tmax_ind]): for k,el in enumerate(curline): if el: ch_list_temp.append(evoked.ch_names[k]) time_list_temp.append(evoked.times[j+tmin_ind]) return np.unique(ch_list_temp),np.unique(time_list_temp) channels_deviance_ste,times_deviance_ste=extract_electrodes_times(clusters_ste,clusters_pb_ste) channels_deviance_bin,times_deviance_bin=extract_electrodes_times(clusters_bin,clusters_pb_bin) print(channels_deviance_bin),print(times_deviance_bin) print(channels_deviance_ste),print(times_deviance_ste) times_union = np.union1d(times_deviance_bin,times_deviance_ste) ch_union = np.unique(np.hstack([channels_deviance_bin,channels_deviance_ste])) print(ch_union) #Selecting channels epochs_bin_dev_ch = epochs_bin_dev.pick_channels(ch_union) epochs_bin_std_ch = epochs_bin_std.pick_channels(ch_union) epochs_ste_dev_ch = epochs_ste_dev.pick_channels(ch_union) epochs_ste_std_ch = epochs_ste_std.pick_channels(ch_union) X_diff = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1), epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)] X_diff_ste_bin = X_diff[1]-X_diff[0] epochs_bin_dev_ch.plot_sensors(show_names=True) plt.show() roi = ['E117','E116','E108','E109','E151','E139','E141','E152','E110','E131','E143','E154','E142','E153','E140','E127','E118'] roi_frontal = ['E224','E223','E2','E4','E5','E6','E13','E14','E15','E20','E21','E27','E28','E30','E36','E40','E41'] len(roi_frontal),len(roi)
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
One-sample t-test, FDR corrected (per electrode)
from scipy.stats import ttest_1samp
from mne.stats import bonferroni_correction, fdr_correction

def ttest_amplitude(X, times_ind, ch_names, times):
    # Select time points and average over time
    amps = X[:, times_ind, :].mean(axis=1)
    T, pval = ttest_1samp(amps, 0)
    alpha = 0.05
    n_samples, n_tests = amps.shape
    threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)
    reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)
    threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)
    reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')
    mask_fdr = pval_fdr < 0.05
    mask_bonf = pval_bonferroni < 0.05
    print('FDR from %02f to %02f' % (times[times_ind[0]], times[times_ind[-1]]))
    for i, curi in enumerate(mask_fdr):
        if curi:
            print("Channel %s, T = %0.2f, p = %0.3f " % (ch_names[i], T[i], pval_fdr[i]))
    print('Bonferroni from %02f to %02f' % (times[times_ind[0]], times[times_ind[-1]]))
    for i, curi in enumerate(mask_bonf):
        if curi:
            print("Channel %s, T = %0.2f, p = %0.3f " % (ch_names[i], T[i], pval_bonferroni[i]))
    return T, pval, pval_fdr, pval_bonferroni

def ttest_amplitude_roi(X, times_ind, ch_names_roi, times):
    print(X.shape)
    # Select time points and average over time
    amps = X[:, times_ind, :].mean(axis=1)
    # Average over channels
    amps = amps.mean(axis=1)
    T, pval = ttest_1samp(amps, 0)
    alpha = 0.05
    n_samples, _, n_tests = X.shape
    print('Uncorrected from %02f to %02f' % (times[times_ind[0]], times[times_ind[-1]]))
    print("T = %0.2f, p = %0.3f " % (T, pval))
    # Note: pval_fdr and pval_bonferroni are not computed in this function;
    # they resolve to the module-level values set by earlier calls.
    return T, pval, pval_fdr, pval_bonferroni
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Tests from 280 to 440 ms, in 20 ms windows with 10 ms overlap
toi = np.arange(0.28, 0.44, 0.001)
toi_index = ev_bin_dev.time_as_index(toi)
wsize = 20
wstep = 10
toi
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Printing and preparing all time windows
all_toi_indexes = []
for i in range(14):
    print(toi[10*i], toi[10*i + 20])
    cur_toi_ind = range(10*i + 1, (10*i + 21))
    all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))

print(toi[10*14], toi[10*14 + 19])
cur_toi_ind = range(10*14 + 1, (10*14 + 19))
all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Tests on each time window
for cur_timewindow in all_toi_indexes:
    T, pval, pval_fdr, pval_bonferroni = ttest_amplitude(X_diff_ste_bin, cur_timewindow,
                                                         epochs_bin_dev_ch.ch_names,
                                                         times=epochs_bin_dev_ch.times)
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
On a channel subset (ROI), averaged over channels. Parietal ROI.
#Selecting channels epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop) epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop) epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop) epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop) mne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std]) epochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi) epochs_bin_std_ch = epochs_bin_std.pick_channels(roi) epochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi) epochs_ste_std_ch = epochs_ste_std.pick_channels(roi) X_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1), epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)] X_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0] for cur_timewindow in all_toi_indexes: T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times) grav_bin_dev = epochs_bin_dev_ch.average() grav_bin_std = epochs_bin_std_ch.average() grav_ste_dev = epochs_ste_dev_ch.average() grav_ste_std = epochs_ste_std_ch.average() evoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std], weights='equal') evoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std], weights='equal') mne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) plt.show() mne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) plt.show()
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Frontal ROI
#Selecting channels epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop) epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop) epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop) epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop) mne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std]) epochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi_frontal) epochs_bin_std_ch = epochs_bin_std.pick_channels(roi_frontal) epochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi_frontal) epochs_ste_std_ch = epochs_ste_std.pick_channels(roi_frontal) X_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1), epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)] X_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0] for cur_timewindow in all_toi_indexes: T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times) grav_bin_dev = epochs_bin_dev_ch.average() grav_bin_std = epochs_bin_std_ch.average() grav_ste_dev = epochs_ste_dev_ch.average() grav_ste_std = epochs_ste_std_ch.average() evoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std], weights='equal') evoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std], weights='equal') mne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) plt.show() mne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]) plt.show() mne.viz.plot_compare_evokeds? from scipy import stats from mne.stats import bonferroni_correction,fdr_correction T, pval = ttest_1samp(X_diff_ste_bin, 0) alpha = 0.05 n_samples, n_tests,_ = X_diff_ste_bin.shape threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1) reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha) threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1) reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep') #threshold_fdr = np.min(np.abs(T)[reject_fdr]) masking_mat = pval<0.05 Tbis = np.zeros_like(T) Tbis[masking_mat] = T[masking_mat] plt.matshow(Tbis.T,cmap=plt.cm.RdBu_r) plt.colorbar() plt.show() plt.matshow(-np.log10(pval).T) plt.colorbar()
oddball/ERP_grav_statistics-univariate.ipynb
nicofarr/eeg4sounds
apache-2.0
Exercise: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability p and C with probability 1-p. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of p? What if Monty opens C? Hint: you might want to use SymPy to do the algebra for you.
from sympy import symbols

p = symbols('p')

pmf = Pmf('ABC')
pmf['A'] *= p
pmf['B'] *= 0
pmf['C'] *= 1
pmf.Normalize()
pmf.Print()

p
pmf['A'].simplify()
pmf['A'].subs(p, 0.5)
code/chap02mine.ipynb
NathanYee/ThinkBayes2
gpl-2.0
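For reference, a worked version of the algebra the cell above asks SymPy to do (you choose A and Monty opens B): the likelihoods of "Monty opens B" are p, 0, and 1 for the car being behind A, B, and C respectively, so

$$P(A \mid \text{opens } B) = \frac{\frac{1}{3}\,p}{\frac{1}{3}\,p + \frac{1}{3}\cdot 0 + \frac{1}{3}\cdot 1} = \frac{p}{p+1}, \qquad P(C \mid \text{opens } B) = \frac{1}{p+1}.$$

If instead Monty opens C, the likelihoods become 1-p, 1, 0, so $P(A \mid \text{opens } C) = \frac{1-p}{2-p}$. With p = 1/2 both reduce to the familiar 1/3.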
Parameters
num_inputs = 784  # 28*28
neurons_hid1 = 392
neurons_hid2 = 196
neurons_hid3 = neurons_hid1  # Decoder Begins
num_outputs = num_inputs

learning_rate = 0.01
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Activation function
actf = tf.nn.relu
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Placeholder
X = tf.placeholder(tf.float32, shape = [None, num_inputs])
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Weights. Initializer capable of adapting its scale to the shape of weights tensors. With distribution="normal", samples are drawn from a truncated normal distribution centered on zero, with stddev = sqrt(scale / n) where n is:
- number of input units in the weight tensor, if mode = "fan_in"
- number of output units, if mode = "fan_out"
- average of the numbers of input and output units, if mode = "fan_avg"
With distribution="uniform", samples are drawn from a uniform distribution within [-limit, limit], with limit = sqrt(3 * scale / n).
initializer = tf.variance_scaling_initializer()

w1 = tf.Variable(initializer([num_inputs, neurons_hid1]), dtype = tf.float32)
w2 = tf.Variable(initializer([neurons_hid1, neurons_hid2]), dtype = tf.float32)
w3 = tf.Variable(initializer([neurons_hid2, neurons_hid3]), dtype = tf.float32)
w4 = tf.Variable(initializer([neurons_hid3, num_outputs]), dtype = tf.float32)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
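As a quick sanity check of the formula quoted above: assuming the TF 1.x defaults (scale=1.0, mode="fan_in"), the first weight matrix here would be drawn with stddev = sqrt(1/784), i.e. about 0.036. A minimal sketch:

import numpy as np

scale, fan_in = 1.0, 784          # assumed defaults: scale=1.0, mode="fan_in"
stddev = np.sqrt(scale / fan_in)  # stddev = sqrt(scale / n), per the formula above
print(stddev)                     # ~0.0357 for the 784 -> 392 layer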
Biases
b1 = tf.Variable(tf.zeros(neurons_hid1))
b2 = tf.Variable(tf.zeros(neurons_hid2))
b3 = tf.Variable(tf.zeros(neurons_hid3))
b4 = tf.Variable(tf.zeros(num_outputs))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Activation Function and Layers
act_func = tf.nn.relu

hid_layer1 = act_func(tf.matmul(X, w1) + b1)
hid_layer2 = act_func(tf.matmul(hid_layer1, w2) + b2)
hid_layer3 = act_func(tf.matmul(hid_layer2, w3) + b3)
output_layer = tf.matmul(hid_layer3, w4) + b4
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Loss Function
loss = tf.reduce_mean(tf.square(output_layer - X))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Optimizer
# tf.train.RMSPropOptimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Initialize Variables
init = tf.global_variables_initializer()
saver = tf.train.Saver()

num_epochs = 5
batch_size = 150

with tf.Session() as sess:
    sess.run(init)

    # Epoch == Entire Training Set
    for epoch in range(num_epochs):
        num_batches = mnist.train.num_examples // batch_size  # 150 batch size

        for iteration in range(num_batches):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            sess.run(train, feed_dict = {X: X_batch})

        training_loss = loss.eval(feed_dict={X: X_batch})
        print("Epoch {} Complete. Training Loss: {}".format(epoch, training_loss))

    saver.save(sess, "./checkpoint/stacked_autoencoder.ckpt")
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Test Autoencoder output on Test Data
num_test_images = 10

with tf.Session() as sess:
    saver.restore(sess, "./checkpoint/stacked_autoencoder.ckpt")
    results = output_layer.eval(feed_dict = {X: mnist.test.images[:num_test_images]})

# Compare original images with their reconstructions
f, a = plt.subplots(2, 10, figsize = (20, 4))
for i in range(num_test_images):
    a[0][i].imshow(np.reshape(mnist.test.images[i], (28, 28)))
    a[1][i].imshow(np.reshape(results[i], (28, 28)))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/06-Autoencoders/02-Stacked-Autoencoder-Example.ipynb
arcyfelix/Courses
apache-2.0
Basic 1D non-linear regression with Keras
TODO: see https://stackoverflow.com/questions/44998910/keras-model-to-fit-polynomial
Install Keras: https://keras.io/#installation
Install dependencies:
- Install the TensorFlow backend (https://www.tensorflow.org/install/): pip install tensorflow
- Install h5py (required if you plan on saving Keras models to disk, http://docs.h5py.org/en/latest/build.html#wheels): pip install h5py
- Install pydot (used by visualization utilities to plot model graphs, https://github.com/pydot/pydot#installation): pip install pydot
Install Keras: pip install keras
Import packages and check versions
import tensorflow as tf
tf.__version__

import keras
keras.__version__

import h5py
h5py.__version__

import pydot
pydot.__version__
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Make the dataset
df_train = gen_1d_polynomial_samples(n_samples=100, noise_std=0.05)
x_train = df_train.x.values
y_train = df_train.y.values
plt.plot(x_train, y_train, ".k");

df_test = gen_1d_polynomial_samples(n_samples=100, noise_std=None)
x_test = df_test.x.values
y_test = df_test.y.values
plt.plot(x_test, y_test, ".k");
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Make the regressor
model = keras.models.Sequential()

#model.add(keras.layers.Dense(units=1000, activation='relu', input_dim=1))
#model.add(keras.layers.Dense(units=1))

#model.add(keras.layers.Dense(units=1000, activation='relu'))
#model.add(keras.layers.Dense(units=1))

model.add(keras.layers.Dense(units=5, activation='relu', input_dim=1))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))

model.compile(loss='mse', optimizer='adam')
model.summary()

hist = model.fit(x_train, y_train, batch_size=100, epochs=3000, verbose=None)
plt.plot(hist.history['loss']);

model.evaluate(x_test, y_test)

y_predicted = model.predict(x_test)
plt.plot(x_test, y_test, ".r")
plt.plot(x_test, y_predicted, ".k");
nb_dev_python/python_keras_1d_non-linear_regression.ipynb
jdhp-docs/python_notebooks
mit
Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
class Player:
    # create here two local variables to store a unique ID for each player and the player's current 'pot' of money
    # [FILL IN YOUR VARIABLES HERE]

    # in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
    def __init__(self, inputID, startingPot):
        # [CREATE YOUR INITIALIZATIONS HERE]

    # create a function for playing the game. This function starts by taking an input for the dealer's card
    # and picking a random number from the 'cards' list for the player's card
    def play(self, dealerCard):
        # we use the random.choice() function to select a random item from a list
        playerCard = random.choice(cards)

        # here we should have a conditional that tests the player's card value against the dealer card
        # and returns a statement saying whether the player won or lost the hand
        # before returning the statement, make sure to either add or subtract the stake from the player's pot so that
        # the 'pot' variable tracks the player's money
        if playerCard < dealerCard:
            # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
        else:
            # [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]

    # create an accessor function to return the current value of the player's pot
    def returnPot(self):
        # [FILL IN THE RETURN STATEMENT]

    # create an accessor function to return the player's ID
    def returnID(self):
        # [FILL IN THE RETURN STATEMENT]
notebooks/week-2/04 - Lab 2 Assignment.ipynb
tolaoniyangi/dmc
apache-2.0
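One possible completion of the skeleton above, for reference only; the stake amount, the win/lose direction, and the message wording are assumptions, and the assignment asks you to fill these in yourself:

class Player:
    def __init__(self, inputID, startingPot):
        self.ID = inputID        # unique ID for this player
        self.pot = startingPot   # the player's current pot of money

    def play(self, dealerCard):
        playerCard = random.choice(cards)
        stake = 1  # assumed stake per hand
        if playerCard > dealerCard:
            self.pot += stake
            return "player " + str(self.ID) + " wins with " + str(playerCard)
        else:
            self.pot -= stake
            return "player " + str(self.ID) + " loses with " + str(playerCard)

    def returnPot(self):
        return self.pot

    def returnID(self):
        return self.ID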
<span id="alos_land_change_plat_prod">Choose Platform and Product &#9652;</span>
# Select one of the ALOS data cubes from around the world
# Colombia, Vietnam, Samoa Islands

## ALOS Data Summary
# There are 7 time slices (epochs) for the ALOS mosaic data.
# The dates of the mosaics are centered on June 15 of each year (time stamp)
# Bands: RGB (HH-HV-HH/HV), HH, HV, date, incidence angle, mask
# Years: 2007, 2008, 2009, 2010, 2015, 2016, 2017

platform = "ALOS/ALOS-2"
product = "alos_palsar_mosaic"
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_extents"></a> Get the Extents of the Cube &#9652;
from utils.data_cube_utilities.dc_time import dt_to_str

metadata = dc.load(platform=platform, product=product, measurements=[])

full_lat = metadata.latitude.values[[-1, 0]]
full_lon = metadata.longitude.values[[0, -1]]
min_max_dates = list(map(dt_to_str, map(pd.to_datetime, metadata.time.values[[0, -1]])))

# Print the extents of the combined data.
print("Latitude Extents:", full_lat)
print("Longitude Extents:", full_lon)
print("Time Extents:", min_max_dates)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_parameters"></a> Define the Analysis Parameters &#9652;
from datetime import datetime

## Samoa ##
# Apia City
# lat = (-13.7897, -13.8864)
# lon = (-171.8531, -171.7171)
# time_extents = ("2014-01-01", "2014-12-31")

# East Area
# lat = (-13.94, -13.84)
# lon = (-171.96, -171.8)
# time_extents = ("2014-01-01", "2014-12-31")

# Central Area
# lat = (-14.057, -13.884)
# lon = (-171.774, -171.573)
# time_extents = ("2014-01-01", "2014-12-31")

# Small focused area in Central Region
# lat = (-13.9443, -13.884)
# lon = (-171.6431, -171.573)
# time_extents = ("2014-01-01", "2014-12-31")

## Kenya ##
# Mombasa
lat = (-4.1095, -3.9951)
lon = (39.5178, 39.7341)
time_extents = ("2007-01-01", "2017-12-31")
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_load"></a> Load and Clean Data from the Data Cube &#9652;
dataset = dc.load(product = product,
                  platform = platform,
                  latitude = lat,
                  longitude = lon,
                  time = time_extents)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
View an acquisition in the dataset
# Select a baseline and analysis time slice for comparison
# Make the adjustments to the years according to the following scheme
# Time Slice: 0=2007, 1=2008, 2=2009, 3=2010, 4=2015, 5=2016, 6=2017
baseline_slice = dataset.isel(time = 0)
analysis_slice = dataset.isel(time = -1)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_rgbs"></a> View RGBs for the Baseline and Analysis Periods &#9652;
%matplotlib inline from utils.data_cube_utilities.dc_rgb import rgb # Baseline RGB rgb_dataset2 = xr.Dataset() min_ = np.min([ np.percentile(baseline_slice.hh,5), np.percentile(baseline_slice.hv,5), ]) max_ = np.max([ np.percentile(baseline_slice.hh,95), np.percentile(baseline_slice.hv,95), ]) rgb_dataset2['base.hh'] = baseline_slice.hh.clip(min_,max_)/40 rgb_dataset2['base.hv'] = baseline_slice.hv.clip(min_,max_)/20 rgb_dataset2['base.ratio'] = (baseline_slice.hh.clip(min_,max_)/baseline_slice.hv.clip(min_,max_))*75 rgb(rgb_dataset2, bands=['base.hh','base.hv','base.ratio'], width=8) # Analysis RGB rgb_dataset2 = xr.Dataset() min_ = np.min([ np.percentile(analysis_slice.hh,5), np.percentile(analysis_slice.hv,5), ]) max_ = np.max([ np.percentile(analysis_slice.hh,95), np.percentile(analysis_slice.hv,95), ]) rgb_dataset2['base.hh'] = analysis_slice.hh.clip(min_,max_)/40 rgb_dataset2['base.hv'] = analysis_slice.hv.clip(min_,max_)/20 rgb_dataset2['base.ratio'] = (analysis_slice.hh.clip(min_,max_)/analysis_slice.hv.clip(min_,max_))*75 rgb(rgb_dataset2, bands=['base.hh','base.hv','base.ratio'], width=8)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_hh_hv"></a> Plot HH or HV Band for the Baseline and Analysis Periods &#9652; NOTE: The HV band is best for deforestation detection. Typical radar analyses convert the backscatter values at the pixel level to dB scale.<br> The ALOS conversion (from JAXA) is: Backscatter dB = 20 * log10(backscatter intensity) - 83.0
# Plot the BASELINE and ANALYSIS slice side-by-side
# Change the band (HH or HV) in the code below

plt.figure(figsize = (15,6))

plt.subplot(1,2,1)
(20*np.log10(baseline_slice.hv)-83).plot(vmax=0, vmin=-30, cmap = "Greys_r")

plt.subplot(1,2,2)
(20*np.log10(analysis_slice.hv)-83).plot(vmax=0, vmin=-30, cmap = "Greys_r")
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_custom_rgb"></a> Plot a Custom RGB That Uses Bands from the Baseline and Analysis Periods &#9652; The RGB image below assigns RED to the baseline year HV band and GREEN+BLUE to the analysis year HV band<br> Vegetation loss appears in RED and regrowth in CYAN. Areas of no change appear in different shades of GRAY.<br> Users can change the RGB color assignments and bands (HH, HV) in the code below
# Clipping the bands uniformly to brighten the image
rgb_dataset2 = xr.Dataset()
min_ = np.min([
    np.percentile(baseline_slice.hv, 5),
    np.percentile(analysis_slice.hv, 5),
])
max_ = np.max([
    np.percentile(baseline_slice.hv, 95),
    np.percentile(analysis_slice.hv, 95),
])
rgb_dataset2['baseline_slice.hv'] = baseline_slice.hv.clip(min_, max_)
rgb_dataset2['analysis_slice.hv'] = analysis_slice.hv.clip(min_, max_)

# Plot the RGB with clipped HV band values
rgb(rgb_dataset2, bands=['baseline_slice.hv', 'analysis_slice.hv', 'analysis_slice.hv'], width=8)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Select one of the plots below and adjust the threshold limits (top and bottom)
plt.figure(figsize = (15,6))

plt.subplot(1,2,1)
baseline_slice.hv.plot(vmin=0, vmax=4000, cmap="Greys")

plt.subplot(1,2,2)
analysis_slice.hv.plot(vmin=0, vmax=4000, cmap="Greys")
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
<a id="alos_land_change_change_product"></a> Plot a Change Product to Compare Two Time Periods (Epochs) &#9652;
from matplotlib.ticker import FuncFormatter def intersection_threshold_plot(first, second, th, mask = None, color_none=np.array([0,0,0]), color_first=np.array([0,255,0]), color_second=np.array([255,0,0]), color_both=np.array([255,255,255]), color_mask=np.array([127,127,127]), width = 10, *args, **kwargs): """ Given two dataarrays, create a threshold plot showing where zero, one, or both are within a threshold. Parameters ---------- first, second: xarray.DataArray The DataArrays to compare. th: tuple A 2-tuple of the minimum (inclusive) and maximum (exclusive) threshold values, respectively. mask: numpy.ndarray A NumPy array of the same shape as the dataarrays. The pixels for which it is `True` are colored `color_mask`. color_none: list-like A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where neither first nor second have values within the threshold. Default color is black. color_first: list-like A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where only the first has values within the threshold. Default color is green. color_second: list-like A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where only the second has values within the threshold. Default color is red. color_both: list-like A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where both the first and second have values within the threshold. Default color is white. color_mask: list-like A list-like of 3 elements - red, green, and blue values in range [0,255], used to color regions where `mask == True`. Overrides any other color a region may have. Default color is gray. width: int The width of the created ``matplotlib.figure.Figure``. *args: list Arguments passed to ``matplotlib.pyplot.imshow()``. **kwargs: dict Keyword arguments passed to ``matplotlib.pyplot.imshow()``. """ mask = np.zeros(first.shape).astype(bool) if mask is None else mask first_in = np.logical_and(th[0] <= first, first < th[1]) second_in = np.logical_and(th[0] <= second, second < th[1]) both_in = np.logical_and(first_in, second_in) none_in = np.invert(both_in) # The colors for each pixel. color_array = np.zeros((*first.shape, 3)).astype(np.int16) color_array[none_in] = color_none color_array[first_in] = color_first color_array[second_in] = color_second color_array[both_in] = color_both color_array[mask] = color_mask def figure_ratio(ds, fixed_width = 10): width = fixed_width height = len(ds.latitude) * (fixed_width / len(ds.longitude)) return (width, height) fig, ax = plt.subplots(figsize = figure_ratio(first,fixed_width = width)) lat_formatter = FuncFormatter(lambda y_val, tick_pos: "{0:.3f}".format(first.latitude.values[tick_pos] )) lon_formatter = FuncFormatter(lambda x_val, tick_pos: "{0:.3f}".format(first.longitude.values[tick_pos])) ax.xaxis.set_major_formatter(lon_formatter) ax.yaxis.set_major_formatter(lat_formatter) plt.title("Threshold: {} < x < {}".format(th[0], th[1])) plt.xlabel('Longitude') plt.ylabel('Latitude') plt.imshow(color_array, *args, **kwargs) plt.show() change_product_band = 'hv' baseline_epoch = "2007-07-02" analysis_epoch = "2017-07-02" threshold_range = (0, 2000) # The minimum and maximum threshold values, respectively. 
baseline_ds = dataset.sel(time=baseline_epoch)[change_product_band].isel(time=0) analysis_ds = dataset.sel(time=analysis_epoch)[change_product_band].isel(time=0) anomaly = analysis_ds - baseline_ds intersection_threshold_plot(baseline_ds, analysis_ds, threshold_range)
notebooks/land_change/SAR/ALOS_Land_Change.ipynb
ceos-seo/data_cube_notebooks
apache-2.0
Exercises Exercise: Our solution to the differential equations is only approximate because we used a finite step size, dt=2 minutes. If we make the step size smaller, we expect the solution to be more accurate. Run the simulation with dt=1 and compare the results. What is the largest relative error between the two solutions?
# Solution goes here

# Solution goes here

# Solution goes here

# Solution goes here
notebooks/chap18.ipynb
AllenDowney/ModSimPy
mit
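A minimal sketch of the comparison the exercise asks for, assuming the two runs produce results_dt2 (dt=2) and results_dt1 (dt=1) as time-indexed pandas objects; these names are placeholders, not ModSimPy API:

import numpy as np

# Compare only at the time points of the coarser run, then take the
# largest relative difference.
ts = results_dt2.index
rel_err = np.abs(results_dt1.loc[ts] - results_dt2) / np.abs(results_dt1.loc[ts])
print(rel_err.max())   # use rel_err.max().max() if the results have several columns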
For some reason we can't run these in the notebook, so we have to run them with subprocess, like so:
%%python
from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, [1, 2, 3]))

%%python
from multiprocessing import Pool
import numpy as np

def f(x):
    return x*x

if __name__ == '__main__':
    p = Pool(5)
    print(p.map(f, np.array([1, 2, 3])))
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
Now doing this asynchronously:
%%python
from multiprocessing import Pool
import numpy as np

def f(x):
    return x**2

if __name__ == '__main__':
    p = Pool(5)
    r = p.map_async(f, np.array([0, 1, 2]))
    print(dir(r))
    print(r.get(timeout=1))
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
Now trying to create an iterable that will precompute its output using multiprocessing.
%%python from multiprocessing import Pool import numpy as np def f(x): return x**2 class It(object): def __init__(self,a): # store an array (2D) self.a = a # initialise pool self.p = Pool(4) # initialise index self.i = 0 # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.i,:]) def get(self): return self.batch.get(timeout=1) def f(self,x): return x**2 if __name__ == '__main__': it = It(np.random.randn(4,4)) print(it.get()) %%python from multiprocessing import Pool import numpy as np def f(x): return x**2 class It(object): def __init__(self,a): # store an array (2D) self.a = a # initialise pool self.p = Pool(4) # initialise index self.i = 0 # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.i,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.batch: # get the output output = self.batch.get(timeout=1) #output = self.batch # prepare next batch self.i += 1 if self.i < self.a.shape[0]: self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.i,:]) #self.batch = map(self.f,self.a[self.i,:]) else: self.batch = False return output else: raise StopIteration if __name__ == '__main__': it = It(np.random.randn(4,4)) for a in it: print a
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
Then we try to do a similar thing, but using the RandomAugment function. In the following cells, one version uses multiprocessing and one doesn't. We test them by pretending to ask for a minibatch and then sleeping, applying the RandomAugment function each time.
%%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(4) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise pre-computed first batch self.batch = map(self.f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] self.p = Pool(4) self.batch = map(self.f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: time.sleep(0.01) pass %%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(8) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch.get(timeout=1) # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] #self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: time.sleep(0.01) pass %%time %%python from multiprocessing import Pool import numpy as np import neukrill_net.augment import time class It(object): def __init__(self,a,f): # store an array (2D) self.a = a # store the function self.f = f # initialise pool self.p = Pool(8) # initialise indices self.inds = range(self.a.shape[0]) # pop a batch from top self.batch_inds = [self.inds.pop(0) for _ in range(100)] # initialise pre-computed first batch self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) def __iter__(self): return self def next(self): # check if we've got something pre-computed to return if self.inds != []: # get the output output = self.batch.get(timeout=1) # prepare next batch self.batch_inds = [self.inds.pop(0) for _ in range(100)] #self.p = Pool(4) self.batch = self.p.map_async(f,self.a[self.batch_inds,:]) return output else: raise StopIteration if __name__ == '__main__': f = neukrill_net.augment.RandomAugment(rotate=[0,90,180,270]) it = It(np.random.randn(10000,48,48),f) for a in it: print np.array(a).shape print np.array(a).reshape(100,48,48,1).shape break
notebooks/troubleshooting_and_sysadmin/Iterators with Multiprocessing.ipynb
Neuroglycerin/neukrill-net-work
mit
We now write the adjusted history back to a new history file and then calculate the updated gravity field:
his_changed.write_history('fold_thrust_changed.his') # %%timeit # recompute block model pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out') # %%timeit # recompute geophysical response pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out', sim_type = 'GEOPHYSICS') # load changed block model geo_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out') # load output and visualise geophysical field geophys_changed = pynoddy.output.NoddyGeophysics('fold_thrust_changed_out') fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(111) # imshow(geophys_changed.grv_data, cmap = 'jet') cf = ax.contourf(geophys_changed.grv_data, levels, cmap = 'gray', vmin = 324, vmax = 342) cbar = plt.colorbar(cf, orientation = 'horizontal') fig = plt.figure(figsize = (8,8)) ax = fig.add_subplot(111) # imshow(geophys.grv_data - geophys_changed.grv_data, cmap = 'jet') maxval = np.ceil(np.max(np.abs(geophys.grv_data - geophys_changed.grv_data))) # comp_levels = np.arange(-maxval,1.01 * maxval, 0.05 * maxval) cf = ax.contourf(geophys.grv_data - geophys_changed.grv_data, 20, cmap = 'spectral') #, comp_levels, cmap = 'RdBu_r') cbar = plt.colorbar(cf, orientation = 'horizontal') # compare sections through model geo_changed.plot_section('y', colorbar = False) h_out.plot_section('y', colorbar = False) for i in range(4): print("Event %d" % (i+2)) print his.events[i+2].properties['Slip'] print his.events[i+2].properties['Dip'] print his.events[i+2].properties['Dip Direction'] # recompute the geology blocks for comparison: pynoddy.compute_model('fold_thrust_changed.his', 'fold_thrust_changed_out') geology_changed = pynoddy.output.NoddyOutput('fold_thrust_changed_out') geology_changed.plot_section('x', # layer_labels = his.model_stratigraphy, colorbar_orientation = 'horizontal', colorbar=False, title = '', # savefig=True, fig_filename = 'fold_thrust_NS_section.eps', cmap = 'YlOrRd') geology_changed.plot_section('y', # layer_labels = his.model_stratigraphy, colorbar_orientation = 'horizontal', title = '', cmap = 'YlOrRd', # savefig=True, fig_filename = 'fold_thrust_EW_section.eps', ve=1.5) # Calculate block difference and export as VTK for 3-D visualisation: import copy diff_model = copy.deepcopy(geology_changed) diff_model.block -= h_out.block diff_model.export_to_vtk(vtk_filename = "diff_model_fold_thrust_belt")
docs/notebooks/Paper-Fig3-4-Read-Geophysics.ipynb
flohorovicic/pynoddy
gpl-2.0
This example is a mode choice model built using the Swissmetro example dataset. First we create the Dataset and Model objects:
raw_data = pd.read_csv(lx.example_file('swissmetro.csv.gz')).rename_axis(index='CASEID')
data = lx.Dataset.construct.from_idco(raw_data, alts={1:'Train', 2:'SM', 3:'Car'})
data
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
The swissmetro example models exclude some observations. We can use the Dataset.query_cases method to identify the observations we would like to keep.
m = lx.Model(data.dc.query_cases("PURPOSE in (1,3) and CHOICE != 0"))
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
We can attach a title to the model. The title does not affect the calculations at all; it is merely used in various output report styles.
m.title = "swissmetro example 02 (weighted logit)"
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
This model adds a weighting factor.
m.weight_co_var = "1.0*(GROUP==2)+1.2*(GROUP==3)"
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
The swissmetro dataset, as with all Biogeme data, is only in co format.
from larch.roles import P,X
m.utility_co[1] = P("ASC_TRAIN")
m.utility_co[2] = 0
m.utility_co[3] = P("ASC_CAR")
m.utility_co[1] += X("TRAIN_TT") * P("B_TIME")
m.utility_co[2] += X("SM_TT") * P("B_TIME")
m.utility_co[3] += X("CAR_TT") * P("B_TIME")
m.utility_co[1] += X("TRAIN_CO*(GA==0)") * P("B_COST")
m.utility_co[2] += X("SM_CO*(GA==0)") * P("B_COST")
m.utility_co[3] += X("CAR_CO") * P("B_COST")
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
Larch will find all the parameters in the model, but we'd like to output them in a rational order. We can use the ordering method to do this:
m.ordering = [
    ("ASCs", 'ASC.*',),
    ("LOS", 'B_.*',),
]

# TEST
from pytest import approx
assert m.loglike() == approx(-7892.111473285806)
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
We can estimate the models and check the results match up with those given by Biogeme:
m.set_cap(15) m.maximize_loglike(method='SLSQP') # TEST r = _ from pytest import approx assert r.loglike == approx(-5931.557677709527) m.calculate_parameter_covariance() m.parameter_summary() # TEST assert m.parameter_summary().data.to_markdown() == ''' | | Value | Std Err | t Stat | Signif | Null Value | |:----------------------|--------:|----------:|---------:|:---------|-------------:| | ('ASCs', 'ASC_CAR') | -0.114 | 0.0407 | -2.81 | ** | 0 | | ('ASCs', 'ASC_TRAIN') | -0.757 | 0.0528 | -14.32 | *** | 0 | | ('LOS', 'B_COST') | -0.0112 | 0.00049 | -22.83 | *** | 0 | | ('LOS', 'B_TIME') | -0.0132 | 0.000537 | -24.62 | *** | 0 | '''[1:-1]
book/example/102-swissmetro-weighted.ipynb
jpn--/larch
gpl-3.0
Exercise (1st midterm exam, 2018): Solve $\frac{dy}{dx}=\frac{x y^{4}}{3} - \frac{2 y}{3 x} + \frac{1}{3 x^{3} y^{2}}$. We will try the heuristic $$\xi=ax+cy+e$$ and $$\eta=bx+dy+f$$ to find the symmetries.
x,y,a,b,c,d,e,f=symbols('x,y,a,b,c,d,e,f',real=True)
# load the function
F=x*y**4/3-R(2,3)*y/x+R(1,3)/x**3/y**2
F
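As a hedged sketch of the next step (an assumption, not necessarily the notebook's exact code), the linear ansatz can be plugged into the linearized symmetry condition $\eta_x + (\eta_y-\xi_x)F - \xi_y F^2 = \xi F_x + \eta F_y$ and inspected with sympy; collecting the coefficients of the monomials in $x$ and $y$ then gives the linear system for $a,\dots,f$.

from sympy import simplify, expand

xi = a*x + c*y + e
eta = b*x + d*y + f
# linearized symmetry condition for y' = F(x, y)
cond = (eta.diff(x) + (eta.diff(y) - xi.diff(x))*F - xi.diff(y)*F**2
        - xi*F.diff(x) - eta*F.diff(y))
expand(simplify(cond))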
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Sympy does not integrate the logarithm well, so we set $s=\log|x|$ by hand.
s=log(abs(x))
r, s
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Let us substitute into the change-of-variables formula $$\frac{ds}{dr}=\left.\frac{s_x+s_y F}{r_x+r_y F}\right|_{x=e^s,\,y=r^{1/3}e^{-2s/3}}.$$
Ecua=( (s.diff(x)+s.diff(y)*F)/(r.diff(x)+r.diff(y)*F)).simplify()
r,s=symbols('r,s',real=True)
Ecua=Ecua.subs({x:exp(s),y:r**R(1,3)*exp(-R(2,3)*s)})
Ecua
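As a quick sanity check (not in the original notebook), the reduced equation should match $\frac{ds}{dr}=\frac{1}{1+r^2}$, so the following difference should simplify to zero:

simplify(Ecua - 1/(1 + r**2))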
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Let us solve $\frac{ds}{dr}=\frac{1}{1+r^2}$. The general solution is $\arctan(r)=s+C$. Expressing the equation in Cartesian coordinates gives $$\arctan(x^2y^3)=\log(|x|)+C.$$
C=symbols('C',real=True)
sol=Eq(atan(x**2*y**3),log(abs(x))+C)
solExpl=solve(sol,y)
solExpl

yg=solExpl[0]
yg

Q=simplify(eta-xi*F)
Q
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
There are no invariant solutions.
p=plot_implicit(sol.subs(C,0),(x,-5,5),(y,-10,10),show=False)
for k in range(-10,10):
    p.append(plot_implicit(sol.subs(C,k),(x,-5,5),(y,-10,10),show=False)[0])
p.show()
Teoria_Basica/scripts/GruposLie.ipynb
fdmazzone/Ecuaciones_Diferenciales
gpl-2.0
Supply user information
# Set the path for data file
flname = "/Users/guy/data/t28/jpole/T28_JPOLE2003_800.nc"
examples/T28_jpole_flight.ipynb
nguy/AWOT
gpl-2.0
- Set up some characteristics for plotting.
- Use the Cylindrical Equidistant Area map projection.
- Set the spacing of the barbs and the X-axis time step for labels.
- Set the start and end times for subsetting.
proj = 'cea'
Wbarb_Spacing = 300  # Spacing of wind barbs along flight path (sec)

# Choose the X-axis time step (in seconds) where major labels will be
XlabStride = 3600

# Should landmarks be plotted? [If yes, then modify the section below]
Lmarks = False

# Optional variables that can be included with AWOT
# Start and end times for track in Datetime instance format
#start_time = "2003-08-06 00:00:00"
#end_time = "2003-08-06 23:50:00"

corners = [-96., 34., -98., 36.,]
examples/T28_jpole_flight.ipynb
nguy/AWOT
gpl-2.0
Read in flight data<br> NOTE: At the time of writing, it is required that the time_var argument be provided for the read function to work properly. This may change in the future, but time variables are not standard, even among RAF Nimbus guidelines.
fl = awot.io.read_netcdf(fname=flname, platform='t28', time_var="Time")
fl.keys()
examples/T28_jpole_flight.ipynb
nguy/AWOT
gpl-2.0
Create the track figure for this flight. There appear to be some bad data values in lat/lon, so we mask them first.
print(fl['latitude']['data'].min(), fl['latitude']['data'].max()) fl['latitude']['data'][:] = np.ma.masked_equal(fl['latitude']['data'][:], 0.) fl['longitude']['data'][:] = np.ma.masked_equal(fl['longitude']['data'][:], 0.) print(fl['latitude']['data'].min(), fl['latitude']['data'].max()) print(fl['longitude']['data'].min(), fl['longitude']['data'].max()) print(fl['altitude']['data'].max()) fig, ax = plt.subplots(1, 1, figsize=(9, 9)) # Create the basemap bm = create_basemap(corners=corners, proj=proj, resolution='l', area_thresh=1.,ax=ax) bm.drawcounties() # Instantiate the Flight plotting routines flp = FlightLevel(fl, basemap=bm) flp.plot_trackmap( # start_time=start_time, end_time=end_time, color_by_altitude=True, track_cmap='spectral', min_altitude=50., max_altitude= 5000., addlegend=True, addtitle=False)
examples/T28_jpole_flight.ipynb
nguy/AWOT
gpl-2.0
Add a custom section with an evoked slider:
# Load the evoked data evoked = read_evokeds(evoked_fname, condition='Left Auditory', baseline=(None, 0), verbose=False) evoked.crop(0, .2) times = evoked.times[::4] # Create a list of figs for the slider figs = list() for t in times: figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100, show=False)) plt.close(figs[-1]) report.add_slider_to_section(figs, times, 'Evoked Response', image_format='png') # can also use 'svg' # to save report report.save('my_report.html', overwrite=True)
0.19/_downloads/81308ca6ca6807326a79661c989cfcba/plot_make_report.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
SQL statements are written in a way that resembles ordinary English sentences. This deliberate resemblance is meant to make them easier to learn and to read. It is nevertheless important to respect a specific order for the different clauses. In this tutorial we will write SQL commands via Python. For more details on SQL and the available commands, see SQL, PRINCIPES DE BASE. Connecting to a database Unlike the tables we usually work with, a database is not directly visible by opening Excel or a text editor. To get a view of what a database contains, another kind of software is needed. For this tutorial we recommend installing SQLiteSpy (available at the SqliteSpy address) or sqlite_bro if you want to see what the data looks like before using it with Python.
import sqlite3

# we connect to an empty SQL database
# SQLite stores the database in a single file
filepath = "./DataBase.db"
open(filepath, 'w').close()  # create an empty file
CreateDataBase = sqlite3.connect(filepath)
QueryCurs = CreateDataBase.cursor()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
The cursor method is a bit special: it acts as an intermediate memory buffer, meant to temporarily hold the data being processed, as well as the operations you perform on it, before their final transfer into the database. As long as the commit method has not been called, no statement is actually applied to the database. Now that we are connected to the database, we will create a table containing several variables of different types:
- ID will be the primary key of the table
- Name, City, Country will be text
- Price will be a real number
# Define a function that creates a table
def CreateTable(nom_bdd):
    QueryCurs.execute('''CREATE TABLE IF NOT EXISTS ''' + nom_bdd + '''
    (id INTEGER PRIMARY KEY, Name TEXT, City TEXT, Country TEXT, Price REAL)''')

# Define a function that adds a single observation to the table
def AddEntry(nom_bdd, Nom, Ville, Pays, Prix):
    QueryCurs.execute('''INSERT INTO ''' + nom_bdd + '''
    (Name, City, Country, Price) VALUES (?,?,?,?)''', (Nom, Ville, Pays, Prix))

# Define a function that adds several observations at once
def AddEntries(nom_bdd, data):
    """ data : list with (Name, City, Country, Price) tuples to insert """
    QueryCurs.executemany('''INSERT INTO ''' + nom_bdd + '''
    (Name, City, Country, Price) VALUES (?,?,?,?)''', data)

### Create the Clients table
CreateTable('Clients')

AddEntry('Clients', 'Toto', 'Munich', 'Germany', 5.2)
AddEntries('Clients',
           [('Bill', 'Berlin', 'Germany', 2.3),
            ('Tom', 'Paris', 'France', 7.8),
            ('Marvin', 'Miami', 'USA', 15.2),
            ('Anna', 'Paris', 'USA', 7.8)])

# "commit", i.e. validate the transaction:
# local changes are pushed to the central repository - the SQL database
CreateDataBase.commit()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
GROUP BY In pandas, the SQL GROUP BY operation is performed with a similar method: groupby. groupby is used to gather observations into groups according to the values of certain variables, applying an aggregation function to other variables.
QueryCurs.execute('SELECT Country, count(*) FROM Clients GROUP BY Country')
print(QueryCurs.fetchall())
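For comparison, a hedged sketch of the pandas counterpart of this GROUP BY, assuming the Clients table is first read into a DataFrame (the notebook works with such a DataFrame, df2, in the next cells):

import pandas as pd

df2 = pd.read_sql_query('SELECT * FROM Clients', CreateDataBase)
df2.groupby('Country').size()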
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
Careful: in pandas, the count function does not do the same thing as in SQL. count applies to every column and counts all non-null observations.
df2.groupby('Country').count()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
To achieve the same thing as in SQL, use the size method.
df2.groupby('Country').size()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
Or use lambda functions.
# for example, compute the average price and multiply it by 2
df2.groupby('Country')['Price'].apply(lambda x: 2*x.mean())

QueryCurs.execute('SELECT Country, 2*AVG(Price) FROM Clients GROUP BY Country').fetchall()

QueryCurs.execute('SELECT * FROM Clients WHERE Country == "Germany"')
print(QueryCurs.fetchall())

QueryCurs.execute('SELECT * FROM Clients WHERE City=="Berlin" AND Country == "Germany"')
print(QueryCurs.fetchall())

QueryCurs.execute('SELECT * FROM Clients WHERE Price BETWEEN 7 AND 20')
print(QueryCurs.fetchall())
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
We can also go through a pandas DataFrame and use .to_csv().
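A minimal sketch of that export, with a hypothetical file name, to be run before the table is dropped below:

import pandas as pd

pd.read_sql_query('SELECT * FROM Clients', CreateDataBase).to_csv('clients.csv', index=False)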
QueryCurs.execute('''DROP TABLE Clients''')
QueryCurs.close()
_unittests/ut_helpgen/notebooks2/td2a_eco_sql.ipynb
sdpython/pyquickhelper
mit
Most of the common options are:

Metropolis
- jump: the starting standard deviation of the proposal distribution
- tuning: the number of iterations over which to tune the scale of the proposal
- ar_low: the lower bound of the target acceptance rate range
- ar_hi: the upper bound of the target acceptance rate range
- adapt_step: a number (bigger than 1) that will be used to modify the jump in order to keep the acceptance rate between ar_low and ar_hi. Values much larger than 1 result in much more dramatic tuning.

Slice
- width: starting width of the level set
- adapt: number of previous slices used in the weighted average for the next slice. If 0, the width is not dynamically tuned.
example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, membership=membership, n_samples=500, configs=dict(tuning=250, adapt_step=1.01, debug=True, ar_low=.1, ar_hi=.4)) example.configs.Lambda.ar_hi, example.configs.Lambda.ar_low example_slicer = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, membership=membership, n_samples=500, configs=dict(Lambda_method='slice')) example_slicer.trace.plot(varnames='Lambda') plt.show() example_slicer.configs.Lambda.adapt, example_slicer.configs.Lambda.width
notebooks/model/spvcm/using_the_sampler.ipynb
weikang9009/pysal
bsd-3-clause
Preparation
# the larger the longer it takes, be sure to also adapt input layer size auf vgg network to this value INPUT_SHAPE = (64, 64) # INPUT_SHAPE = (128, 128) # INPUT_SHAPE = (256, 256) EPOCHS = 50 # Depends on harware GPU architecture, set as high as possible (this works well on K80) BATCH_SIZE = 100 !rm -rf ./tf_log # https://keras.io/callbacks/#tensorboard tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log') # To start tensorboard # tensorboard --logdir=./tf_log # open http://localhost:6006 !ls -lh import os import skimage.data import skimage.transform from keras.utils.np_utils import to_categorical import numpy as np def load_data(data_dir, type=".ppm"): num_categories = 6 # Get all subdirectories of data_dir. Each represents a label. directories = [d for d in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, d))] # Loop through the label directories and collect the data in # two lists, labels and images. labels = [] images = [] for d in directories: label_dir = os.path.join(data_dir, d) file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)] # For each label, load it's images and add them to the images list. # And add the label number (i.e. directory name) to the labels list. for f in file_names: images.append(skimage.data.imread(f)) labels.append(int(d)) images64 = [skimage.transform.resize(image, INPUT_SHAPE) for image in images] y = np.array(labels) y = to_categorical(y, num_categories) X = np.array(images64) return X, y # Load datasets. ROOT_PATH = "./" original_dir = os.path.join(ROOT_PATH, "speed-limit-signs") original_images, original_labels = load_data(original_dir, type=".ppm") X, y = original_images, original_labels
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit
First Step: Load VGG pretrained on imagenet and remove classifier Hope: Feature Extraction will also work well for Speed Limit Signs Imagenet Collection of labelled images from many categories http://image-net.org/ http://image-net.org/about-stats <table class="table-stats" style="width: 500px"> <tbody><tr> <td width="25%"><b>High level category</b></td> <td width="20%"><b># synset (subcategories)</b></td> <td width="30%"><b>Avg # images per synset</b></td> <td width="25%"><b>Total # images</b></td> </tr> <tr><td>amphibian</td><td>94</td><td>591</td><td>56K</td></tr> <tr><td>animal</td><td>3822</td><td>732</td><td>2799K</td></tr> <tr><td>appliance</td><td>51</td><td>1164</td><td>59K</td></tr> <tr><td>bird</td><td>856</td><td>949</td><td>812K</td></tr> <tr><td>covering</td><td>946</td><td>819</td><td>774K</td></tr> <tr><td>device</td><td>2385</td><td>675</td><td>1610K</td></tr> <tr><td>fabric</td><td>262</td><td>690</td><td>181K</td></tr> <tr><td>fish</td><td>566</td><td>494</td><td>280K</td></tr> <tr><td>flower</td><td>462</td><td>735</td><td>339K</td></tr> <tr><td>food</td><td>1495</td><td>670</td><td>1001K</td></tr> <tr><td>fruit</td><td>309</td><td>607</td><td>188K</td></tr> <tr><td>fungus</td><td>303</td><td>453</td><td>137K</td></tr> <tr><td>furniture</td><td>187</td><td>1043</td><td>195K</td></tr> <tr><td>geological formation</td><td>151</td><td>838</td><td>127K</td></tr> <tr><td>invertebrate</td><td>728</td><td>573</td><td>417K</td></tr> <tr><td>mammal</td><td>1138</td><td>821</td><td>934K</td></tr> <tr><td>musical instrument</td><td>157</td><td>891</td><td>140K</td></tr> <tr><td>plant</td><td>1666</td><td>600</td><td>999K</td></tr> <tr><td>reptile</td><td>268</td><td>707</td><td>190K</td></tr> <tr><td>sport</td><td>166</td><td>1207</td><td>200K</td></tr> <tr><td>structure</td><td>1239</td><td>763</td><td>946K</td></tr> <tr><td>tool</td><td>316</td><td>551</td><td>174K</td></tr> <tr><td>tree</td><td>993</td><td>568</td><td>564K</td></tr> <tr><td>utensil</td><td>86</td><td>912</td><td>78K</td></tr> <tr><td>vegetable</td><td>176</td><td>764</td><td>135K</td></tr> <tr><td>vehicle</td><td>481</td><td>778</td><td>374K</td></tr> <tr><td>person</td><td>2035</td><td>468</td><td>952K</td></tr> </tbody></table> Might be more suitable for cats and dogs, but is the best we have right now
from keras import applications

# applications.VGG16?
vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(64, 64, 3))
# vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(128, 128, 3))
# vgg_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(256, 256, 3))
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit
All convolutional blocks are kept fully trained; we just removed the classifier part.
vgg_model.summary()
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit
The next step is to push all our signs through the net just once and record the output of the bottleneck features. Don't get confused: this is not training yet; it just records the predictions so that we do not have to repeat this expensive step over and over again when we train the classifier later.
# will take a while, but not really long depending on size and number of input images
%time bottleneck_features_train = vgg_model.predict(X_train)
bottleneck_features_train.shape
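Assuming a held-out split X_test exists alongside X_train (it is not shown in this excerpt), the same one-off recording can be done for it, so the classifier can later be evaluated on features it has never seen:

# hypothetical: X_test from the notebook's train/test split
bottleneck_features_test = vgg_model.predict(X_test)
bottleneck_features_test.shape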
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit
What does this mean?
- 303 predictions for 303 images, or 3335 predictions for 3335 images when using the augmented data set
- 512 bottleneck features per prediction
- each bottleneck feature has a size of 2x2, more or less just a blob
- bottleneck features get larger when we increase the size of the input images (might be a good idea)
  - 4x4 when using 128x128 as input
  - 8x8 when using 256x256 as input
first_bottleneck_feature = bottleneck_features_train[0, :, :, 0]
first_bottleneck_feature
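A hedged sketch of the classifier that could be trained on these recorded features (an assumption, not necessarily the notebook's exact architecture); y_train is assumed to come from the notebook's train/test split, and the 6 output units match num_categories above.

from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout

top_model = Sequential()
top_model.add(Flatten(input_shape=bottleneck_features_train.shape[1:]))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(6, activation='softmax'))
top_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# top_model.fit(bottleneck_features_train, y_train,
#               epochs=EPOCHS, batch_size=BATCH_SIZE, callbacks=[tb_callback])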
notebooks/workshops/tss/cnn-imagenet-retrain.ipynb
DJCordhose/ai
mit