Model Let's consider the optimal growth model, \begin{align} &\max\int_{0}^{\infty}e^{-\rho t}u(c(t))dt \\ &\text{subject to} \\ &\qquad\dot{k}(t)=f(k(t))-\delta k(t)-c(t),\\ &\qquad k(0):\text{ given.} \end{align} We will assume the following specific functional forms when necessary: \begin{align} u(c) &= \frac{c^{1-\theta}}{1-\theta}, \quad \theta > 0, \\ f(k) &= A k^\alpha, \quad 0 < \alpha < 1, \quad A > 0. \end{align} By using the Hamiltonian method, we have obtained the first-order dynamics of the economy: \begin{align} \dot{c} &= \theta^{-1} c [f'(k) - \delta - \rho] & \text{(EE)} \\ \dot{k} &= f(k) - \delta k - c. & \text{(CA)} \end{align} (EE) is the Euler equation and (CA) the capital accumulation equation. Let's draw the phase diagram on your computer. $\dot c = 0$ locus (EE): $\dot c = 0$ (with $c > 0$) is equivalent to \begin{align} f'(k) = \delta + \rho. \end{align} Thus, the locus is a vertical line which goes through $(k^*, 0)$, where $k^*$ is the unique value that satisfies $f'(k^*) = \delta + \rho$. Under the assumption that $f(k) = Ak^\alpha$, \begin{align} k^* = \left(\frac{\delta + \rho}{A \alpha}\right)^\frac{1}{\alpha - 1}. \end{align} $\dot k = 0$ locus (CA): $\dot k = 0$ is equivalent to \begin{align} c = f(k) - \delta k. \end{align} Code for the loci
import numpy as np
import matplotlib.pyplot as plt

alpha = 0.3
delta = 0.05
rho = 0.1
theta = 1
A = 1

def f(x):
    return A * x**alpha

kgrid = np.linspace(0.0, 7.5, 300)

fig, ax = plt.subplots(1, 1)

# Locus obtained from (EE): a vertical line at k*
kstar = ((delta + rho) / (A * alpha)) ** (1 / (alpha - 1))
ax.axvline(kstar)
ax.text(kstar * 1.01, 0.1, r'$\dot c = 0$', fontsize=16)

# Locus obtained from (CA): c = f(k) - delta * k
ax.plot(kgrid, f(kgrid) - delta * kgrid)
ax.text(4, 1.06 * (f(4) - delta * 4), r'$\dot k = 0$', fontsize=16)

# axis labels
ax.set_xlabel('$k$', fontsize=16)
ax.set_ylabel('$c$', fontsize=16)
ax.set_ylim([0.0, 1.8 * np.max(f(kgrid) - delta * kgrid)])

plt.show()
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
What we want to do is to draw paths on this phase space. It is convenient to have a function that returns this kind of figure.
def phase_space(kmax, gridnum, yamp=1.8, colors=['black', 'black'], labels_on=False):
    kgrid = np.linspace(0.0, kmax, gridnum)

    fig, ax = plt.subplots(1, 1)

    # CA locus: c = f(k) - delta * k
    ax.plot(kgrid, f(kgrid) - delta * kgrid, color=colors[0])
    if labels_on:
        ax.text(4, f(4) - delta * 4, r'$\dot k = 0$', fontsize=16)

    # EE locus: vertical line at k*
    kstar = ((delta + rho) / (A * alpha)) ** (1 / (alpha - 1))
    ax.axvline(kstar, color=colors[1])
    if labels_on:
        ax.text(kstar * 1.01, 0.1, r'$\dot c = 0$', fontsize=16)

    # axis labels
    ax.set_xlabel('$k$', fontsize=16)
    ax.set_ylabel('$c$', fontsize=16)
    ax.set_ylim([0.0, yamp * np.max(f(kgrid) - delta * kgrid)])

    return fig, ax
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
You can draw the loci by calling the function as in the following.
fig, ax = phase_space(kmax=7, gridnum=300)
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
The dynamics Discretize \begin{align} \dot{c} &= \theta^{-1} c [f'(k) - \delta - \rho] & \text{(EE)} \\ \dot{k} &= f(k) - \delta k - c & \text{(CA)} \end{align} to get the discretized dynamic equations: \begin{align} c(t+\Delta t) &= c(t)\left\{1 + \theta^{-1} [f'(k(t)) - \delta - \rho] \Delta t\right\} & \text{(D-EE)} \\ k(t+\Delta t) &= k(t) + \left\{f(k(t)) - \delta k(t) - c(t)\right\} \Delta t. & \text{(D-CA)} \end{align}
dt = 0.001

def f_deriv(k):
    """derivative of f"""
    return A * alpha * k ** (alpha - 1)

def update(k, c):
    cnew = c * (1 + (f_deriv(k) - delta - rho) * dt / theta)  # D-EE
    knew = k + (f(k) - delta * k - c) * dt                    # D-CA
    return knew, cnew

k_initial, c_guess = 0.4, 0.2

# Find a first-order path from the initial condition k0 and guess of c0
k0, c0 = k_initial, c_guess
k, c = [k0], [c0]
for i in range(10000):
    knew, cnew = update(k[-1], c[-1])
    k.append(knew)
    c.append(cnew)

kgrid = np.linspace(0.0, 10., 300)
fig, ax = phase_space(10., 300)
ax.plot(k, c)
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
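As a sanity check on the hand-rolled Euler stepping above, the same vector field can be integrated with SciPy's general-purpose ODE solver and plotted on the same kind of phase diagram. This is a sketch that is not part of the original notebook; it assumes f, f_deriv, phase_space, and the parameters from the earlier cells are already in scope.

# Hypothetical cross-check of the explicit-Euler loop against scipy's solve_ivp
from scipy.integrate import solve_ivp

def rhs(t, y):
    k, c = y
    dc = c * (f_deriv(k) - delta - rho) / theta   # (EE)
    dk = f(k) - delta * k - c                     # (CA)
    return [dk, dc]

# Integrate over the same horizon as the 10000 steps of dt = 0.001 above
sol = solve_ivp(rhs, (0.0, 10.0), [0.4, 0.2], dense_output=True)

fig, ax = phase_space(10., 300)
ax.plot(sol.y[0], sol.y[1])  # should trace essentially the same blue path as the Euler loop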
The blue curve shows the dynamic path of the system of differential equations. The solution moves from left to right in this case. This path doesn't seem to satisfy the transversality condition, so it's not the optimal path. What we do next is to find the $c(0)$ whose path converges to the steady state. I will show you how to do this by "brute force": make many guesses about $c(0)$ and pick the one that works. We need a function that creates a path starting from $(k(0), c(0))$ and lets us verify whether or not it approaches the steady state.
def compute_path(k0, c_guess, steps, ax=None, output=True):
    """compute a path starting from (k0, c_guess) that satisfies EE and CA"""

    k, c = [k0], [c_guess]
    for i in range(steps):
        knew, cnew = update(k[-1], c[-1])
        # stop if the new values violate nonnegativity constraints
        if knew < 0:
            break
        if cnew < 0:
            break
        k.append(knew)
        c.append(cnew)

    # plot the path if ax is given
    if ax is not None:
        ax.plot(k, c)

    # You may want to suppress the output when you give ax.
    if output:
        return k, c
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
Typical usage:
k_init = 0.4
steps = 30000

fig, ax = phase_space(40, 3000)

for c_init in [0.1, 0.2, 0.3, 0.4, 0.5]:
    compute_path(k_init, c_init, steps, ax, output=False)
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
Let's find the optimal path. The following code makes a plot that relates a guess of $c(0)$ to the final $c(t)$ and $k(t)$ for large $t$.
k_init = 0.4
steps = 30000

# set of guesses about c(0)
c_guess = np.linspace(0.40, 0.50, 1000)

k_final = []
c_final = []
for c0 in c_guess:
    k, c = compute_path(k_init, c0, steps, output=True)

    # Final values
    k_final.append(k[-1])
    c_final.append(c[-1])

plt.plot(c_guess, k_final, label='lim k')
plt.plot(c_guess, c_final, label='lim c')
plt.legend()
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
As you can clearly see, there is a critical value around 0.41. To know the exact value of the threshold, execute the following code.
cdiff = [c1 - c0 for c0, c1 in zip(c_final[:-1], c_final[1:])]
c_optimal = c_guess[cdiff.index(max(cdiff))]
c_optimal

fig, ax = phase_space(7.5, 300)
compute_path(k_init, c_optimal, steps=15000, ax=ax, output=False)
doc/python/Optimal Growth (Euler).ipynb
kenjisato/intro-macro
mit
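The grid search above pins down the threshold only to within the spacing of c_guess. A standard refinement, sketched below under the same assumptions (update, compute_path, k_init, steps, and the parameters defined earlier are in scope), is to bisect on c(0), using the heuristic that a path which overshoots k* to the right started with too little consumption, while a path that turns back toward k = 0 started with too much. This is an illustrative alternative, not part of the original notebook.

# Hypothetical bisection refinement of the brute-force search for c(0)
kstar = ((delta + rho) / (A * alpha)) ** (1 / (alpha - 1))

lo, hi = 0.40, 0.50              # bracket suggested by the grid search above
for _ in range(50):
    c_mid = 0.5 * (lo + hi)
    k, c = compute_path(k_init, c_mid, steps, output=True)
    if k[-1] > kstar:            # drifted past k*: consumption guess too low
        lo = c_mid
    else:                        # capital collapsing: consumption guess too high
        hi = c_mid

c_refined = 0.5 * (lo + hi)
print(c_refined)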
Load capacity curves In order to use this methodology, it is necessary to provide a capacity curve (or a group of capacity curves), defined according to the format described in the RMTK manual. Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
capacity_curves_file = "../../../../../../rmtk_data/capacity_curves_Sa-Sd.csv" capacity_curves = utils.read_capacity_curves(capacity_curves_file) utils.plot_capacity_curves(capacity_curves)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Load ground motion records Please indicate the path to the folder containing the ground motion records to be used in the analysis through the parameter gmrs_folder. Note: Each accelerogram needs to be in a separate CSV file as described in the RMTK manual. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
gmrs_folder = "../../../../../../rmtk_data/accelerograms" gmrs = utils.read_gmrs(gmrs_folder) minT, maxT = 0.1, 2.0 utils.plot_response_spectra(gmrs, minT, maxT)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Load damage state thresholds Please provide the path to your damage model file using the parameter damage_model_file in the cell below. The damage types currently supported are: capacity curve dependent, spectral displacement and interstorey drift. If the damage model type is interstorey drift, the user can provide the pushover curve in terms of Vb-dfloor in order to convert interstorey drift limit states to roof displacements and spectral displacements; otherwise, a linear relationship is assumed.
damage_model_file = "../../../../../../rmtk_data/damage_model.csv" damage_model = utils.read_damage_model(damage_model_file)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Obtain the damage probability matrix
PDM, Sds = lin_miranda_2008.calculate_fragility(capacity_curves, gmrs, damage_model)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Fit lognormal CDF fragility curves The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above: 1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sd" and "Sa". 2. period: This parameter defines the time period of the fundamental mode of vibration of the structure. 3. damping_ratio: This parameter defines the damping ratio for the structure. 4. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
IMT = "Sd" period = 2.0 damping_ratio = 0.05 regression_method = "max likelihood" fragility_model = utils.calculate_mean_fragility(gmrs, PDM, period, damping_ratio, IMT, damage_model, regression_method)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Plot fragility functions The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above: * minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
minIML, maxIML = 0.01, 2.00 utils.plot_fragility_model(fragility_model, minIML, maxIML)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Save fragility functions The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above: 1. taxonomy: This parameter specifies a taxonomy string for the fragility functions. 2. minIML and maxIML: These parameters define the bounds of applicability of the functions. 3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
taxonomy = "RC" minIML, maxIML = 0.01, 2.00 output_type = "nrml" output_path = "../../../../../../rmtk_data/output/" utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Obtain vulnerability function A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level. The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions: 1. cons_model_file: This parameter specifies the path of the consequence model file. 2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated. 3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
cons_model_file = "../../../../../../rmtk_data/cons_model.csv" imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00] distribution_type = "lognormal" cons_model = utils.read_consequence_model(cons_model_file) vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model, imls, distribution_type)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
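To make the arithmetic described above concrete, here is a toy illustration in plain NumPy (made-up numbers, not the RMTK API) of how one row of the damage probability matrix combines with consequence-model damage ratios to give a mean loss ratio for a single intensity measure level.

import numpy as np

# Hypothetical fractions of buildings in each damage state at one IML
# (none, slight, moderate, extensive, complete) -- made-up numbers
damage_fractions = np.array([0.50, 0.25, 0.15, 0.07, 0.03])

# Hypothetical damage (loss) ratio associated with each damage state
damage_ratios = np.array([0.00, 0.10, 0.30, 0.60, 1.00])

# Mean loss ratio at this intensity measure level
mean_loss_ratio = damage_fractions @ damage_ratios
print(mean_loss_ratio)  # 0.142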
Plot vulnerability function
utils.plot_vulnerability_model(vulnerability_model)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
Save vulnerability function The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the vulnerability function obtained above: 1. taxonomy: This parameter specifies a taxonomy string for the vulnerability function. 2. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
taxonomy = "RC" output_type = "nrml" output_path = "../../../../../../rmtk_data/output/" utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
rmtk/vulnerability/derivation_fragility/equivalent_linearization/lin_miranda_2008/lin_miranda_2008.ipynb
ccasotto/rmtk
agpl-3.0
<h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows. Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
%%bigquery SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` LIMIT 10
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this. We will also store the BigQuery result in a Pandas dataframe named "trips"
%%bigquery trips SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 print(len(trips)) # We can slice Pandas dataframes as if they were arrays trips[:10]
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
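The point of hashing the pickup time rather than simply raising the LIMIT is that the sample is repeatable: the same rows come back on every run. Here is a small Python sketch of the same idea, using hashlib's MD5 rather than BigQuery's FARM_FINGERPRINT, so the rows it selects will differ from the query above; it is only an illustration of deterministic sampling.

import hashlib

def keep_row(pickup_datetime, one_in=100000):
    """Deterministically keep ~1 out of every `one_in` rows, keyed on a hash of the timestamp."""
    digest = hashlib.md5(pickup_datetime.encode("utf-8")).hexdigest()
    return int(digest, 16) % one_in == 1

# The same timestamps are always kept or dropped, run after run.
print(keep_row("2014-03-01 08:15:00 UTC"))
print(keep_row("2014-03-01 08:15:00 UTC"))  # identical result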
<h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
ax = sns.regplot( x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips, ) ax.figure.set_size_inches(10, 8)
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Note the extra WHERE clauses.
%%bigquery trips SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 AND trip_distance > 0 AND fare_amount >= 2.5 print(len(trips)) ax = sns.regplot( x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips, ) ax.figure.set_size_inches(10, 8)
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable. Let's also examine whether the toll amount is captured in the total amount.
tollrides = trips[trips["tolls_amount"] > 0] tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"] notollrides = trips[trips["tolls_amount"] == 0] notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"]
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool. Let's also look at the distribution of values within the columns.
trips.describe()
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Hmm ... The min, max of longitude look strange. Finally, let's actually look at the start and end of a few of the trips.
def showrides(df, numlines): lats = [] lons = [] for iter, row in df[:numlines].iterrows(): lons.append(row["pickup_longitude"]) lons.append(row["dropoff_longitude"]) lons.append(None) lats.append(row["pickup_latitude"]) lats.append(row["dropoff_latitude"]) lats.append(None) sns.set_style("darkgrid") plt.figure(figsize=(10, 8)) plt.plot(lons, lats) showrides(notollrides, 10) showrides(tollrides, 10)
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
As you'd expect, rides that involve a toll are longer than the typical ride. <h3> Quality control and other preprocessing </h3> We need to do some clean-up of the data: <ol> <li>New York city longitudes are around -74 and latitudes are around 41.</li> <li>We shouldn't have zero passengers.</li> <li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li> <li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li> <li>Discard the timestamp</li> </ol> We could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data. This sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.
def preprocess(trips_in): trips = trips_in.copy(deep=True) trips.fare_amount = trips.fare_amount + trips.tolls_amount del trips["tolls_amount"] del trips["total_amount"] del trips["trip_distance"] # we won't know this in advance! qc = np.all( [ trips["pickup_longitude"] > -78, trips["pickup_longitude"] < -70, trips["dropoff_longitude"] > -78, trips["dropoff_longitude"] < -70, trips["pickup_latitude"] > 37, trips["pickup_latitude"] < 45, trips["dropoff_latitude"] > 37, trips["dropoff_latitude"] < 45, trips["passenger_count"] > 0, ], axis=0, ) return trips[qc] tripsqc = preprocess(trips) tripsqc.describe()
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable. Let's move on to creating the ML datasets. <h3> Create ML datasets </h3> Let's split the QCed data randomly into training, validation and test sets. Note that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale.
shuffled = tripsqc.sample(frac=1) trainsize = int(len(shuffled["fare_amount"]) * 0.70) validsize = int(len(shuffled["fare_amount"]) * 0.15) df_train = shuffled.iloc[:trainsize, :] df_valid = shuffled.iloc[trainsize : (trainsize + validsize), :] # noqa: E203 df_test = shuffled.iloc[(trainsize + validsize) :, :] # noqa: E203 df_train.head(n=1) df_train.describe() df_valid.describe() df_test.describe()
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
def to_csv(df, filename): outdf = df.copy(deep=False) outdf.loc[:, "key"] = np.arange(0, len(outdf)) # rownumber as key # Reorder columns so that target is first column cols = outdf.columns.tolist() cols.remove("fare_amount") cols.insert(0, "fare_amount") print(cols) # new order of columns outdf = outdf[cols] outdf.to_csv(filename, header=False, index_label=False, index=False) to_csv(df_train, "taxi-train.csv") to_csv(df_valid, "taxi-valid.csv") to_csv(df_test, "taxi-test.csv") !head -10 taxi-valid.csv
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
<h3> Verify that datasets exist </h3>
!ls -l *.csv
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
We have 3 .csv files corresponding to train, valid, test. The ratio of the file sizes corresponds to our split of the data.
%%bash head taxi-train.csv
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them. <h3> Benchmark </h3> Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark. My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
def distance_between(lat1, lon1, lat2, lon2):
    # Great-circle distance "as the crow flies" (spherical law of cosines).
    lat1_r = np.radians(lat1)
    lat2_r = np.radians(lat2)
    lon_diff_r = np.radians(lon2 - lon1)
    sin_prod = np.sin(lat1_r) * np.sin(lat2_r)
    cos_prod = np.cos(lat1_r) * np.cos(lat2_r) * np.cos(lon_diff_r)
    minimum = np.minimum(1, sin_prod + cos_prod)
    dist = np.degrees(np.arccos(minimum)) * 60 * 1.515 * 1.609344
    return dist


def estimate_distance(df):
    return distance_between(
        df["pickuplat"], df["pickuplon"], df["dropofflat"], df["dropofflon"]
    )


def compute_rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))


def print_rmse(df, rate, name):
    print(
        "{1} RMSE = {0}".format(
            compute_rmse(df["fare_amount"], rate * estimate_distance(df)), name
        )
    )


FEATURES = ["pickuplon", "pickuplat", "dropofflon", "dropofflat", "passengers"]
TARGET = "fare_amount"

# In the CSV files, the target is the first column, followed by pickup_datetime,
# the features, and the key.
columns = list([TARGET])
columns.append("pickup_datetime")
columns.extend(FEATURES)
columns.append("key")

df_train = pd.read_csv("taxi-train.csv", header=None, names=columns)
df_valid = pd.read_csv("taxi-valid.csv", header=None, names=columns)
df_test = pd.read_csv("taxi-test.csv", header=None, names=columns)

rate = df_train["fare_amount"].mean() / estimate_distance(df_train).mean()
print(f"Rate = ${rate}/km")
print_rmse(df_train, rate, "Train")
print_rmse(df_valid, rate, "Valid")
print_rmse(df_test, rate, "Test")
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
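For comparison, here is the textbook haversine form of the great-circle distance in kilometers, written as a sketch you could swap in for distance_between above. It assumes np (NumPy) is already imported as in the cells above; note that it will not necessarily reproduce the numbers from distance_between, since that function applies its own unit-conversion constants, and neither accounts for actual street routing.

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance in kilometers (vectorized with NumPy)."""
    lat1_r, lat2_r = np.radians(lat1), np.radians(lat2)
    dlat = np.radians(lat2 - lat1)
    dlon = np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1_r) * np.cos(lat2_r) * np.sin(dlon / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# Example: roughly 14 km between two points in New York City
print(haversine_km(40.70, -74.00, 40.80, -73.90))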
<h2>Benchmark on same dataset</h2> The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:
validation_query = """ SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_datetime, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, "unused" AS key FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2 AND trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 """ client = bigquery.Client() df_valid = client.query(validation_query).to_dataframe() print_rmse(df_valid, 2.59988, "Final Validation Set")
notebooks/launching_into_ml/solutions/1_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Relaxation stage Firstly, all required modules are imported.
import sys sys.path.append('../') from sim import Sim from atlases import BoxAtlas from meshes import RectangularMesh from energies.exchange import UniformExchange from energies.demag import Demag from energies.zeeman import FixedZeeman
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
Now, the simulation object can be created and exchange, demagnetisation, and Zeeman energies are added.
# Create a BoxAtlas object. atlas = BoxAtlas(cmin, cmax) # Create a mesh object. mesh = RectangularMesh(atlas, d) # Create a simulation object. sim = Sim(mesh, Ms, name='fmr_standard_problem') # Add exchange energy. sim.add(UniformExchange(A)) # Add demagnetisation energy. sim.add(Demag()) # Add Zeeman energy. sim.add(FixedZeeman(H))
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
At this point, the system is initialised in the out-of-plane direction. As an example, we use a Python function. The same initialisation can also be achieved using a tuple, list, or array object.
# Python function for initialising the system's magnetisation. def m_init(pos): return (0, 0, 1) # Initialise the magnetisation. sim.set_m(m_init) # The same initialisation can be achieved using: # sim.set_m((0, 0, 1)) # sim.set_m([0, 0, 1]) # sim.set_m(np.array([0, 0, 1]))
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
Finally, the system is relaxed for $5 \,\text{ns}$.
sim.run_until(5e-9)
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
We can now load the relaxed state into the Field object and plot a $z$ slice of the magnetisation.
%matplotlib inline sim.m.plot_slice('z', 5e-9)
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
Dynamic stage In the dynamic stage, we use the relaxed state from the relaxation stage.
# Change external magnetic field. H = 8e4 * np.array([0.81923192051904048, 0.57346234436332832, 0.0]) sim.set_H(H)
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
In this stage, the Gilbert damping is reduced.
sim.alpha = 0.008
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
Finally, we run the multiple stage simulation.
total_time = 20e-9 stages = 4000 sim.run_until(total_time, stages)
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
Postprocessing From the obtained vector field samples, we can compute the average of the magnetisation $y$ component and plot its time evolution.
import glob import matplotlib.pyplot as plt from field import load_oommf_file # Compute the <my> t_list = [] myav = [] for i in range(stages): omf_filename = glob.glob('fmr_standard_problem/fmr_standard_problem-Oxs_TimeDriver-Spin-%09d-*.omf' % i)[0] m_field = load_oommf_file(omf_filename) t_list.append(i*total_time/stages) myav.append(m_field.average()[1]) t_array = np.array(t_list) myav = np.array(myav) # Plot <my> time evolution. plt.plot(t_array/1e-9, myav) plt.xlabel('t (ns)') plt.ylabel('my average') plt.grid()
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
From the $\langle m_{y} \rangle$ time evolution, we can compute and plot its Fourier transform.
import scipy.fftpack

psd = np.log10(np.abs(scipy.fftpack.fft(myav))**2)
f_axis = scipy.fftpack.fftfreq(stages, d=total_time/stages)

plt.plot(f_axis/1e9, psd)
plt.xlim([0, 12])
plt.ylim([-4.5, 2])
plt.xlabel('f (GHz)')
plt.ylabel('PSD (a.u.)')
plt.grid()
new/notebooks/fmr_standard_problem.ipynb
fangohr/oommf-python
bsd-2-clause
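A natural follow-up, not in the original notebook, is to read off the resonance frequency as the location of the largest peak in the positive-frequency half of the spectrum. A minimal sketch, assuming psd, f_axis, and np from the cells above are in scope:

# Consider only positive frequencies (fftfreq also returns negative frequencies).
mask = f_axis > 0
peak_index = np.argmax(psd[mask])
f_res = f_axis[mask][peak_index]
print('Resonance frequency: {:.2f} GHz'.format(f_res / 1e9))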
<hr> Over-abbreviated names<a name="abbr"></a> Since most of the data is uploaded manually, there are a lot of abbreviations in street names and locality names. These are filtered and replaced with their full forms.
#the city below can be hoodi or bunkyo for st_type, ways in city_types.iteritems(): for name in ways: better_name = update_name(name, mapping) if name != better_name: print name, "=>", better_name #few examples Bunkyo: Meidai Jr. High Sch. => Meidai Junior High School St. Mary's Cathedral => Saint Mary's Cathedral Shinryukei brdg. E. => Shinryukei Bridge East Iidabashi Sta. E. => Iidabashi Station East ... Hoodi: St. Thomas School => Saint Thomas School Opp. Jagrithi Apartment => Opposite Jagrithi Apartment ...
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
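The cell above relies on an update_name helper and a mapping dictionary that are defined elsewhere in the notebook. Here is a minimal sketch of what they might look like; the abbreviations listed are an illustrative subset, not the notebook's full mapping.

import re

# Hypothetical abbreviation -> full-word mapping (illustrative subset)
mapping = {
    "St.": "Saint",
    "Sch.": "School",
    "Sta.": "Station",
    "Opp.": "Opposite",
    "E.": "East",
}

def update_name(name, mapping):
    """Replace known abbreviations in a name with their full forms."""
    for abbr, full in mapping.items():
        name = re.sub(r'\b' + re.escape(abbr), full, name)
    return name

print(update_name("Iidabashi Sta. E.", mapping))  # Iidabashi Station East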
<hr> Merging both cities<a name="combine_cities"></a> These two maps were selected because I currently live in Hoodi, Bengaluru, and one day I want to do my master's in robotics in Japan, so I chose the locality around the University of Tokyo in Bunkyo. I really wanted to explore the differences between the two regions. I add a tag named "city" to each document so I can differentiate them in the database. <hr> 2. Data Overview<a name="data_overview"></a> This section contains basic statistics about the dataset and the MongoDB queries used to gather them. File sizes
bangalore.osm - 40 MB
bangalore.osm.json - 51 MB
tokyo1.osm - 82 MB
tokyo1.osm.json - 102.351 MB
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Number of documents
print "Bunkyo:",mongo_db.cities.find({'city':'bunkyo'}).count() print "Hoodi:",mongo_db.cities.find({'city':'hoodi'}).count()
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Bunkyo: 1268292 Hoodi: 667842 Number of node documents.
print "Bunkyo:",mongo_db.cities.find({"type":"node", 'city':'bunkyo'}).count() print "Hoodi:",mongo_db.cities.find({"type":"node", 'city':'hoodi'}).count() Bunkyo: 1051170 Hoodi: 548862
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Number of way documents.
print "Bunkyo:",mongo_db.cities.find({'type':'way', 'city':'bunkyo'}).count() print "Hoodi:",mongo_db.cities.find({'type':'way', 'city':'hoodi'}).count() Bunkyo: 217122 Hoodi: 118980
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Total number of contributors.
print "Constributors:", len(mongo_db.cities.distinct("created.user")) Contributors: 858
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
<hr> 3. Additional Data Exploration using MongoDB<a name="exploration"></a> I am going to use the pipeline function to retrieve data from the database.
def pipeline(city): p= [{"$match":{"created.user":{"$exists":1}, "city":city}}, {"$group": {"_id": {"City":"$city", "User":"$created.user"}, "contribution": {"$sum": 1}}}, {"$project": {'_id':0, "City":"$_id.City", "User_Name":"$_id.User", "Total_contribution":"$contribution"}}, {"$sort": {"Total_contribution": -1}}, {"$limit" : 5 }] return p result1 =mongo_db["cities"].aggregate(pipeline('bunkyo')) for each in result1: print(each) print("\n") result2 =mongo_db["cities"].aggregate(pipeline('hoodi')) for each in result2: print(each)
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
The top contributors for Hoodi are nowhere near those for Bunkyo; since Bunkyo is a more compact region than Hoodi, there are more places to contribute. <hr> To get the top amenities in Hoodi and Bunkyo, I will show the pipeline stages that go into the pipeline() function mentioned above.
pipeline=[{"$match":{"Additional Information.amenity":{"$exists":1}, "city":city}}, {"$group": {"_id": {"City":"$city", "Amenity":"$Additional Information.amenity"}, "count": {"$sum": 1}}}, {"$project": {'_id':0, "City":"$_id.City", "Amenity":"$_id.Amenity", "Count":"$count"}}, {"$sort": {"Count": -1}}, {"$limit" : 10 }]
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Compared to Hoodi, Bunkyo has few ATMs, and parking is commonly found in the Bunkyo locality. <hr> Popular places of worship
p = [{"$match":{"Additional Information.amenity":{"$exists":1}, "Additional Information.amenity":"place_of_worship", "city":city}}, {"$group":{"_id": {"City":"$city", "Religion":"$Additional Information.religion"}, "count":{"$sum":1}}}, {"$project":{"_id":0, "City":"$_id.City", "Religion":"$_id.Religion", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":6}]
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
As expected, Buddhism is the most common religion in Japan, but India, being a secular country, has places of worship from most religions, with Hinduism in the majority. <hr> Popular restaurants
p = [{"$match":{"Additional Information.amenity":{"$exists":1}, "Additional Information.amenity":"restaurant", "city":city}}, {"$group":{"_id":{"City":"$city", "Food":"$Additional Information.cuisine"}, "count":{"$sum":1}}}, {"$project":{"_id":0, "City":"$_id.City", "Food":"$_id.Food", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":6}]
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
{u'Count': 582, u'City': u'bunkyo'}
{u'Food': u'japanese', u'City': u'bunkyo', u'Count': 192}
{u'Food': u'chinese', u'City': u'bunkyo', u'Count': 126}
{u'Food': u'italian', u'City': u'bunkyo', u'Count': 69}
{u'Food': u'indian', u'City': u'bunkyo', u'Count': 63}
{u'Food': u'sushi', u'City': u'bunkyo', u'Count': 63}

{u'Count': 213, u'City': u'hoodi'}
{u'Food': u'regional', u'City': u'hoodi', u'Count': 75}
{u'Food': u'indian', u'City': u'hoodi', u'Count': 69}
{u'Food': u'chinese', u'City': u'hoodi', u'Count': 36}
{u'Food': u'international', u'City': u'hoodi', u'Count': 24}
{u'Food': u'Andhra', u'City': u'hoodi', u'Count': 21}

Indian-style cuisine seems popular in Bunkyo, which will be convenient if I go to Japan for my higher studies. <hr> Popular fast food joints
p = [{"$match":{"Additional Information.amenity":{"$exists":1}, "Additional Information.amenity":"fast_food", "city":city}}, {"$group":{"_id":{"City":"$city", "Food":"$Additional Information.cuisine"}, "count":{"$sum":1}}}, {"$project":{"_id":0, "City":"$_id.City", "Food":"$_id.Food", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":6}]
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Burgers seem very popular among the Japanese for fast food; I was expecting ramen to be more popular. In Hoodi, part of a metropolitan city, pizza is really common. <hr> ATMs near the locality
p = [{"$match":{"Additional Information.amenity":{"$exists":1}, "Additional Information.amenity":"atm", "city":city}}, {"$group":{"_id":{"City":"$city", "Name":"$Additional Information.name:en"}, "count":{"$sum":1}}}, {"$project":{"_id":0, "City":"$_id.City", "Name":"$_id.Name", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":4}]
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
There are quite a few ATMs in Bunkyo compared to Hoodi. <hr> Martial arts or dojo centers near the locality
## Martial arts or Dojo Center near locality import re pat = re.compile(r'dojo', re.I) d=mongo_db.cities.aggregate([{"$match":{ "$or": [ { "Additional Information.name": {'$regex': pat}} ,{"Additional Information.amenity": {'$regex': pat}}]}} ,{"$group":{"_id":{"City":"$city" , "Sport":"$Additional Information.name"}}}]) for each in d: print(each) bunkyo: {u'_id': {u'City': u'bunkyo', u'Sport': u'Aikikai Hombu Dojo'}} {u'_id': {u'City': u'bunkyo', u'Sport': u'Kodokan Dojo'}} hoodi: {u'_id': {u'City': u'hoodi', u'Sport': u"M S Gurukkal's Kalari Academy"}}
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
I want to learn martial arts. Japan is known for aikido and other martial arts such as ninjutsu, and I can find some dojos in Bunkyo. In Hoodi, India, there is Kalaripayattu, one of the most ancient martial arts in existence. <hr> Most popular shops
p = [{"$match":{"Additional Information.shop":{"$exists":1}, "city":city}}, {"$group":{"_id":{"City":"$city", "Shop":"$Additional Information.shop"}, "count":{"$sum":1}}}, {"$project": {'_id':0, "City":"$_id.City", "Shop":"$_id.Shop", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":10}] {u'Shop': u'convenience', u'City': u'bunkyo', u'Count': 1035} {u'Shop': u'clothes', u'City': u'bunkyo', u'Count': 282} {u'Shop': u'books', u'City': u'bunkyo', u'Count': 225} {u'Shop': u'mobile_phone', u'City': u'bunkyo', u'Count': 186} {u'Shop': u'confectionery', u'City': u'bunkyo', u'Count': 156} {u'Shop': u'supermarket', u'City': u'bunkyo', u'Count': 150} {u'Shop': u'computer', u'City': u'bunkyo', u'Count': 126} {u'Shop': u'hairdresser', u'City': u'bunkyo', u'Count': 90} {u'Shop': u'electronics', u'City': u'bunkyo', u'Count': 90} {u'Shop': u'anime', u'City': u'bunkyo', u'Count': 90} {u'Shop': u'clothes', u'City': u'hoodi', u'Count': 342} {u'Shop': u'supermarket', u'City': u'hoodi', u'Count': 129} {u'Shop': u'bakery', u'City': u'hoodi', u'Count': 120} {u'Shop': u'shoes', u'City': u'hoodi', u'Count': 72} {u'Shop': u'furniture', u'City': u'hoodi', u'Count': 72} {u'Shop': u'sports', u'City': u'hoodi', u'Count': 66} {u'Shop': u'electronics', u'City': u'hoodi', u'Count': 60} {u'Shop': u'beauty', u'City': u'hoodi', u'Count': 54} {u'Shop': u'car', u'City': u'hoodi', u'Count': 36} {u'Shop': u'convenience', u'City': u'hoodi', u'Count': 36} The general stores are quite common in both the places
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
most popular supermarkets
p = [{"$match":{"Additional Information.shop":{"$exists":1}, "city":city, "Additional Information.shop":"supermarket"}}, {"$group":{"_id":{"City":"$city", "Supermarket":"$Additional Information.name"}, "count":{"$sum":1}}}, {"$project": {'_id':0, "City":"$_id.City", "Supermarket":"$_id.Supermarket", "Count":"$count"}}, {"$sort":{"Count":-1}}, {"$limit":5}] {u'Count': 120, u'City': u'bunkyo'} {u'Count': 9, u'City': u'bunkyo', u'Supermarket': u'Maruetsu'} {u'Count': 3, u'City': u'bunkyo', u'Supermarket': u"Y's Mart"} {u'Count': 3, u'City': u'bunkyo', u'Supermarket': u'SainE'} {u'Count': 3, u'City': u'bunkyo', u'Supermarket': u'DAIMARU Peacock'} {u'Count': 9, u'City': u'hoodi', u'Supermarket': u'Reliance Fresh'} {u'Count': 9, u'City': u'hoodi'} {u'Count': 6, u'City': u'hoodi', u'Supermarket': u"Nilgiri's"} {u'Count': 3, u'City': u'hoodi', u'Supermarket': u'Royal Mart Supermarket'} {u'Count': 3, u'City': u'hoodi', u'Supermarket': u'Safal'}
P3 wrangle_data/DataWrangling_ganga.ipynb
gangadhara691/gangadhara691.github.io
mit
Uploaded RH and temp data into Python First I upload my data set(s). I am working with environmental data from different locations in the church on different dates. The files include environmental characteristics: CO2, temperature (deg C), and relative humidity (RH, %) measurements. I can discard the CO2_2 column values since they are false measurements logged from an empty input jack on the CO2 HOBOware(R) device.
#I import a temp and RH data file env=pd.read_table('../Data/CO2May.csv', sep=',') #assigning columns names env.columns=[['test', 'time','temp C', 'RH %', 'CO2_1', 'CO2_2']] #I display my dataframe env #change data time variable to actual values of time. env['time']= pd.to_datetime(env['time']) #print the new table and the type of data. print(env) env.dtypes
organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb
taliamo/Final_Project
mit
Next 1. Create a function for expected pitch (frequency of sound waves) from CO2 data 2. Add expected_frequency to the dataframe Calculated pitch from CO2 levels Here I use Cramer's (1992) equation for the frequency of sound as a function of CO2 concentration, freq = a0 + a1(T) + ... + (a9 + ...)(xc) + ... + a14(xc^2), where xc is the mole fraction of CO2 and T is temperature. The full derivation of these equations can be found in the "Doc" directory. I will later plot measured pitch (frequency) data points from my "pitch" data frame on top of these calculated frequency values for comparison.
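For reference, the truncated form that the cramer() function in the next cell actually evaluates (only the CO2-dependent terms are kept, with the coefficient values a0, a9, and a14 defined there) can be written as \begin{align} f_{\text{calc}} = a_0 + \frac{a_9 \, x_c}{100} + a_{14}\left(\frac{x_c}{10^{6}}\right)^{2}, \end{align} where $x_c$ is the CO2 concentration in ppm. This restates the notebook's own code rather than the full Cramer equation.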
#Here I am trying to create a function for the above equation. #I want to plug in each CO2_ave value for a time stamp (row) from the "env" data frame above. #define coefficients (Cramer, 1992) a0 = 331.5024 #a1 = 0.603055 #a2 = -0.000528 a9 = -(-85.20931) #need to account for negative values #a10 = -0.228525 a14 = 29.179762 #xc = CO2 values from dataframe #test function def test_cramer(): assert a0 + ((a9)*400)/100 + a14*((400/1000000)**2) == 672.33964466, 'Equation failure' return() test_cramer() #This function also converts ppm to mole fraction (just quantity as a proportion of total) def cramer(data): '''Calculate pitch from CO2_1 concentration''' calc_freq = a0 + ((a9)*data)/100 + a14*((data/1000000)**2) return(calc_freq) #run the cramer values for the calculated frequency #calc_freq = cramer(env['calc_freq']) #define the new column as the output of the cramer function #env['calc_freq'] = calc_freq #Run the function for the input column (CO2 values) env['calc_freq'] = cramer(env['CO2_1']) cramer(env['CO2_1']) #check the dataframe #calculated frequency values seem reasonable based on changes in CO2 env #Now I call in my measured pitch data, #to be able to visually compare calculated and measured #Import the measured pitch values--the output of pitch_data.py script measured_freq = pd.read_table('../Data/pitches.csv', sep=',') #change data time variable to actual values of time. env['time']= pd.to_datetime(env['time']) #I test to make sure I'm importing the correct data measured_freq
organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb
taliamo/Final_Project
mit
Visualizing the expected pitch values by time 1. Plot calculated frequency, CO2 (ppm), and measured frequency values
print(calc_freq) #define variables from dataframe columns CO2_1 = env[['CO2_1']] calc_freq=env[['calc_freq']] #measured_pitch = output_from_'pitch_data.py' #want to set x-axis as date_time #how do I format the ax2 y axis scale def make_plot(variable_1, variable_2): '''Make a three variable plot with two axes''' #plot title plt.title('CO2 and Calculated Pitch', fontsize='14') #twinx layering ax1=plt.subplot() ax2=ax1.twinx() #ax3=ax1.twinx() #call data for the plot ax1.plot(CO2_1, color='g', linewidth=1) ax2.plot(calc_freq, color= 'm', linewidth=1) #ax3.plot(measured_freq, color = 'b', marker= 'x') #axis labeling ax1.yaxis.set_tick_params(labelcolor='grey') ax1.set_xlabel('Sample Number') ax1.set_ylabel('CO2 (ppm)', fontsize=12, color = 'g') ax2.set_ylabel('Calculated Pitch (Hz)', fontsize=12, color='m') #ax3.set_ylabel('Measured Pitch') #axis limits ax1.set_ylim([400,1300]) ax2.set_ylim([600, 1500]) #plt.savefig('../Figures/fig1.pdf') #Close function return()#'../Figures/fig1.pdf') #Call my function to test it make_plot(CO2_1, calc_freq) measured_freq.head() env.head() Freq vs. CO2 plt.plot(env.CO2_1, measured_freq.time, color='g', linewidth=1) #def make_fig(datasets, variable_1, variable_2, savename): #twinx layering ax1=plt.subplot() ax2=ax1.twinx() #plot 2 variables in predertermined plot above ax1.plot(dataset.index, variable_1, 'k-', linewidth=2) ax2.plot(dataset.index, variable_2, ) #moving plots lines variable_2_spine=ax2.spines['right'] variable_2_spine.set_position(('axes', 1.2)) ax1.yaxi.set_tick_params(labelcolor='k') ax1.set_ylabel(variable_1.name, fontsize=13, colour = 'k') ax2.sey_ylabel(variable_2.name + '($^o$C)', fontsize=13, color='grey') #plt.savefig(savename) return(savename) fig = plt.figure(figsize=(11,14)) plt.suptitle('') ax1.plot(colum1, colum2, 'k-', linewidth=2) " " ax1.set_ylim([0,1]) ax2.set_ylim([0,1]) ax1.set_xlabel('name', fontsize=14, y=0) ax1.set_ylabel ax2.set_ylabel #convert 'object' (CO2_1) to float new = pd.Series([env.CO2_1], name = 'CO2_1') CO2_1 = new.tolist() CO2_array = np.array(CO2_1) #Test type of data in "CO2_1" column env.CO2_1.dtypes #How can I format it so it's not an object? cramer(CO2_array) #'float' object not callable--the data in "CO2_1" are objects and cannot be called into the equation #cramer(env.CO2_ave) env.dtypes env.CO2_1.dtypes new = pd.Series([env.CO2_1], name = 'CO2_1') CO2_1 = new.tolist() CO2_array = np.array(CO2_1) #Test type of data in "CO2_1" column env.CO2_1.dtypes cramer(CO2_array) type(CO2_array) # To choose which CO2 value to use, I first visualize which seems normal #Create CO2-only dataframs CO2 = env[['CO2_1', 'CO2_2']] #Make a plot CO2_fig = plt.plot(CO2) plt.ylabel('CO2 (ppm)') plt.xlabel('Sample number') plt.title('Two CO2 sensors, same time and place') #plt.savefig('CO2_fig.pdf') input_file = env #Upload environmental data file env = pd.read_table('', sep=',') #assigning columns names env.columns=[['test', 'date_time','temp C', 'RH %', 'CO2_1', 'CO2_2']] #change data time variable to actual values of time. 
env['date_time']= pd.to_datetime(env['date_time']) #test function #def test_cramer(): #assert a0 + ((a9)*400)/100 + a14*((400/1000000)**2) == 672.339644669, 'Equation failure, math-mess-up' #return() #Call the test function #test_cramer() #pitch calculator function from Cramer equation def cramer(data): '''Calculate pitch from CO2_1 concentration''' calc_freq = a0 + ((a9*data)/100) + a14*((data)**2) return(calc_freq) #Run the function for the input column (CO2 values) to get a new column of calculated_frequency env['calc_freq'] = cramer(env['CO2_1']) #Import the measured pitch values--the output of pitch_data.py script measured_freq = pd.read_table('../organ_pitch/Data/munged_pitch.csv', sep=',') #change data time variable to actual values of time. env['time']= pd.to_datetime(env['time']) #Function to make and save a plot
organ_pitch/Scripts/.ipynb_checkpoints/upload_env_data-checkpoint.ipynb
taliamo/Final_Project
mit
That's everything we need for a working function! Let's walk through it: def keyword: required before writing any function, to tell Python "hey! this is a function!" Function name: one word (can "fake" spaces with underscores), which is the name of the function and how we'll refer to it later Arguments: a comma-separated list of arguments the function takes to perform its task. If no arguments are needed (as above), then just open-paren-close-paren. Colon: the colon indicates the end of the function header and the start of the actual function's code. pass: since Python is sensitive to whitespace, we can't leave a function body blank; luckily, there's the pass keyword that does pretty much what it sounds like--no operation at all, just a placeholder. Admittedly, our function doesn't really do anything interesting. It takes no parameters, and the function body consists exclusively of a placeholder keyword that also does nothing. Still, it's a perfectly valid function! Other notes on functions You can define functions (as we did just before) almost anywhere in your code. As we'll see when we get to functional programming, you can literally define functions in the middle of a line of code. Still, good coding practices behooves you to generally group your function definitions together, e.g. at the top of your module. Invoking or activating a function is referred to as calling the function. When you call a function, you type its name, an open parenthesis, any arguments you're sending to the function, and a closing parenthesis. If there are no arguments, then calling the function is as simple as typing the function name and an open-close pair of parentheses. Functions can be part of modules. You've already seen some of these in action: the numpy.array() functionality is indeed a function. When a function is in a module, to call it you need to prepend the name of the module (and any submodules), add a dot "." between the module names, and then call the function as you normally would. Though not recommended, it's possible to import only select functions from a module, so you no longer have to specify the module name in front of the function name when calling the function. This uses the from keyword during import:
from numpy import array
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
Now the array() method can be called directly without prepending the package name numpy in front. USE THIS CAUTIOUSLY: if you accidentally name a variable array later in your code, you will get some very strange errors! Part 2: Function Arguments Arguments (or parameters), as stated before, are the function's input; the "$x$" to our "$f$", as it were. You can specify as many arguments as want, separating them by commas:
def one_arg(arg1): pass def two_args(arg1, arg2): pass def three_args(arg1, arg2, arg3): pass # And so on...
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
Like functions, you can name the arguments anything you want, though also like functions you'll probably want to give them more meaningful names besides arg1, arg2, and arg3. When these become just three functions among hundreds in a massive codebase written by dozens of different people, it's helpful when the code itself gives you hints as to what it does. When you call a function, you'll need to provide the same number of arguments in the function call as appear in the function header, otherwise Python will yell at you.
try: one_arg("some arg") except Exception as e: print("one_arg FAILED: {}".format(e)) else: print("one_arg SUCCEEDED") try: two_args("only1arg") except Exception as e: print("two_args FAILED: {}".format(e)) else: print("two_args SUCCEEDED")
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
To be fair, it's a pretty easy error to diagnose, but still something to keep in mind--especially as we move beyond basic "positional" arguments (as they are so called in the previous error message) into optional arguments. Default arguments "Positional" arguments--the only kind we've seen so far--are required. If the function header specifies a positional argument, then every single call to that functions needs to have that argument specified. There are cases, however, where it can be helpful to have optional, or default, arguments. In this case, when the function is called, the programmer can decide whether or not they want to override the default values. You can specify default arguments in the function header:
def func_with_default_arg(positional, default = 10): print("'{}' with default arg {}".format(positional, default)) func_with_default_arg("Input string") func_with_default_arg("Input string", default = 999)
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
If you look through the NumPy online documentation, you'll find most of its functions have entire books' worth of default arguments. The numpy.array function we've been using has quite a few; the only positional (required) argument for that function is some kind of list/array structure to wrap a NumPy array around. Everything else it tries to figure out on its own, unless the programmer explicitly specifies otherwise.
import numpy as np x = np.array([1, 2, 3]) y = np.array([1, 2, 3], dtype = float) # Specifying the data type of the array, using "dtype" print(x) print(y)
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
Notice the decimal points that follow the values in the second array! This is NumPy's way of showing that these numbers are floats, not integers! In this example, NumPy detected that our initial list contained integers, and we see in the first example that it left the integer type alone. But, in the second example, we override its default behavior in determining the data type of the elements of the resulting NumPy array. This is a very powerful mechanism for occasionally tweaking the behavior of functions without having to write entirely new ones. Let's do one more small example before moving on to return values. Let's build a method which prints out a list of video games in someone's Steam library.
def games_in_library(username, library): print("User '{}' owns: ".format(username)) for game in library: print(game) print() games_in_library('fps123', ['DOTA 2', 'Left 4 Dead', 'Doom', 'Counterstrike', 'Team Fortress 2']) games_in_library('rts456', ['Civilization V', 'Cities: Skylines', 'Sins of a Solar Empire']) games_in_library('smrt789', ['Binding of Isaac', 'Monaco'])
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
In this example, our function games_in_library has two positional arguments: username, which is the Steam username of the person, and library, which is a list of video game titles. The function simply prints out the username and the titles they own. Part 3: Return Values Just as functions [can] take input, they also [can] return output for the programmer to decide what to do with. Almost any function you will ever write will most likely have a return value of some kind. If not, your function may not be "well-behaved", aka sticking to the general guideline of doing one thing very well. There are certainly some cases where functions won't return anything--functions that just print things, functions that run forever (yep, they exist!), functions designed specifically to test other functions--but these are highly specialized cases we are not likely to encounter in this course. Keep this in mind as a "rule of thumb." To return a value from a function, just use the return keyword:
def identity_function(in_arg): return in_arg x = "this is the function input" return_value = identity_function(x) print(return_value)
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
This is pretty basic: the function returns back to the programmer as output whatever was passed into the function as input. Hence, "identity function." Anything you can pass in as function parameters, you can return as function output, including lists:
def explode_string(some_string): list_of_characters = [] for index in range(len(some_string)): list_of_characters.append(some_string[index]) return list_of_characters words = "Blahblahblah" output = explode_string(words) print(output)
lectures/L9.ipynb
eds-uga/csci1360e-su16
mit
PCA We start by performing PCA (principal component analysis), which finds patterns that capture most of the variance in the data. First load toy example data, and cache it to speed up repeated queries.
rawdata = tsc.loadExample('fish-series') data = rawdata.toTimeSeries().normalize() data.cache() data.dims;
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Run PCA with two components
from thunder import PCA model = PCA(k=2).fit(data)
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Fitting PCA adds two attributes to model: comps, which are the principal components, and scores, which are the data represented in principal component space. In this case, the input data were space-by-time, so the components are temporal basis functions, and the scores are spatial basis functions. Look at the results first by plotting the components, the temporal basis functions.
plt.plot(model.comps.T);
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
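If the comps/scores terminology is unfamiliar, the sketch below runs the same kind of decomposition on a small random space-by-time matrix using plain NumPy SVD. This is only an illustration of the linear algebra behind components and scores; it is not the thunder API and is not distributed.

import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(100, 20)             # 100 pixels (space) by 20 time points
Xc = X - X.mean(axis=0)           # center each time point across pixels

# SVD-based PCA with k = 2 components
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
comps = Vt[:k]                    # temporal basis functions, shape (k, 20)
scores = Xc @ Vt[:k].T            # per-pixel projections, shape (100, k)

print(comps.shape, scores.shape)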
The scores are spatial basis functions. We can pack them into a local array and look at them as images one by one.
imgs = model.scores.pack() imgs.shape image(imgs[0,:,:,0], clim=(-0.05,0.05)) image(imgs[1,:,:,0], clim=(-0.05,0.05))
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Clearly there is some spatial structure to each component, but looking at them one by one can be difficult. A useful trick is to look at two components at once via a color code that converts the scores into polar coordinates. The color (hue) shows the relative amount of the two components, and the brightness shows the total amplitude.
maps = Colorize(cmap='polar', scale=4).transform(imgs)
from numpy import amax
image(amax(maps,2))
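To see what the polar color code is doing, here is a rough NumPy/matplotlib sketch on synthetic score pairs (an illustration of the idea, not the Colorize implementation): the angle of the (score 1, score 2) pair sets the hue, and the length of the pair sets the brightness.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Synthetic pair of "score" images, 20 x 20 pixels each.
rng = np.random.RandomState(0)
s1, s2 = rng.randn(2, 20, 20)

hue = (np.arctan2(s2, s1) + np.pi) / (2 * np.pi)   # angle -> hue in [0, 1)
amp = np.hypot(s1, s2)                             # vector length
val = np.clip(amp / amp.max(), 0, 1)               # length -> brightness
sat = np.ones_like(hue)

rgb = hsv_to_rgb(np.dstack([hue, sat, val]))
plt.imshow(rgb)
plt.show()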
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
To get more intuition for these colors, we can get the scores from a random subset of pixels. This will return two numbers per pixel, the projection onto the first and second principal component, and we threshold based on the norm so we are sure to retrieve pixels with at least some structure. Then we make a scatter plot of the two quantities against one another, using the same color conversion as used to generate the map.
pts = model.scores.subset(500, thresh=0.01, stat='norm')

from numpy import newaxis, squeeze
clrs = Colorize(cmap='polar', scale=4).transform([pts[:,0][:,newaxis], pts[:,1][:,newaxis]]).squeeze()
plt.scatter(pts[:,0], pts[:,1], c=clrs, s=75, alpha=0.7);
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Recall that each of these points represents a single pixel. Another way to better understand the PCA space is to plot the time series corresponding to each of these pixels, reconstructed using the first two principal components.
from numpy import asarray
recon = asarray(map(lambda x: (x[0] * model.comps[0, :] + x[1] * model.comps[1, :]).tolist(), pts))
plt.gca().set_color_cycle(clrs)
plt.plot(recon.T);
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
NMF Non-negative matrix factorization is an alternative decomposition. It is meant to be applied to data that are non-negative, which is often approximately true of neural responses. Like PCA, it also returns a set of temporal and spatial basis functions, but unlike PCA, it tends to return basis functions that do not mix responses from different regions together. We can run NMF on the same data and look at the basis functions it recovers.
from thunder import NMF
model = NMF(k=3, maxIter=20).fit(data)
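For a feel of what NMF is optimizing, here is a minimal NumPy sketch of the classic multiplicative-update rules on a synthetic non-negative matrix (an illustration of the general algorithm, not Thunder's implementation):
import numpy as np

rng = np.random.RandomState(0)
X = np.abs(rng.randn(100, 50))        # non-negative data (pixels x time)
k = 3
W = np.abs(rng.randn(100, k))         # spatial factors (analogous to model.w)
H = np.abs(rng.randn(k, 50))          # temporal factors (analogous to model.h)

eps = 1e-9
for _ in range(200):
    # Multiplicative updates keep W and H non-negative at every step.
    H *= W.T.dot(X) / (W.T.dot(W).dot(H) + eps)
    W *= X.dot(H.T) / (W.dot(H).dot(H.T) + eps)

# Relative reconstruction error of the factorization.
print(np.linalg.norm(X - W.dot(H)) / np.linalg.norm(X))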
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
After fitting, model will have two attributes, h and w. For these data, h contains the temporal basis functions, and w contains the spatial basis functions. Let's look at both.
plt.plot(model.h.T);

imgs = model.w.pack()
image(imgs[0][:,:,0])
image(imgs[1][:,:,0])
image(imgs[2][:,:,0])
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
For NMF, a useful way to look at the basis functions is to encode each one as a separate color channel. We can do that using colorization with an rgb conversion, which simply maps the spatial basis functions directly to red, green, and blue values, and applies a global scaling factor which controls overall brightness.
maps = Colorize(cmap='rgb', scale=1.0).transform(imgs)
image(maps[:,:,0,:])
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
One problem with this way of looking at NMF components is that the scale of the different components can cause some to dominate others. We also might like more control over color assignments. The indexed colorization option lets you specify one color per channel, and automatically normalizes the amplitude of each one.
maps = Colorize(cmap='indexed', colors=['hotpink', 'cornflowerblue', 'mediumseagreen'], scale=1).transform(imgs)
image(maps[:,:,0,:])
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
With these plots, it can be useful to add in a background image (for example, the mean). In this case, we also show how to select and colorize just two of the three map components against a background.
ref = rawdata.seriesMean().pack()
maps = Colorize(cmap='indexed', colors=['red', 'blue'], scale=1).transform(imgs[[0,2]], background=ref, mixing=0.5)
image(maps[:,:,0,:])
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
ICA Independent component analysis is a final factorization approach. Unlike NMF, it does not require non-negative signals, and whereas PCA finds basis functions that maximize explained variance, ICA finds basis functions that maximize the non-Gaussianity of the recovered signals; in practice, they tend to be both more distinct and more spatially sparse.
from thunder import ICA
model = ICA(k=10, c=3).fit(data)

sns.set_style('darkgrid')
plt.plot(model.a);
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Some signals will be positive and others negative. This is expected because the sign is arbitrary in ICA. It is useful to look at the absolute value when making maps.
imgs = model.sigs.pack()
maps = Colorize(cmap='indexed', colors=['red', 'green', 'blue'], scale=3).transform(abs(imgs))
image(maps[:,:,0,:])
worker/notebooks/thunder/tutorials/factorization.ipynb
CodeNeuro/notebooks
mit
Overview Use linked DMA channels to perform a "scan" across multiple ADC input channels. After each scan, use a DMA scatter chain to write the converted ADC values to a separate output array for each ADC channel. The length of the output array to allocate for each ADC channel is determined by the sample_count in the example below. See diagram below. Channel configuration DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A register. Each SC1A configuration selects an analog input channel. Channel $i$ is initially triggered by software trigger (i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC channel configuration. Loading of subsequent ADC channel configurations is triggered through minor loop linking of DMA channel $ii$ to DMA channel $i$. DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and copies the output result of the ADC to consecutive locations in the result array. Channel $ii$ has its minor loop link set to channel $i$, which triggers the next channel SC1A configuration to be loaded immediately after the current ADC result has been copied to the result array. After $n$ triggers of channel $i$, the result array contains $n$ ADC results, one result per channel in the SC1A table. N.B., Only the trigger for the first ADC channel is an explicit software trigger. All remaining triggers occur through minor-loop DMA channel linking from channel $ii$ to channel $i$. After each scan through all ADC channels is complete, the ADC readings are scattered using the selected "scatter" DMA channel through a major-loop link between DMA channel $ii$ and the "scatter" channel. <img src="multi-channel_ADC_multi-samples_using_DMA.jpg" style="max-height: 600px" /> Device Connect to device
import arduino_helpers.hardware.teensy as teensy
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
import teensy_minimal_rpc.SIM as SIM
import teensy_minimal_rpc.PIT as PIT

# Disconnect from existing proxy (if available)
try:
    del proxy
except NameError:
    pass

proxy = SerialProxy()
proxy.pin_mode(teensy.LED_BUILTIN, 1)

from IPython.display import display

proxy.update_sim_SCGC6(SIM.R_SCGC6(PDB=True))
sim_scgc6 = SIM.R_SCGC6.FromString(proxy.read_sim_SCGC6().tostring())
display(resolve_field_values(sim_scgc6)[['full_name', 'value']].T)

# proxy.update_pit_registers(PIT.Registers(MCR=PIT.R_MCR(MDIS=False)))
# pit_registers = PIT.Registers.FromString(proxy.read_pit_registers().tostring())
# display(resolve_field_values(pit_registers)[['full_name', 'value']].T)

import numpy as np

# CORE_PIN13_PORTSET = CORE_PIN13_BITMASK;
# CORE_PIN13_PORTCLEAR = CORE_PIN13_BITMASK;
#define CORE_PIN13_PORTCLEAR GPIOC_PCOR
#define CORE_PIN13_PORTSET GPIOC_PSOR
#define GPIOC_PCOR (*(volatile uint32_t *)0x400FF088) // Port Clear Output Register
#define GPIOC_PSOR (*(volatile uint32_t *)0x400FF084) // Port Set Output Register
CORE_PIN13_BIT = 5
GPIOC_PCOR = 0x400FF088  # Port Clear Output Register
GPIOC_PSOR = 0x400FF084  # Port Set Output Register

proxy.mem_cpy_host_to_device(GPIOC_PSOR, np.uint32(1 << CORE_PIN13_BIT).tostring())

proxy.update_dma_mux_chcfg(0, DMA.MUX_CHCFG(ENBL=1, TRIG=0, SOURCE=48))
proxy.update_dma_registers(DMA.Registers(SERQ=0))
proxy.update_dma_registers(DMA.Registers(CERQ=0))
resolve_field_values(DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(0).tostring()))[['full_name', 'value']]

print proxy.update_pit_timer_config(0, PIT.TimerConfig(LDVAL=int(48e6)))
print proxy.update_pit_timer_config(0, PIT.TimerConfig(TCTRL=PIT.R_TCTRL(TEN=True)))
pit0 = PIT.TimerConfig.FromString(proxy.read_pit_timer_config(0).tostring())
display(resolve_field_values(pit0)[['full_name', 'value']].T)

PIT_LDVAL0 = 0x40037100  # Timer Load Value Register
PIT_CVAL0 = 0x40037104  # Current Timer Value Register
PIT_TCTRL0 = 0x40037108  # Timer Control Register

proxy.mem_cpy_host_to_device(PIT_TCTRL0, np.uint32(1).tostring())
proxy.mem_cpy_device_to_host(PIT_TCTRL0, 4).view('uint32')[0]
proxy.digital_write(teensy.LED_BUILTIN, 0)
proxy.update_dma_registers(DMA.Registers(SSRT=0))

proxy.free_all()

toggle_pin_addr = proxy.mem_alloc(4)
proxy.mem_cpy_host_to_device(toggle_pin_addr, np.uint32(1 << CORE_PIN13_BIT).tostring())

tcds_addr = proxy.mem_aligned_alloc(32, 2 * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(2)]

# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
                   BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
                   ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
                                       DSIZE=DMA.R_TCD_ATTR._32_BIT),
                   NBYTES_MLNO=4,
                   SADDR=int(toggle_pin_addr),
                   SOFF=0,
                   SLAST=0,
                   DADDR=int(GPIOC_PSOR),
                   DOFF=0,
#                    DLASTSGA=0,
#                    CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=False))
# proxy.update_dma_TCD(0, tcd0_msg)
                   DLASTSGA=int(tcd_addrs[1]),
                   CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))

# Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)

# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(2):
    tcd_i = tcd0.copy()
    tcd_i['DADDR'] = [GPIOC_PSOR, GPIOC_PCOR][i]
    tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
    tcd_i['CSR'] |= (1 << 4)
    proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())

# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcds_addr, tcd0.tostring())

proxy.update_dma_registers(DMA.Registers(SSRT=0))

dma_channel_scatter = 0
dma_channel_i = 1
dma_channel_ii = 2
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Configure ADC sample rate, etc.
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
    teensy.ADC_0,
    ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Pseudo-code to set a DMA channel $i$ to be triggered by ADC0 conversion complete.

DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0  // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0                     // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1                     // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1   // DMA request input signals and this enable request flag
                 // must be asserted before a channel’s hardware service
                 // request is accepted (21.3.3/394).
DMA_SERQ = i     // Can use memory mapped convenience register to set instead.

Set DMA mux source for channel $ii$ to ADC0
DMAMUX_SOURCE_ADC0 = 40  # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41  # from `kinetis.h`

# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0  // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0                     // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1                     // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(dma_channel_ii,
                           DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
                                         TRIG=False,
                                         ENBL=True))

# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=dma_channel_ii))
proxy.enableDMA(teensy.ADC_0)

proxy.DMA_registers().loc['']

dmamux = DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(dma_channel_ii).tostring())
resolve_field_values(dmamux)[['full_name', 'value']]

adc0 = ADC.Registers.FromString(proxy.read_adc_registers(teensy.ADC_0).tostring())
resolve_field_values(adc0)[['full_name', 'value']].loc[['CFG2', 'SC1A', 'SC3']]
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Analog channel list List of channels to sample. Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
import re

import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc

# The number of samples to record for each ADC channel.
sample_count = 10

teensy_analog_channels = ['A0', 'A1', 'A0', 'A3', 'A0']
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)])
                            for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[teensy_analog_channels].tolist(), dtype='uint32')
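As a quick sanity check (an optional addition, not in the original notebook), the requested Teensy channels and their corresponding SC1A codes can be displayed side by side using the variables defined above:
# Show which SC1A code will be written to the ADC for each requested Teensy channel.
pd.Series(channel_sc1as, index=teensy_analog_channels, name='SC1A code')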
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Allocate and initialize device arrays SC1A register configuration for each ADC channel in the channel_sc1as list. Copy channel_sc1as list to device. ADC result array Initialize to zero.
proxy.free_all()

N = np.dtype('uint16').itemsize * channel_sc1as.size

# Allocate source array
adc_result_addr = proxy.mem_alloc(N)

# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)

# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))

# Allocate source array
samples_addr = proxy.mem_alloc(sample_count * N)

tcds_addr = proxy.mem_aligned_alloc(32, sample_count * 32)
hw_tcds_addr = 0x40009000
tcd_addrs = [tcds_addr + 32 * i for i in xrange(sample_count)]
hw_tcd_addrs = [hw_tcds_addr + 32 * i for i in xrange(sample_count)]

# Fill result array with zeros
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)

# Create Transfer Control Descriptor configuration for first chunk, encoded
# as a Protocol Buffer message.
tcd0_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
                   BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ITER=1),
                   ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
                                       DSIZE=DMA.R_TCD_ATTR._16_BIT),
                   NBYTES_MLNO=channel_sc1as.size * 2,
                   SADDR=int(adc_result_addr),
                   SOFF=2,
                   SLAST=-channel_sc1as.size * 2,
                   DADDR=int(samples_addr),
                   DOFF=2 * sample_count,
                   DLASTSGA=int(tcd_addrs[1]),
                   CSR=DMA.R_TCD_CSR(START=0, DONE=False, ESG=True))

# Convert Protocol Buffer encoded TCD to bytes structure.
tcd0 = proxy.tcd_msg_to_struct(tcd0_msg)

# Create binary TCD struct for each TCD protobuf message and copy to device
# memory.
for i in xrange(sample_count):
    tcd_i = tcd0.copy()
    tcd_i['SADDR'] = adc_result_addr
    tcd_i['DADDR'] = samples_addr + 2 * i
    tcd_i['DLASTSGA'] = tcd_addrs[(i + 1) % len(tcd_addrs)]
    tcd_i['CSR'] |= (1 << 4)
    proxy.mem_cpy_host_to_device(tcd_addrs[i], tcd_i.tostring())

# Load initial TCD in scatter chain to DMA channel chosen to handle scattering.
proxy.mem_cpy_host_to_device(hw_tcd_addrs[dma_channel_scatter], tcd0.tostring())

print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) * channel_sc1as.dtype.itemsize).view('uint32')
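Since this notebook is marked as broken, it can help to read back what was actually written to device memory before triggering anything. The following optional check (not in the original notebook) pulls the first scatter-chain TCD back from the device for inspection; it only uses mem_cpy_device_to_host, which is already used above.
# Read back the first scatter-chain TCD (32 bytes) from device memory for inspection.
tcd0_readback = proxy.mem_cpy_device_to_host(tcd_addrs[0], 32)
print 'First TCD bytes on device:', tcd0_readback.view('uint8')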
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Configure DMA channel $i$
ADC0_SC1A = 0x4003B000  # ADC status and control registers 1

sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
                       BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
                       ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
                                           DSIZE=DMA.R_TCD_ATTR._32_BIT),
                       NBYTES_MLNO=4,
                       SADDR=int(adc_sda1s_addr),
                       SOFF=4,
                       SLAST=-channel_sc1as.size * 4,
                       DADDR=int(ADC0_SC1A),
                       DOFF=0,
                       DLASTSGA=0,
                       CSR=DMA.R_TCD_CSR(START=0, DONE=False))

proxy.update_dma_TCD(dma_channel_i, sda1_tcd_msg)
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Configure DMA channel $ii$
ADC0_RA = 0x4003B010  # ADC data result register
ADC0_RB = 0x4003B014  # ADC data result register

tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
                  BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
                  ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
                                      DSIZE=DMA.R_TCD_ATTR._16_BIT),
                  NBYTES_MLNO=2,
                  SADDR=ADC0_RA,
                  SOFF=0,
                  SLAST=0,
                  DADDR=int(adc_result_addr),
                  DOFF=2,
                  DLASTSGA=-channel_sc1as.size * 2,
                  CSR=DMA.R_TCD_CSR(START=0, DONE=False,
                                    MAJORELINK=True,
                                    MAJORLINKCH=dma_channel_scatter))

proxy.update_dma_TCD(dma_channel_ii, tcd_msg)
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
Trigger sample scan across selected ADC channels
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
proxy.mem_fill_uint8(samples_addr, 0, sample_count * N)

# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
print 'ADC results:'
for i in xrange(sample_count):
    proxy.update_dma_registers(DMA.Registers(SSRT=dma_channel_i))
    # Display converted ADC values (one value per channel in `channel_sc1as` list).
    print ' Iteration %s:' % i, proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print ''
print 'Samples by channel:'

# Trigger once per chunk
# for i in xrange(sample_count):
#     proxy.update_dma_registers(DMA.Registers(SSRT=0))
device_dst_data = proxy.mem_cpy_device_to_host(samples_addr, sample_count * N)
pd.DataFrame(device_dst_data.view('uint16').reshape(-1, sample_count).T, columns=teensy_analog_channels)
teensy_minimal_rpc/notebooks/dma-examples/Example - [BROKEN] Periodic multi-channel ADC multiple samples using DMA and PIT.ipynb
wheeler-microfluidics/teensy-minimal-rpc
gpl-3.0
The import here was very simple, because this notebook is in the same folder as the hello_quantum.py file. If this is not the case, you'll have to change the path. See the Hello_Qiskit notebook for an example of this. Once the import has been done, you can set up and display the visualization.
grid = hello_quantum.pauli_grid() grid.update_grid()
community/games/game_engines/Making_your_own_hello_quantum.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
This has attributes and methods which create and run quantum circuits with Qiskit.
for gate in [['x','1'], ['h','0'], ['z','0'], ['h','1'], ['z','1']]:
    command = 'grid.qc.' + gate[0] + '(grid.qr[' + gate[1] + '])'
    eval(command)
grid.update_grid()
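The string-plus-eval approach above works, but the same gates can be applied without eval by looking the gate method up with getattr; this is an equivalent sketch, assuming grid.qc and grid.qr behave like a standard Qiskit circuit and register:
for gate, qubit in [['x','1'], ['h','0'], ['z','0'], ['h','1'], ['z','1']]:
    # getattr fetches e.g. grid.qc.x, which we then call on the chosen qubit.
    getattr(grid.qc, gate)(grid.qr[int(qubit)])
grid.update_grid()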
community/games/game_engines/Making_your_own_hello_quantum.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
There is also an alternative visualization, which can be used to better represent non-Clifford gates.
grid = hello_quantum.pauli_grid(mode='line')
grid.update_grid()
community/games/game_engines/Making_your_own_hello_quantum.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
The run_game function can also be used to implement custom 'Hello Quantum' games within a notebook. This is called with hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names) where the arguments set up the puzzle by specifying the following information. initialize * List of gates applied to the initial 00 state to get the starting state of the puzzle. * Supported single qubit gates (applied to qubit '0' or '1') are 'x', 'y', 'z', 'h', 'ry(pi/4)'. * Supported two qubit gates are 'cz' and 'cx'. For these, specify only the target qubit. * Example: initialize = [['x', '0'],['cx', '1']] success_condition * Values for Pauli observables that must be obtained for the puzzle to declare success. * Example: success_condition = {'IZ': 1.0} allowed_gates * For each qubit, specify which operations are allowed in this puzzle. * For operations that don't need a qubit to be specified ('cz' and 'unbloch'), assign the operation to 'both' instead of qubit '0' or '1'. * Gates are expressed as dict keys, each with an int as its value. * If the int is non-zero, it specifies the exact number of times the gate must be used for the puzzle to be successfully solved. * If it is zero, the player can use the gate any number of times. * Example: allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 1}} vi * Some visualization information as a three element list. These specify: * Which qubits are hidden (empty list if both are shown). * Whether both circles are shown for each qubit (use True for qubit puzzles and False for bit puzzles). * Whether the correlation circles (the four in the middle) are shown. * Example: vi = [[], True, True] qubit_names * The two qubits are always called '0' and '1' internally. But for the player, we can display different names. * Example: qubit_names = {'0':'qubit 0', '1':'qubit 1'} The puzzle defined by the examples given here can be run in the following cell. See also the many examples in the Hello_Qiskit notebook.
initialize = [['x', '0'], ['cx', '1']]
success_condition = {'IZ': 1.0}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 1}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
community/games/game_engines/Making_your_own_hello_quantum.ipynb
antoniomezzacapo/qiskit-tutorial
apache-2.0
Creating a Series You can convert a list, NumPy array, or dictionary to a Series:
labels = ['a', 'b', 'c']
my_list = [10, 20, 30]
arr = np.array([10, 20, 30])
d = {'a': 10, 'b': 20, 'c': 30}
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
Using Lists
pd.Series(data = my_list)

pd.Series(data = my_list, index = labels)

pd.Series(my_list, labels)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
NumPy Arrays
pd.Series(arr)

pd.Series(arr, labels)
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
Dictionary
pd.Series(d)
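When a dictionary is combined with an explicit index (a small extra illustration, not from the original notebook), pandas keeps only the requested labels and fills any missing ones with NaN:
# 'q' is not a key of d, so its value becomes NaN.
pd.Series(d, index = ['a', 'b', 'q'])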
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
Data in a Series A pandas Series can hold a variety of object types:
pd.Series(data = labels)

# Even functions (although unlikely that you will use this)
pd.Series([sum, print, len])
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
Using an Index The key to using a Series is understanding its index. Pandas makes use of these index names or numbers by allowing for fast lookups of information (it works like a hash table or dictionary). Let's see some examples of how to grab information from a Series. Let us create two Series, ser1 and ser2:
ser1 = pd.Series([1, 2, 3, 4], index = ['USA', 'Germany', 'USSR', 'Japan'])
ser1

ser2 = pd.Series([1, 2, 5, 4], index = ['USA', 'Germany', 'Italy', 'Japan'])
ser2

ser1['USA']
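Operations between Series align on the index labels; labels that appear in only one of the two Series come back as NaN. A small extra illustration using the two Series defined above:
# 'USSR' and 'Italy' each appear in only one Series, so their sums are NaN.
ser1 + ser2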
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0