Dataset columns: markdown (string, 0-37k chars), code (string, 1-33.3k chars), path (string, 8-215 chars), repo_name (string, 6-77 chars), license (15 classes).
We generate two random patterns and test whether the field (h) depends on the initial state. We see very similar normal behavior for both of them.
x = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) y = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.hist(x) ax.hist(y) print(np.std(x))
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit
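The cell above relies on objects (prng, n_dim, nn) defined earlier in the notebook and not shown in this excerpt. A self-contained stand-in, for orientation only — the Hebbian weight matrix below is an assumption playing the role of the repository's nn.w — could look like this:

```python
import numpy as np

# Stand-ins for the notebook's earlier setup (values are assumptions)
prng = np.random.RandomState(0)
n_dim = 400

# Hebbian outer-product weights over n_dim random +/-1 patterns,
# playing the role of nn.w built by the repository's Hopfield class
patterns = np.sign(prng.normal(size=(n_dim, n_dim)))  # one pattern per row
w = patterns.T @ patterns / n_dim

# The field h for one random initial state
h = np.dot(w, np.sign(prng.normal(size=n_dim)))
print(h.mean(), h.std())
```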
We now try this with 10 patterns to see how the distributions behave.
fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) for i in range(10): x = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) ax.hist(x, alpha=0.5)
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit
We see that the normal distribution is maintained. We then calculate the field h (the result of the np.dot(w, s) calculation) for a number of different initial random states and concatenate the results to see what the whole distribution looks like.
n_dim = 400 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_dim) nn.train(list_of_patterns, normalize=normalize) x = np.empty(0) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) n, bins, patches = ax.hist(x, bins=30) print(np.var(x)) print(nn.sigma)
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit
Dependence on network size Now we test how the histogram looks for different sizes.
n_dimensions = [200, 800, 2000, 5000] fig = plt.figure(figsize=(16, 12)) gs = gridspec.GridSpec(2, 2) for index, n_dim in enumerate(n_dimensions): nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns, normalize=normalize) x = np.empty(0) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) ax = fig.add_subplot(gs[index//2, index%2]) ax.set_xlim([-1, 1]) ax.set_title('n_dim = ' + str(n_dim) + ' std = ' + str(np.std(x))) weights = np.ones_like(x)/float(len(x)) n, bins, patches = ax.hist(x, bins=30, weights=weights, normed=False)
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit
Now we calculate the variance of the h vector as a function of the dimension
n_dimensions = np.logspace(1, 4, num=20) variances = [] standard_deviations = [] for index, n_dim in enumerate(n_dimensions): print('number', index, 'of', n_dimensions.size, ' n_dim =', n_dim) n_dim = int(n_dim) nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns, normalize=normalize) x = np.empty(0) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) variances.append(np.var(x)) standard_deviations.append(np.std(x)) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.semilogx(n_dimensions, variances,'*-', markersize=16, label='var') ax.semilogx(n_dimensions, standard_deviations, '*-', markersize=16, label='std') ax.axhline(y=nn.sigma, color='k', label='nn.sigma') ax.legend()
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit
TFP, backed by Jax Jax-backed TFP is a work in progress, but many distributions and bijectors are currently working! How do you use the alternative backend? Importing
# Importing the TFP with Jax backend !pip3 install -q 'tfp-nightly[jax]' tf-nightly-cpu # We (currently) still require TF, but TF's smaller CPU build will work. import tensorflow_probability as tfp tfp = tfp.experimental.substrates.jax tf = tfp.tf2jax # Standard TFP Imports tfd = tfp.distributions tfb = tfp.bijectors tfpk = tfp.math.psd_kernels # Jax imports import jax import jax.numpy as np from jax import random # Other imports import matplotlib.pyplot as plt import seaborn as sns sns.set(style='white')
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
TF Interface to Jax We've reimplemented the TF API, but with Jax functions instead of TF functions and DeviceArrays instead of TF Tensors.
tf.ones(5) tf.matmul(tf.ones([1, 2]), tf.ones([2, 4]))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Some differences: Shapes are tuples, not TensorShapes
tf.ones(5).shape
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
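A quick check of the claim above — a minimal sketch, assuming the imports from the first cell:

```python
shape = tf.ones(5).shape
print(shape)                     # (5,)
print(isinstance(shape, tuple))  # True: a plain tuple, not a TensorShape
```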
Randomness is stateless, as in Jax, and requires Jax PRNGKeys to operate.
tf.random.stateless_uniform([1, 2], seed=random.PRNGKey(0))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
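Because the sampler is stateless, repeated draws with the same key return the same values; split the key to get independent draws. A minimal sketch using the random module imported above:

```python
# Split one PRNGKey into independent subkeys so repeated draws differ.
key = random.PRNGKey(0)
k1, k2 = random.split(key)
print(tf.random.stateless_uniform([1, 2], seed=k1))
print(tf.random.stateless_uniform([1, 2], seed=k2))
```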
Placeholders don't exist.
tf.compat.v1.placeholder_with_default(tf.ones(5), (5,))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Math libraries TFP's math libraries (tfp.math) are now largely working. Bijectors Most bijectors have tests passing! Unary bijectors
bij = tfb.Shift(1.)(tfb.Scale(3.)) print(bij.forward(np.ones(5))) print(bij.inverse(np.ones(5)))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Meta bijectors
b = tfb.FillScaleTriL(diag_bijector=tfb.Exp(), diag_shift=None) print(b.forward(x=[0., 0., 0.])) print(b.inverse(y=[[1., 0], [.5, 2]])) b = tfb.Chain([tfb.Exp(), tfb.Softplus()]) # or: # b = tfb.Exp()(tfb.Softplus()) print(b.forward(-np.ones(5)))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
MCMC coming soon We are migrating TFP's random samplers to be internally stateless, and will then update MCMC to support JAX. Some don't work yet For example: FFJORD, MAF (WIP), and Real NVP. Distributions When sampling, we need to pass in a seed.
dist = tfd.Normal(loc=0., scale=1.) print(dist.sample(seed=random.PRNGKey(0)))
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Jax distributions obey the same batching semantics as their TensorFlow counterparts.
dist = tfd.Normal(np.zeros(5), np.ones(5)) s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0)) print(dist.log_prob(s).shape) dist = tfd.Independent(tfd.Normal(np.zeros(5), np.ones(5)), 1) s = dist.sample(sample_shape=(10, 2), seed=random.PRNGKey(0)) print(dist.log_prob(s).shape)
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
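The log_prob shapes above follow TFP's usual sample/batch/event convention. A small sketch inspecting the shapes, assuming the standard batch_shape and event_shape properties are available under the JAX substrate:

```python
dist = tfd.Normal(np.zeros(5), np.ones(5))
print(dist.batch_shape, dist.event_shape)          # batch of 5 scalar normals -> log_prob shape (10, 2, 5)

dist_ind = tfd.Independent(tfd.Normal(np.zeros(5), np.ones(5)), 1)
print(dist_ind.batch_shape, dist_ind.event_shape)  # one 5-dimensional event -> log_prob shape (10, 2)
```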
Most meta distributions are working!
dist = tfd.TransformedDistribution( tfd.MultivariateNormalDiag(tf.zeros(5), tf.ones(5)), tfb.Exp()) # or: # dist = tfb.Exp()(tfd.MultivariateNormalDiag(tf.zeros(5), tf.ones(5))) s = dist.sample(sample_shape=2, seed=random.PRNGKey(0)) print(s) print(dist.log_prob(s).shape)
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Gaussian processes and PSD kernels also work.
k1, k2, k3 = random.split(random.PRNGKey(0), 3) observation_noise_variance = 0.01 f = lambda x: np.sin(10*x[..., 0]) * np.exp(-x[..., 0]**2) observation_index_points = tf.random.stateless_uniform( [50], minval=-1.,maxval= 1., seed=k1)[..., np.newaxis] observations = f(observation_index_points) + tfd.Normal(loc=0., scale=np.sqrt(observation_noise_variance)).sample(seed=k2) index_points = np.linspace(-1., 1., 100)[..., np.newaxis] kernel = tfpk.ExponentiatedQuadratic(length_scale=0.1) gprm = tfd.GaussianProcessRegressionModel( kernel=kernel, index_points=index_points, observation_index_points=observation_index_points, observations=observations, observation_noise_variance=observation_noise_variance) samples = gprm.sample(10, seed=k3) for i in range(10): plt.plot(index_points, samples[i]) plt.show()
discussion/examples/TFP_and_Jax.ipynb
tensorflow/probability
apache-2.0
Get information about the datatype 'Bodemlocatie' Other datatypes are also possible: * Bodemsite: BodemsiteSearch * Bodemmonster: BodemmonsterSearch * Bodemobservatie: BodemobservatieSearch
from pydov.search.bodemlocatie import BodemlocatieSearch bodemlocatie = BodemlocatieSearch()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
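The other search classes listed above are instantiated the same way; the imports below match those used later in this notebook:

```python
from pydov.search.bodemsite import BodemsiteSearch
from pydov.search.bodemmonster import BodemmonsterSearch
from pydov.search.bodemobservatie import BodemobservatieSearch

bodemsite = BodemsiteSearch()
bodemmonster = BodemmonsterSearch()
bodemobservatie = BodemobservatieSearch()
```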
A description is provided for the 'Bodemlocatie' datatype:
bodemlocatie.get_description()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
The different fields that are available for objects of the 'Bodemlocatie' datatype can be requested with the get_fields() method:
fields = bodemlocatie.get_fields() # print available fields for f in fields.values(): print(f['name'])
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
You can get more information of a field by requesting it from the fields dictionary: * name: name of the field * definition: definition of this field * cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe. * notnull: whether the field is mandatory or not * type: datatype of the values of this field
fields['type']
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
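As a small illustration of the keys described above — a sketch assuming the field entries behave as plain dictionaries:

```python
# Print the metadata of a single field
field = fields['type']
for key in ('name', 'definition', 'cost', 'notnull', 'type'):
    print(key, '->', field.get(key))
```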
Optionally, if the values of the field have a specific domain the possible values are listed as values:
fields['type']['values']
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Example use cases Get bodemsites in a bounding box Get data for all the bodemsites that are geographically located completely within the bounds of the specified box. The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y. The same methods can be used for other bodem objects.
from pydov.search.bodemsite import BodemsiteSearch bodemsite = BodemsiteSearch() from pydov.util.location import Within, Box df = bodemsite.search(location=Within(Box(148000, 160800, 160000, 169500))) df.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
The dataframe contains a list of bodemsites. The available data are flattened to represent unique attributes per row of the dataframe. Using the pkey_bodemsite field one can request the details of this bodemsite in a web browser:
for pkey_bodemsite in set(df.pkey_bodemsite): print(pkey_bodemsite)
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Get bodemlocaties with specific properties Next to querying bodem objects based on their geographic location within a bounding box, we can also search for bodem objects matching a specific set of properties. The same methods can be used for all bodem objects. For this we can build a query using a combination of the 'Bodemlocatie' fields and operators provided by the WFS protocol. A list of possible operators can be found below:
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
In this example we build a query using the PropertyIsEqualTo operator to find all bodemlocaties with bodemstreek 'Zandstreek'. We use max_features=10 to limit the results to 10.
from owslib.fes import PropertyIsEqualTo query = PropertyIsEqualTo(propertyname='bodemstreek', literal='Zandstreek') df = bodemlocatie.search(query=query, max_features=10) df.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Once again we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:
for pkey_bodemlocatie in set(df.pkey_bodemlocatie): print(pkey_bodemlocatie)
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Get all direct and indirect bodemobservaties in bodemlocatie Get all bodemobservaties in a specific bodemlocatie. Direct means bodemobservaties directly linked with a bodemlocatie. Indirect means bodemobservaties linked with child-objects of the bodemlocatie, like bodemmonsters.
from pydov.search.bodemobservatie import BodemobservatieSearch from pydov.search.bodemlocatie import BodemlocatieSearch bodemobservatie = BodemobservatieSearch() bodemlocatie = BodemlocatieSearch() from owslib.fes import PropertyIsEqualTo from pydov.util.query import Join bodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', literal='VMM_INF_52'), return_fields=('pkey_bodemlocatie',)) bodemobservaties = bodemobservatie.search(query=Join(bodemlocaties, 'pkey_bodemlocatie')) bodemobservaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Get all bodemobservaties in a bodemmonster Get all bodemobservaties linked with a bodemmonster
from pydov.search.bodemmonster import BodemmonsterSearch bodemmonster = BodemmonsterSearch() bodemmonsters = bodemmonster.search(query=PropertyIsEqualTo(propertyname = 'identificatie', literal='A0057359'), return_fields=('pkey_bodemmonster',)) bodemobservaties = bodemobservatie.search(query=Join(bodemmonsters, on = 'pkey_parent', using='pkey_bodemmonster')) bodemobservaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Find all soil locations with a given soil classification Get all soil locations with a given soil classification:
from owslib.fes import PropertyIsEqualTo from pydov.util.query import Join from pydov.search.bodemclassificatie import BodemclassificatieSearch from pydov.search.bodemlocatie import BodemlocatieSearch bodemclassificatie = BodemclassificatieSearch() bl_Scbz = bodemclassificatie.search(query=PropertyIsEqualTo('bodemtype', 'Scbz'), return_fields=['pkey_bodemlocatie']) bodemlocatie = BodemlocatieSearch() bl = bodemlocatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie')) bl.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
We can also get their observations:
from pydov.search.bodemobservatie import BodemobservatieSearch bodemobservatie = BodemobservatieSearch() obs = bodemobservatie.search(query=Join(bl_Scbz, 'pkey_bodemlocatie'), max_features=10) obs.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Get all depth intervals and observations from a soil location
from pydov.search.bodemlocatie import BodemlocatieSearch from pydov.search.bodemdiepteinterval import BodemdiepteintervalSearch from pydov.util.query import Join from owslib.fes import PropertyIsEqualTo bodemlocatie = BodemlocatieSearch() bodemdiepteinterval = BodemdiepteintervalSearch() bodemlocaties = bodemlocatie.search(query=PropertyIsEqualTo(propertyname='naam', literal='VMM_INF_52'), return_fields=('pkey_bodemlocatie',)) bodemdiepteintervallen = bodemdiepteinterval.search( query=Join(bodemlocaties, on='pkey_bodemlocatie')) bodemdiepteintervallen
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
And get their observations:
from pydov.search.bodemobservatie import BodemobservatieSearch bodemobservatie = BodemobservatieSearch() bodemobservaties = bodemobservatie.search(query=Join( bodemdiepteintervallen, on='pkey_parent', using='pkey_diepteinterval')) bodemobservaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Find all bodemlocaties where observations exist for organic carbon percentage in East-Flanders between 0 and 30 cm deep Get boundaries of East-Flanders by using a WFS
from owslib.etree import etree from owslib.wfs import WebFeatureService from pydov.util.location import ( GmlFilter, Within, ) provinciegrenzen = WebFeatureService( 'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs', version='1.1.0') provincie_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Oost-Vlaanderen') provincie_poly = provinciegrenzen.getfeature( typename='VRBG:Refprv', filter=etree.tostring(provincie_filter.toXML()).decode("utf8")).read()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Get bodemobservaties in East-Flanders with the requested properties
from owslib.fes import PropertyIsEqualTo from owslib.fes import And from pydov.search.bodemobservatie import BodemobservatieSearch bodemobservatie = BodemobservatieSearch() # Select only layers with the boundaries 10-30 bodemobservaties = bodemobservatie.search( location=GmlFilter(provincie_poly, Within), query=And([ PropertyIsEqualTo(propertyname="parameter", literal="Organische C - percentage"), PropertyIsEqualTo(propertyname="diepte_tot_cm", literal = '30'), PropertyIsEqualTo(propertyname="diepte_van_cm", literal = '0') ])) bodemobservaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Now we have all observations with the requested properties. Next we need to link them with the bodemlocatie
from pydov.search.bodemlocatie import BodemlocatieSearch from pydov.util.query import Join import pandas as pd # Find bodemlocatie information for all observations bodemlocatie = BodemlocatieSearch() bodemlocaties = bodemlocatie.search(query=Join(bodemobservaties, on = 'pkey_bodemlocatie', using='pkey_bodemlocatie')) # remove x, y, mv_mtaw from observatie dataframe to prevent duplicates while merging bodemobservaties = bodemobservaties.drop(['x', 'y', 'mv_mtaw'], axis=1) # Merge the bodemlocatie information together with the observation information merged = pd.merge(bodemobservaties, bodemlocaties, on="pkey_bodemlocatie", how='left') merged.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
To export the results to CSV, you can use for example `merged.to_csv("test.csv")`. We can also plot the results on a map. This can take some time!
import folium from folium.plugins import MarkerCluster from pyproj import Transformer # convert the coordinates to lat/lon for folium def convert_latlon(x1, y1): transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True) x2,y2 = transformer.transform(x1, y1) return x2, y2 #convert coordinates to wgs84 merged['lon'], merged['lat'] = zip(*map(convert_latlon, merged['x'], merged['y'])) # Get only location and value loclist = merged[['lat', 'lon']].values.tolist() # initialize the Folium map on the centre of the selected locations, play with the zoom until ok fmap = folium.Map(location=[merged['lat'].mean(), merged['lon'].mean()], zoom_start=10) marker_cluster = MarkerCluster().add_to(fmap) for loc in range(0, len(loclist)): popup = 'Bodemlocatie: ' + merged['pkey_bodemlocatie'][loc] popup = popup + '<br> Bodemobservatie: ' + merged['pkey_bodemobservatie'][loc] popup = popup + '<br> Value: ' + merged['waarde'][loc] + "%" folium.Marker(loclist[loc], popup=popup).add_to(marker_cluster) fmap
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Calculate carbon stock in Ghent in the layer 0 - 23 cm At the moment, there are no bulkdensities available. As soon as there are observations with bulkdensities, this example can be used to calculate a carbon stock in a layer. Get boundaries of Ghent using WFS
from owslib.etree import etree from owslib.fes import PropertyIsEqualTo from owslib.wfs import WebFeatureService from pydov.util.location import ( GmlFilter, Within, ) stadsgrenzen = WebFeatureService( 'https://geoservices.informatievlaanderen.be/overdrachtdiensten/VRBG/wfs', version='1.1.0') gent_filter = PropertyIsEqualTo(propertyname='NAAM', literal='Gent') gent_poly = stadsgrenzen.getfeature( typename='VRBG:Refgem', filter=etree.tostring(gent_filter.toXML()).decode("utf8")).read()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
First, get all observations in Ghent for organic C percentage in the requested layer
from owslib.fes import PropertyIsEqualTo, PropertyIsGreaterThan, PropertyIsLessThan from owslib.fes import And from pydov.search.bodemobservatie import BodemobservatieSearch bodemobservatie = BodemobservatieSearch() # all layers intersect the layer 0-23cm carbon_observaties = bodemobservatie.search( location=GmlFilter(gent_poly, Within), query=And([ PropertyIsEqualTo(propertyname="parameter", literal="Organische C - percentage"), PropertyIsGreaterThan(propertyname="diepte_tot_cm", literal = '0'), PropertyIsLessThan(propertyname="diepte_van_cm", literal = '23') ]), return_fields=('pkey_bodemlocatie', 'waarde')) carbon_observaties = carbon_observaties.rename(columns={"waarde": "organic_c_percentage"}) carbon_observaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Then get all observations in Ghent for bulk density in the requested layer
density_observaties = bodemobservatie.search( location=GmlFilter(gent_poly, Within), query=And([ PropertyIsEqualTo(propertyname="parameter", literal="Bulkdensiteit - gemeten"), PropertyIsGreaterThan(propertyname="diepte_tot_cm", literal = '0'), PropertyIsLessThan(propertyname="diepte_van_cm", literal = '23') ]), return_fields=('pkey_bodemlocatie', 'waarde')) density_observaties = density_observaties.rename(columns={"waarde": "bulkdensity"}) density_observaties.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Merge the results together based on their bodemlocatie. Only the records where both parameters exist remain (an inner join).
import pandas as pd merged = pd.merge(carbon_observaties, density_observaties, on="pkey_bodemlocatie") merged.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
Filter Aardewerk soil locations Since we know that Aardewerk soil locations use a specific naming prefix, we can build a query to select them. Since we only need to match a partial string in the name, we will build a query using the PropertyIsLike operator to find all Aardewerk bodemlocaties. We use max_features=10 to limit the results to 10.
from owslib.fes import PropertyIsLike query = PropertyIsLike(propertyname='naam', literal='KART_PROF_%', wildCard='%') df = bodemlocatie.search(query=query, max_features=10) df.head()
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
As seen in the soil data example, we can use the pkey_bodemlocatie as a permanent link to the information of these bodemlocaties:
for pkey_bodemlocatie in set(df.pkey_bodemlocatie): print(pkey_bodemlocatie)
docs/notebooks/search_bodem.ipynb
DOV-Vlaanderen/pydov
mit
What about Spark DataFrames? No problem! We can easily perform the same steps on a Spark DataFrame. One important thing to note here is that we need to include a jar file when we create our Spark session. This is used by Spark to create the histograms using Histogrammar. The jar file will be automatically downloaded the first time you run this command.
# download histogrammar jar files if not already installed, used for histogramming of spark dataframe try: from pyspark.sql import SparkSession from pyspark.sql.functions import col from pyspark import __version__ as pyspark_version pyspark_installed = True except ImportError: print("pyspark needs to be installed for this example") pyspark_installed = False # this is the jar file for spark 3.0 # for spark 2.X, in the jars string, for both jar files change "_2.12" into "_2.11". if pyspark_installed: scala = '2.12' if int(pyspark_version[0]) >= 3 else '2.11' hist_jar = f'io.github.histogrammar:histogrammar_{scala}:1.0.20' hist_spark_jar = f'io.github.histogrammar:histogrammar-sparksql_{scala}:1.0.20' spark = SparkSession.builder.config( "spark.jars.packages", f'{hist_spark_jar},{hist_jar}' ).getOrCreate() sdf = spark.createDataFrame(df)
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Filling histograms with spark Filling histograms with spark dataframes is just as simple as it is with pandas dataframes.
# example: filling from a pandas dataframe hist = hg.SparselyHistogram(binWidth=100, quantity='transaction') hist.fill.numpy(df) hist.plot.matplotlib(); # for spark you will need this spark column function: if pyspark_installed: from pyspark.sql.functions import col
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Let's make the same histogram but from a spark dataframe. There are just two differences: - When declaring a histogram, always set quantity to col('column_name') instead of 'column_name' - When filling the histogram from a dataframe, use the fill.sparksql() method instead of fill.numpy().
# example: filling from a spark dataframe if pyspark_installed: hist = hg.SparselyHistogram(binWidth=100, quantity=col('transaction')) hist.fill.sparksql(sdf) hist.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Apart from these two differences, all functionality is the same between pandas and spark histograms! Like pandas, we can also fill histograms directly from the dataframe:
if pyspark_installed: h2 = sdf.hg_SparselyProfileErr(25, col('longitude'), col('age')) h2.plot.matplotlib(); if pyspark_installed: h3 = sdf.hg_TwoDimensionallySparselyHistogram(25, col('longitude'), 10, col('latitude')) h3.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
All examples below also work with spark dataframes. Making many histograms at once Histogrammar has a nice method to make many histograms in one go. By default, automagical binning is applied to make the histograms.
hists = df.hg_make_histograms() # histogrammar has made histograms of all features, using an automated binning. hists.keys() h = hists['transaction'] h.plot.matplotlib(); # you can select which features you want to histogram with features=: hists = df.hg_make_histograms(features = ['longitude', 'age', 'eyeColor']) # you can also make multi-dimensional histograms # here longitude is the first axis of each histogram. hists = df.hg_make_histograms(features = ['longitude:age', 'longitude:age:eyeColor'])
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Working with timestamps
# Working with a dedicated time axis, make histograms of each feature over time. hists = df.hg_make_histograms(time_axis="date") hists.keys() h2 = hists['date:age'] h2.plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Histogrammar does not support pandas' timestamps natively, but converts timestamps into nanoseconds since 1970-1-1.
h2.bin_edges()
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
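A quick check of the nanosecond convention using plain pandas (one day is 86400 seconds, i.e. 86400 * 10**9 nanoseconds):

```python
import pandas as pd

ts = pd.Timestamp('1970-01-02')
print(ts.value)                   # nanoseconds since 1970-01-01
print(ts.value == 86400 * 10**9)  # True
```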
The datatype shows the datetime though:
h2.datatype # convert these back to timestamps with: pd.Timestamp(h2.bin_edges()[0]) # For the time axis, you can set the binning specifications with time_width and time_offset: hists = df.hg_make_histograms(time_axis="date", time_width='28d', time_offset='2014-1-4', features=['date:isActive', 'date:age']) hists['date:isActive'].plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
Setting binning specifications
# histogram selections. Here 'date' is the first axis of each histogram. features=[ 'date', 'latitude', 'longitude', 'age', 'eyeColor', 'favoriteFruit', 'transaction' ] # Specify your own binning specifications for individual features or combinations thereof. # This bin specification uses open-ended ("sparse") histograms; unspecified features get # auto-binned. The time-axis binning, when specified here, needs to be in nanoseconds. bin_specs={ 'longitude': {'binWidth': 10.0, 'origin': 0.0}, 'latitude': {'edges': [-100, -75, -25, 0, 25, 75, 100]}, 'age': {'num': 100, 'low': 0, 'high': 100}, 'transaction': {'centers': [-1000, -500, 0, 500, 1000, 1500]}, 'date': {'binWidth': pd.Timedelta('4w').value, 'origin': pd.Timestamp('2015-1-1').value} } # this binning specification is making: # - a sparse histogram for: longitude # - an irregular binned histogram for: latitude # - a closed-range evenly spaced histogram for: age # - a histogram centered around bin centers for: transaction hists = df.hg_make_histograms(features=features, bin_specs=bin_specs) hists.keys() hists['transaction'].plot.matplotlib(); # all available bin specifications are (just examples): bin_specs = {'x': {'bin_width': 1, 'bin_offset': 0}, # SparselyBin histogram 'y': {'num': 10, 'low': 0.0, 'high': 2.0}, # Bin histogram 'x:y': [{}, {'num': 5, 'low': 0.0, 'high': 1.0}], # SparselyBin vs Bin histograms 'a': {'edges': [0, 2, 10, 11, 21, 101]}, # IrregularlyBin histogram 'b': {'centers': [1, 6, 10.5, 16, 20, 100]}, # CentrallyBin histogram 'c': {'max': True}, # Maximize histogram 'd': {'min': True}, # Minimize histogram 'e': {'sum': True}, # Sum histogram 'z': {'deviate': True}, # Deviate histogram 'f': {'average': True}, # Average histogram 'a:f': [{'edges': [0, 10, 101]}, {'average': True}], # IrregularlyBin vs Average histograms 'g': {'thresholds': [0, 2, 10, 11, 21, 101]}, # Stack histogram 'h': {'bag': True}, # Bag histogram } # to set binning specs for a specific 2d histogram, you can do this: # if these are not provide, the 1d binning specifications are picked up for 'a:f' bin_specs = {'a:f': [{'edges': [0, 10, 101]}, {'average': True}]} # For example features = ['latitude:age', 'longitude:age', 'age', 'longitude'] bin_specs = { 'latitude': {'binWidth': 25}, 'longitude:': {'edges': [-100, -75, -25, 0, 25, 75, 100]}, 'age': {'deviate': True}, 'longitude:age': [{'binWidth': 25}, {'average': True}], } hists = df.hg_make_histograms(features=features, bin_specs=bin_specs) h = hists['latitude:age'] h.bins hists['longitude:age'].plot.matplotlib();
histogrammar/notebooks/histogrammar_tutorial_advanced.ipynb
histogrammar/histogrammar-python
apache-2.0
We just defined a BitwiseAnd class that takes one integer column as input, and returns a scalar output of the same type as the input. This matches both the requirements of a reduction and the specifics of the function that we want to implement. Note: It is very important that you write the correct argument rules and output type here. The expression will not work otherwise. Step 2: Define the API Because every reduction in ibis has the ability to filter out values during aggregation (a typical feature in databases and analytics tools), to make an expression out of BitwiseAnd we need to pass an additional argument, where, to our BitwiseAnd constructor.
from ibis.expr.types import IntegerColumn # not IntegerValue! reductions are only valid on columns def bitwise_and(integer_column, where=None): return BitwiseAnd(integer_column, where=where).to_expr() IntegerColumn.bitwise_and = bitwise_and
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Interlude: Create some expressions using bitwise_and
import ibis t = ibis.table([('bigint_col', 'int64'), ('string_col', 'string')], name='t') t.bigint_col.bitwise_and() t.bigint_col.bitwise_and(t.string_col == '1')
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Step 3: Turn the Expression into SQL
import sqlalchemy as sa @ibis.postgres.compiles(BitwiseAnd) def compile_sha1(translator, expr): # pull out the arguments to the expression arg, where = expr.op().args # compile the argument compiled_arg = translator.translate(arg) # call the appropriate postgres function agg = sa.func.bit_and(compiled_arg) # handle a non-None filter clause if where is not None: return agg.filter(translator.translate(where)) return agg
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Step 4: Putting it all Together Connect to the ibis_testing database NOTE: To be able to execute the rest of this notebook you need to run the following command from your ibis clone: `sh ci/build.sh`
con = ibis.postgres.connect( user='postgres', host='postgres', password='postgres', database='ibis_testing' )
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Create and execute a bitwise_and expression
t = con.table('functional_alltypes') t expr = t.bigint_col.bitwise_and() expr sql_expr = expr.compile() print(sql_expr) expr.execute()
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Let's see what a bitwise_and call looks like with a where argument
expr = t.bigint_col.bitwise_and(where=(t.bigint_col == 10) | (t.bigint_col == 40)) expr result = expr.execute() result
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
Let's confirm that the bitwise AND of 10 and 40 is in fact 8
10 & 40 print(' {:0>8b}'.format(10)) print('& {:0>8b}'.format(40)) print('-' * 10) print(' {:0>8b}'.format(10 & 40))
docs/source/notebooks/tutorial/10-Adding-a-new-reduction-expression.ipynb
deepfield/ibis
apache-2.0
For the estimation of the 2D SFS, realSFS has only taken sites that had data from at least 9 individuals in each population (see assembly.sh, lines 1423 onwards).
sfs2d_unfolded.S()
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
The 2D spectrum contains counts from 60k sites that are variable in par or ery or both.
import pylab %matplotlib inline # note this needs to be in the same cell as the dadi plotting function call to take effect pylab.rcParams['font.size'] = 14.0 pylab.rcParams['figure.figsize'] = [12.0, 10.0] dadi.Plotting.plot_single_2d_sfs(sfs2d_unfolded, vmin=1, cmap=pylab.cm.jet) %psource dadi.Plotting.plot_single_2d_sfs
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
More colormaps
sfs2d_folded = sfs2d_unfolded.fold() # plot the folded GLOBAL minor allele frequency spectrum dadi.Plotting.plot_single_2d_sfs(sfs2d_folded, vmin=1, cmap=pylab.cm.jet) # setting the smallest grid size slightly larger than the largest population sample size (36) pts_l = [40, 50, 60]
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
The fitting of parameters for various 1D models to the SFS's of par and ery has indicated the following: - ery has undergone a population size increase by >20 fold (between about 1-2 $\times2N_{ref}$ generations ago) and later (<1 $\times2N_{ref}$ generations ago) a decrease to about 15% of the ancient population size - par has undergone only one size change to <10% of the ancient population size; this is inferred to have happened in the distant past, about 2-4 ($\times 2N_{ref}$) generations ago I think it would be good to incorporate this information in the specification of a more complex 2D model.
%pinfo dadi.Demographics2D.split_mig
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
There are a couple of built-in models that I could use, but I think I need a custom model here that includes the information from the 1D model fitting. I would like to write a model function that specifies an ancient split between ery and par, then a population decline in par that lasts until the present and later an exponential growth in ery that is more recently followed by a population decline. An alternative model to test would be a population decline in the ancestral population, followed by the split between ery and par, later a population increase in ery which is more recently followed by a population decline.
def split_1grow_2decline_1decline_nomig((nu1s, nu2s, nu2f, nu1b, nu1f, Ts, T2, Tb, Tf), (n1, n2), pts): """ model function: specifies an ancient split, followed by growth in pop1 and decline in pop2, later also decline in pop1 nu1s: rel. size of pop1 after split nu2s: rel. size of pop2 after split nu2f: final rel. size for pop2 nu1b: rel. size of pop1 after first size change nu1f: final rel. size of pop1 Ts: time betweem population split and size change in pop2 T2: time between size change in pop2 and first size change in pop1 Tb: time between first and second size change in pop1 Tf: time between second size change in pop1 and present The population split happend Tf+Tb+T2+Ts (x2N) generations in the past. n1,n2: sample sizes pts: number of grid points to use in extrapolation """ # define grid xx = yy = dadi.Numerics.default_grid(pts) # phi for the equilibrium ancestral pop phi = dadi.PhiManip.phi_1D(xx) # population split into pop1 and pop2 phi = dadi.PhiManip.phi_1D_to_2D(xx, phi) # stepwise change in size for pop1 and pop2 after split phi = dadi.Integration.two_pops(phi, xx, Ts, nu2=nu2s, nu1=nu1s, m12=0, m21=0) # stepwise change in size for pop2 only phi = dadi.Integration.two_pops(phi, xx, T2, nu2=nu2f, nu1=nu1s, m12=0, m21=0) # stepwise change in size for pop1 only phi = dadi.Integration.two_pops(phi, xx, Tb, nu2=nu2f, nu1=nu1b, m12=0, m21=0) # stepwise change in size for pop1 only phi = dadi.Integration.two_pops(phi, xx, Tf, nu2=nu2f, nu1=nu1f, m12=0, m21=0) # calculate spectrum sfs = dadi.Spectrum.from_phi(phi, (n1, n2), (xx, yy)) return sfs
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
I wonder which population dadi assumes to be pop1. In the sfs2d spectrum object, ery is pop1 and par is pop2.
?dadi.PhiManip.phi_1D_to_2D # create link to function that specifies the model func = split_1grow_2decline_1decline_nomig # create extrapolating version of the model function func_ex = dadi.Numerics.make_extrap_log_func(func) ?split_1grow_2decline_1decline_nomig nu1s = 0.5 nu2s = 0.5 nu2f = 0.05 nu1b = 40 nu1f = 0.15 Ts = 0.1 T2 = 0.1 Tb = 0.1 Tf = 0.1 sfs2d_folded.sample_sizes model_spectrum = func_ex((nu1s, nu2s, nu2f, nu1b, nu1f, Ts, T2, Tb, Tf), sfs2d_folded.sample_sizes, pts_l) theta = dadi.Inference.optimal_sfs_scaling(model_spectrum.fold(), sfs2d_folded) theta dadi.Plotting.plot_2d_comp_multinom(model_spectrum.fold(), sfs2d_folded, vmin=1)
Data_analysis/SNP-indel-calling/dadi/05_2D.ipynb
claudiuskerth/PhDthesis
mit
Thought: For feature extraction, it would probably be faster to extract all time-domain vectors $y$ into a NumPy array and perform the necessary LibROSA operations across the rows of that array, possibly leveraging under-the-hood efficiencies ("1min 43s per loop" below). A rough sketch of this batching idea follows the cell below.
for i, row in df.iterrows(): session = nonrealtimetools.Session() builder = gk.generator.gendy1.make_builder(row) out = gk.generator.gendy1.build_out(builder) synthdef = builder.build() with session.at(0): synth_a = session.add_synth(duration=10, synthdef=synthdef) gk.util.render_session(session, this_dir, row["hash"]) y, sr = librosa.load(os.path.join(this_dir, "aif_files", row["hash"] + ".aiff")) _y_normed = librosa.util.normalize(y) _mfcc = librosa.feature.mfcc(y=_y_normed, sr=sr, n_mfcc=13) _cent = np.mean(librosa.feature.spectral_centroid(y=_y_normed, sr=sr)) _mfcc_mean = gk.feature_extraction.get_stats(_mfcc)["mean"] X_row = np.append(_mfcc_mean, _cent) if i==0: X_mtx = X_row else: X_mtx = np.vstack((X_mtx, X_row)) X_mtx.shape def col_rename_4_mfcc(c): if (c < 13): return "mfcc_mean_{}".format(c) else: return "spectral_centroid" pd.DataFrame(X_mtx).rename_axis(lambda c: col_rename_4_mfcc(c), axis=1) from sklearn import linear_model from sklearn import model_selection from sklearn import preprocessing import sklearn as sk import matplotlib.pyplot as plt %matplotlib inline pmtx.shape X_mtx.shape X_mtx[0] X_train, X_test, y_train, y_test = sk.model_selection.train_test_split( X_mtx, pmtx, test_size=0.4, random_state=1) # Create linear regression objectc regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(X_train, y_train) # The coefficients print('Coefficients: \n', regr.coef_) # The mean squared error print("Mean squared error: %.2f" % np.mean((regr.predict(X_test) - y_test) ** 2)) # Explained variance score: 1 is perfect prediction print('Variance score: %.2f' % regr.score(X_test, y_test))
Notebooks/Experiments/17_01_10_regress_w_gendy_dists.ipynb
spacecoffin/GravelKicker
apache-2.0
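A rough, self-contained sketch of the batching idea mentioned above, using placeholder signals of equal length instead of the rendered audio and leaving out the LibROSA feature calls:

```python
import numpy as np

# Stand-ins for the loaded time-domain vectors y (assumed equal length)
signals = [np.random.randn(22050) for _ in range(8)]

Y = np.stack(signals)                                    # shape (n_files, n_samples)
Y_normed = Y / np.max(np.abs(Y), axis=1, keepdims=True)  # peak-normalize every row at once
print(Y_normed.shape)
```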
In a nutshell, method 1 generates an array with shape (x, y, z) -- specifically, (540, 717, 1358). Method 2 generates a NumPy array with shape (z, y, x) -- specifically, (1358, 717, 540). Since we want the first axis to be z slices, the original method was giving me x-slices (hence the cigar-tube dimensions). To interconvert, we can either use the rawData approach after directly calling from ndstore, or take our NumPy array after loading from nibabel and use NumPy's swapaxes method to swap two of the dimensions (shown below).
## if we have (i, j, k), we want (k, j, i) (converts nibabel format to sitk format) new_im = newer_img.swapaxes(0,2) # just swap i and k
Tony/ipynb/Ilastik on Raw and HistEq Fear199 Data.ipynb
NeuroDataDesign/seelviz
apache-2.0
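A minimal shape check of the axis swap described above, using a small stand-in array with the same axis order (the full (540, 717, 1358) volume would be several gigabytes):

```python
import numpy as np

a = np.zeros((5, 7, 13))        # stand-in with the (x, y, z) layout from nibabel
print(a.swapaxes(0, 2).shape)   # (13, 7, 5) -> (z, y, x), as with newer_img.swapaxes(0, 2)
```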
Task 2: Generating raw TIFF slices. Now that I have appropriate coordinates, I generated a subset of TIFF slices to run the training module for the image classifier, using the script below:
plane = 0; for plane in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 100, 101, 102, 103, 104): output = np.asarray(rawData[plane]) ## Save as TIFF for Ilastik scipy.misc.toimage(output).save('RAWoutfile' + TokenName + 'ITK' + str(plane) + '.tiff')
Tony/ipynb/Ilastik on Raw and HistEq Fear199 Data.ipynb
NeuroDataDesign/seelviz
apache-2.0
Take a first look at the data
data = apc.asbestos() data.head()
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Set up a model and attach the data to it.
model = apc.Model() model.data_from_df(data)
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Now we look at a first plot of the data. We plot the response over each of the three time-scales.
model.plot_data_sums(figsize=(10,4))
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Martinez Miranda et al. (2015) drop age groups older than 89 due to sparsity. We redo the plot, looking exclusively at data for the oldest age groups.
model.sub_model(age_from_to=(80,None)).plot_data_sums(figsize=(10,4))
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
We can see that there is indeed a sharp drop towards the end of the sample. Thus, we set up a sub-model that does not include these groups.
model = model.sub_model(age_from_to=(None, 89))
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
To confirm, we take a look at the data in vector form as organized by data_from_df:
model.data_vector.tail()
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Success! The oldest age groups have been removed. Next, we plot the data of one time-scale within another.
model.plot_data_within(figsize=(10,8), logy=True)
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
From the cohort within period plot (bottom middle), we can see that mortality seems to slowly taper off for the 1917-1938 cohorts while that for the 1939-1960 cohorts appears to still be rising. Next, we compute a deviance table for a Poisson model of the data.
model.fit_table('poisson_response') model.deviance_table
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
We see that an age-period-cohort model cannot be rejected with a p-value of 0.85 (against a saturated model with as many parameters as observations). The same holds for an age-cohort model with a p-value of 0.78. A reduction from an age-period-cohort to an age-cohort model yields a p-value of 0.03. Miranda et al. point out that it may still be acceptable to use this model since it eases forecasting substantially: it makes it unnecessary to extrapolate the period parameters into the future which would introduce another source of uncertainty. Further, simpler models often seem to be beneficial for forecasting. Remark: see Nielsen (2014) for an explanation of the individual predictors. We thus fit an age-cohort model to the data.
model.fit('poisson_response', 'AC')
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
We can now plot the parameters and their standard errors.
model.plot_parameters(around_coef=False)
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
The level combined with the two linear trends specify a plane. The detrended double sums of double differences in the bottom row show deviations over and above this plane. To obtain the fitted value of the linear predictor for a given age and cohort, we add together the level, and the value of the linear trends and detrended double sums at the relevant age and cohort. The fitted value for the response would be the exponential of this value. The detrended double sums start and end in zero by design. We can move on to look at a residual plot.
model.plot_residuals('deviance')
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
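Schematically, the decomposition described above can be written as follows (the notation is ours, not the package's, and the anchoring of the trends is left implicit):

$$
\hat\eta_{ac} = \text{level} + \text{trend}_{\text{age}}(a) + \text{trend}_{\text{cohort}}(c) + \widetilde{\text{DS}}_{\text{age}}(a) + \widetilde{\text{DS}}_{\text{cohort}}(c),
\qquad
\hat\mu_{ac} = \exp(\hat\eta_{ac}),
$$

where $\widetilde{\text{DS}}$ denotes the detrended double sums of double differences.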
We would like this plot to look like white noise. While this looks quite reasonable for the upper age groups, the pattern appears somewhat different for the lower ages up to about 37. It seems that the fit there is generally somewhat better. From what we saw above, this may relate to the fact that the death counts for these age groups are quite low so that predicting something close to zero is going to give good results. Fitting these cells seems somewhat 'easier'. Next, we move on to forecast from the model. The idea is to forecast mortality for future periods based on parameter estimates that are already available from the data. We can visualize that in a heatmap plot in age-cohort space.
model.plot_data_heatmaps(space='AC')
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Here, the idea is to fill in the empty values in the bottom right triangle. Estimates for age and cohort effects for these cells are already available. Since we do not have a period effect in the model, this is all we need. We can now forecast from the model.
model.forecast()
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
This call generated (distribution) forecasts for individual cells, as well as aggregated by age, period and cohort. In the heatmap plot above, these correspond to row, column and (counter-)diagonal sums in the lower right triangle, respectively. Finally, a forecast for the total, that is the sum over all cells in the triangle, is available. We find the peak in the point forecasts.
peak_year = model.forecasts['Period']['point_forecast'].idxmax() print('Peak year is {}.'.format(peak_year)) model.forecasts['Period'].loc[peak_year-2:peak_year+2]
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
We can see that the generated arrays include not just the point forecast but also standard errors (broken down into process and estimation error) and quantile forecasts. Next, we plot the forecasts aggregated by period.
model.plot_forecast()
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
The plot includes one and two standard error bands. If we look closely, we can see that the fit seems to be somewhat worse for the last couple of periods before the sample ends. One way to correct this is to apply intercept correction. Martinez Miranda et al. (2015) suggest multiplying the point forecasts by the ratio of the last realization to the last fitted value.
final_realized = model.data_vector.sum(level='Period').sort_index().iloc[-1][0] final_fitted = model.fitted_values.sum(level='Period').sort_index().iloc[-1] print('Death counts for last period: {}'.format(final_realized)) print('Fitted value for last period: {:.2f}'.format(final_fitted)) print('Intercept correction factor: {:.2f}'.format(final_realized/final_fitted))
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
We take a look at the plot with intercept correction, limiting our attention to the period from 1990 to 2040.
model.plot_forecast(ic=True, from_to=(1990,2040))
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
This plot does have a more natural appearance, lacking the jump at the end of the sample. We can also look at forecasts over a different time scale, for example by age. In this case, we already have some data available for all age groups under consideration. We may then be interested not just in the forecasts, but in the sum of response and forecast.
model.plot_forecast(by='Age', aggregate=True)
apc/vignettes/vignette_mesothelioma.ipynb
JonasHarnau/apc
gpl-3.0
Creating and Modeling a Noisy Training Set Our biggest step in the data programming pipeline is the creation - and modeling - of a noisy training set. We'll approach this in three main steps: Creating labeling functions (LFs): This is where most of our development time would actually go if this were a real application. Labeling functions encode our heuristics and weak supervision signals to generate (noisy) labels for our training candidates. Applying the LFs: Here, we actually use them to label our candidates! Training a generative model of our training set: Here we learn a model over our LFs, learning their respective accuracies automatically. This will allow us to combine them into a single, higher-quality label set. We'll also add some detail on how to go about developing labeling functions and then debugging our model of them to improve performance. 1. Creating Labeling Functions In Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing labeling functions (LFs) (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below. A labeling function is just a Python function that accepts a Candidate and returns 1 to mark the Candidate as true, -1 to mark the Candidate as false, and 0 to abstain from labeling the Candidate (note that the non-binary classification setting is covered in the advanced tutorials!). In the next stages of the Snorkel pipeline, we'll train a model to learn the accuracies of the labeling functions and reweight them accordingly, and then use them to train a downstream model. It turns out that by doing this, we can get high-quality models even with lower-quality labeling functions. So they don't need to be perfect! Now on to writing some:
import re from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, )
tutorials/intro/Intro_Tutorial_2.ipynb
jasontlam/snorkel
apache-2.0
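As a warm-up, here is a minimal sketch of what such a labeling function can look like, in the spirit of the tutorial's spouse task; the word list is an illustration, not the tutorial's own LF:

```python
# Label True (1) if a marriage-related word appears between the two person
# mentions of the candidate; otherwise abstain (0).
spouse_words = {'spouse', 'wife', 'husband', 'married'}

def LF_marriage_words_between(c):
    return 1 if len(spouse_words.intersection(get_between_tokens(c))) > 0 else 0
```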
Now I’ve harped on about vectorization in the last couple of videos and I’ve told you that it’s great but I haven’t shown you how it’s so great. Here are the two powerful reasons - Concise - Efficient The fundamental idea behind array programming is that operations apply at once to an entire set of values. This makes it a high-level programming model as it allows the programmer to think and operate on whole aggregates of data, without having to resort to explicit loops of individual scalar operations. You can read more here: https://en.wikipedia.org/wiki/Array_programming
npa
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
With vectorization we can apply changes to the entire array extremely efficiently, no more for loops. If we want to double the array, we just multiply by 2 if we want to cube it we just cube it.
npa * 2 npa ** 3 [x * 2 for x in npa]
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
So who cares? Again it's going to be an efficiency thing, just like boolean selection. Let's try something a bit more complex. Define a function named new_func that cubes the value if it is less than 10 and squares it if it is greater than or equal to 10.
def new_func(numb): if numb < 10: return numb**3 else: return numb**2 new_func(npa)
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
However we can’t just pass in the whole vector because we’re going to get this array ambiguity.
?np.vectorize
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
We need to vectorize this operation and we do that with np.vectorize We can then apply that to our entire array and it takes care of the complexity for us. We can think in terms of the data without having to think about each individual element.
vect_new_func = np.vectorize(new_func) type(vect_new_func) vect_new_func(npa) [new_func(x) for x in npa]
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
It's also much faster to vectorize operations, and while these are simple examples the benefits will become apparent as we continue through this course. (This has changed since Python 3: the list comprehension has gotten much faster. However, that doesn't mean vectorization is slower, just that it's a bit heavier, because it places a lot more tools at your disposal, as we'll see in the next video.)
%timeit [new_func(x) for x in npa] %timeit vect_new_func(npa) npa2 = np.random.random_integers(0,100,20*1000)
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
Speed comparisons with size.
%timeit [new_func(x) for x in npa2] %timeit vect_new_func(npa2)
3 - NumPy Basics/3-3 NumPy Array Basics - Vectorization.ipynb
anabranch/data_analysis_with_python_and_pandas
apache-2.0
Line magics Prefix and infix support
%plc (1 and? 1) %plc (True ⊕ False ⊕ True ⊕ False ⊕ True) %plc ( ( ∧ 1 1 1 ) ∨ ( ∧ 1 1 0 ) )
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Cell magics Prefix and infix support
%%plc #$(1 and? (or? 0 1))
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Registering additional operators
%%plc ; register + sign for infix notation #>+ ; evaluate code #$(1 + (2 + (3)))
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Adding more complex custom operators
%%plc ; use operator macro to add mean operator with custom symbol (defoperator mean x̄ [&rest args]   (/ (sum args) (len args))) ; try prefix notation with nested structure (print (x̄ 1 2 3 4)) (print (x̄ 1 2 (x̄ 3 4))) ; note that infix notation in cell magics needs to be prefixed with ; #$ reader macro marker while in line magics it is not required (print #$(1 x̄ 2 x̄ 3 x̄ 4))
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Order of precedence By default the order of precedence is from left to right. Here we will use defoperators to define additional operators beyond the logical ones. Then, for variety, we use the defmixfix macro to evaluate the clause. The first evaluation gives 9 as an answer because evaluation starts from 1 + 2 and the result is then multiplied by 3; the second evaluation applies the * before + precedence set by defprecedence.
%%plc (defoperators * +) (print "First" (defmixfix 1 + 2 * 3)) (defprecedence * +) (print "Second" (defmixfix 1 + 2 * 3))
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Mixing Hy and Python in the same cell
# the first line is hy code supporting infix and prefix logical clauses %plc ( 1 and? 1 or? (0) ) # the second line is python code. this is possible because above code is line magics [a for a in (1, 2, 3)]
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit
Normal Hy language support
%%plc ; just define a function ... (defn f [x] (print x)) ; ... and call it (f 3.1416) ; cant use python code in plc cell magics! %%plc ; set up variables (setv A True B True C True) (setv clause "( A ∧ B ∧ C )") ; use variables on clause (print clause "=" #$( A ∧ B ∧ C ))
Hy -level PLCParser.ipynb
markomanninen/PLCParser
mit