3.4. Integrated Timestep Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Timestep for the aerosol model (in seconds)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
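For reference, a completed cell follows the same pattern with a concrete value passed to DOC.set_value. The 720-second timestep below is purely illustrative, not the EMAC value:

```python
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE (the 720 here is illustrative only):
DOC.set_value(720)
```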
3.5. Integrated Scheme Type Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Specify the type of timestep scheme
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4. Key Properties --> Meteorological Forcings 4.1. Variables 3D Is Required: FALSE    Type: STRING    Cardinality: 0.1 Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4.2. Variables 2D Is Required: FALSE    Type: STRING    Cardinality: 0.1 Two dimensional forcing variables, e.g. land-sea mask definition
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
4.3. Frequency Is Required: FALSE    Type: INTEGER    Cardinality: 0.1 Frequency with which meteorological forcings are applied (in seconds).
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5. Key Properties --> Resolution Resolution in the aerosol model grid 5.1. Name Is Required: TRUE    Type: STRING    Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70, etc.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.2. Canonical Horizontal Resolution Is Required: FALSE    Type: STRING    Cardinality: 0.1 Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees, etc.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.3. Number Of Horizontal Gridpoints Is Required: FALSE    Type: INTEGER    Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.4. Number Of Vertical Levels Is Required: FALSE    Type: INTEGER    Cardinality: 0.1 Number of vertical levels resolved on computational grid.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
5.5. Is Adaptive Grid Is Required: FALSE    Type: BOOLEAN    Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6. Key Properties --> Tuning Applied Tuning methodology for aerosol model 6.1. Description Is Required: TRUE    Type: STRING    Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.2. Global Mean Metrics Used Is Required: FALSE    Type: STRING    Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.3. Regional Metrics Used Is Required: FALSE    Type: STRING    Cardinality: 0.N List of regional metrics of mean state used in tuning model/component
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
6.4. Trend Metrics Used Is Required: FALSE    Type: STRING    Cardinality: 0.N List observed trend metrics used in tuning model/component
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7. Transport Aerosol transport 7.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of transport in atmospheric aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7.2. Scheme Is Required: TRUE    Type: ENUM    Cardinality: 1.1 Method for aerosol transport modeling
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Specific transport scheme (eulerian)" # "Specific transport scheme (semi-lagrangian)" # "Specific transport scheme (eulerian and semi-lagrangian)" # "Specific transport scheme (lagrangian)" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7.3. Mass Conservation Scheme Is Required: TRUE    Type: ENUM    Cardinality: 1.N Method used to ensure mass conservation.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Mass adjustment" # "Concentrations positivity" # "Gradients monotonicity" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
7.4. Convention Is Required: TRUE    Type: ENUM    Cardinality: 1.N Transport by convection
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.transport.convention') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Uses Atmospheric chemistry transport scheme" # "Convective fluxes connected to tracers" # "Vertical velocities connected to tracers" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8. Emissions Atmospheric aerosol emissions 8.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of emissions in atmospheric aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.2. Method Is Required: TRUE    Type: ENUM    Cardinality: 1.N Method used to define aerosol species (several methods allowed because the different species may not use the same method).
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Prescribed (climatology)" # "Prescribed CMIP6" # "Prescribed above surface" # "Interactive" # "Interactive above surface" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.3. Sources Is Required: FALSE    Type: ENUM    Cardinality: 0.N Sources of the aerosol species taken into account in the emissions scheme
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Volcanos" # "Bare ground" # "Sea surface" # "Lightning" # "Fires" # "Aircraft" # "Anthropogenic" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.4. Prescribed Climatology Is Required: FALSE    Type: ENUM    Cardinality: 0.1 Specify the climatology type for aerosol emissions
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Interannual" # "Annual" # "Monthly" # "Daily" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.5. Prescribed Climatology Emitted Species Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of aerosol species emitted and prescribed via a climatology
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.6. Prescribed Spatially Uniform Emitted Species Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of aerosol species emitted and prescribed as spatially uniform
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.7. Interactive Emitted Species Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of aerosol species emitted and specified via an interactive method
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.8. Other Emitted Species Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of aerosol species emitted and specified via an "other method"
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
8.9. Other Method Characteristics Is Required: FALSE    Type: STRING    Cardinality: 0.1 Characteristics of the "other method" used for aerosol emissions
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
9. Concentrations Atmospheric aerosol concentrations 9.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of concentrations in atmospheric aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
9.2. Prescribed Lower Boundary Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of species prescribed at the lower boundary.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
9.3. Prescribed Upper Boundary Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of species prescribed at the upper boundary.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
9.4. Prescribed Fields Mmr Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of species prescribed as mass mixing ratios.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
9.5. Prescribed Fields AOD Plus CCN Is Required: FALSE    Type: STRING    Cardinality: 0.1 List of species prescribed as AOD plus CCNs.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
10. Optical Radiative Properties Aerosol optical and radiative properties 10.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of optical and radiative properties
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
11. Optical Radiative Properties --> Absorption Absorption properties in aerosol scheme 11.1. Black Carbon Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
11.2. Dust Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
11.3. Organics Is Required: FALSE    Type: FLOAT    Cardinality: 0.1 Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
12. Optical Radiative Properties --> Mixtures 12.1. External Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is there external mixing with respect to chemical composition?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
12.2. Internal Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is there internal mixing with respect to chemical composition?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
12.3. Mixing Rule Is Required: FALSE    Type: STRING    Cardinality: 0.1 If there is internal mixing with respect to chemical composition then indicate the mixing rule
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13. Optical Radiative Properties --> Impact Of H2o 13.1. Size Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does H2O impact size?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.2. Internal Mixture Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does H2O impact internal mixture?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14. Optical Radiative Properties --> Radiative Scheme Radiative scheme for aerosol 14.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of radiative scheme
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.2. Shortwave Bands Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Number of shortwave bands
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
14.3. Longwave Bands Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Number of longwave bands
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15. Optical Radiative Properties --> Cloud Interactions Aerosol-cloud interactions 15.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of aerosol-cloud interactions
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.2. Twomey Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Is the Twomey effect included?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.3. Twomey Minimum Ccn Is Required: FALSE    Type: INTEGER    Cardinality: 0.1 If the Twomey effect is included, then what is the minimum CCN number?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.4. Drizzle Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does the scheme affect drizzle?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.5. Cloud Lifetime Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1 Does the scheme affect cloud lifetime?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
15.6. Longwave Bands Is Required: TRUE    Type: INTEGER    Cardinality: 1.1 Number of longwave bands
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16. Model Aerosol model 16.1. Overview Is Required: TRUE    Type: STRING    Cardinality: 1.1 Overview of atmospheric aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.2. Processes Is Required: TRUE    Type: ENUM    Cardinality: 1.N Processes included in the Aerosol model.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dry deposition" # "Sedimentation" # "Wet deposition (impaction scavenging)" # "Wet deposition (nucleation scavenging)" # "Coagulation" # "Oxidation (gas phase)" # "Oxidation (in cloud)" # "Condensation" # "Ageing" # "Advection (horizontal)" # "Advection (vertical)" # "Heterogeneous chemistry" # "Nucleation" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.3. Coupling Is Required: FALSE    Type: ENUM    Cardinality: 0.N Other model components coupled to the Aerosol model
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Radiation" # "Land surface" # "Heterogeneous chemistry" # "Clouds" # "Ocean" # "Cryosphere" # "Gas phase chemistry" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.4. Gas Phase Precursors Is Required: TRUE    Type: ENUM    Cardinality: 1.N List of gas phase aerosol precursors.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.gas_phase_precursors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "DMS" # "SO2" # "Ammonia" # "Iodine" # "Terpene" # "Isoprene" # "VOC" # "NOx" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.5. Scheme Type Is Required: TRUE    Type: ENUM    Cardinality: 1.N Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Bulk" # "Modal" # "Bin" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
16.6. Bulk Scheme Species Is Required: TRUE    Type: ENUM    Cardinality: 1.N List of species covered by the bulk scheme.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.model.bulk_scheme_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon / soot" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule)" # "Other: [Please specify]" # TODO - please enter value(s)
notebooks/messy-consortium/cmip6/models/emac-2-53-aerchem/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - A dictionary to go from the words to an id, which we'll call vocab_to_int - A dictionary to go from the id to a word, which we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np import problem_unittests as tests from collections import Counter def create_lookup_tables(text): """ Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) """ counts = Counter(text) vocab = sorted(counts, key=counts.get, reverse=True) vocab_to_int = {word: ii for ii, word in enumerate(vocab)} int_to_vocab = {ii: word for ii, word in enumerate(vocab)} return (vocab_to_int, int_to_vocab) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_create_lookup_tables(create_lookup_tables)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add a delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup(): """ Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token """ # TODO: Implement Function punct_list = {'.': '||period||', ',': '||comma||', '"': '||quotation_mark||', ';': '||semicolon||', '!': '||exclamation_mark||', '?': '||question_mark||', '(': '||left_parentheses||', ')': '||right_parentheses||', '--': '||dash||', '\n': '||return||'} return punct_list """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_tokenize(token_lookup)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
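For context, here is a sketch of how such a token dictionary is typically applied before splitting on spaces; the actual application happens inside helper.preprocess_and_save_data, so this is illustrative only:

```python
def tokenize_text(text, token_dict):
    # Surround each punctuation symbol with spaces so it splits as its own word
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text

print(tokenize_text('Bye! See you tomorrow.', token_lookup()))
# Bye ||exclamation_mark||  See you tomorrow ||period||
```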
Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() len(int_text)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Build the Neural Network You'll build the components necessary to build an RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU
""" DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs(): """ Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) """ inputs = tf.placeholder(tf.int32, [None, None], name='input') targets = tf.placeholder(tf.int32, [None, None], name='targets') learning_rate = tf.placeholder(tf.float32, name='learning_rate') return (inputs, targets, learning_rate) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_inputs(get_inputs)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size): """ Create an RNN Cell and initialize it. :param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initial state) """ lstm_layers = [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(2)] cell = tf.contrib.rnn.MultiRNNCell(lstm_layers) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name="initial_state") return (cell, initial_state) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_init_cell(get_init_cell)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim): """ Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. """ embedding = tf.Variable(tf.truncated_normal((vocab_size, embed_dim), stddev=0.25)) embed = tf.nn.embedding_lookup(embedding, input_data) return embed """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_embed(get_embed)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs): """ Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) """ outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) final_state = tf.identity(final_state, name="final_state") return outputs, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_rnn(build_rnn)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size): """ Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :return: Tuple (Logits, FinalState) """ inputs = get_embed(input_data, vocab_size, rnn_size) outputs, final_state = build_rnn(cell, inputs) logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None) return logits, final_state """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_build_nn(build_nn)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2 3], [ 7 8 9]], # Batch of targets [[ 2 3 4], [ 8 9 10]] ], # Second Batch [ # Batch of Input [[ 4 5 6], [10 11 12]], # Batch of targets [[ 5 6 7], [11 12 13]] ] ] ```
def get_batches(int_text, batch_size, seq_length): """ Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch :param seq_length: The length of sequence :return: Batches as a Numpy array """ slice_size = batch_size * seq_length n_batches = len(int_text) // slice_size # We will drop the last few words to keep the batches equal in size used_data = int_text[0:n_batches * slice_size + 1] batches = [] for i in range(n_batches): input_batch = [] target_batch = [] for j in range(batch_size): start_idx = j * (n_batches * seq_length) + i * seq_length end_idx = start_idx + seq_length input_batch.append(used_data[start_idx: end_idx]) target_batch.append(used_data[start_idx + 1: end_idx + 1]) batches.append([input_batch, target_batch]) return np.array(batches) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_batches(get_batches)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
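As a quick sanity check (not part of the project tests), running the worked example from above should reproduce the two batches exactly:

```python
# Expected: first batch [[1 2 3], [7 8 9]] / [[2 3 4], [8 9 10]],
# second batch [[4 5 6], [10 11 12]] / [[5 6 7], [11 12 13]]
print(get_batches(list(range(1, 16)), 2, 3))
```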
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches between progress printouts.
# Number of Epochs num_epochs = 50 # Batch Size batch_size = 128 # RNN Size rnn_size = 1024 # Sequence Length seq_length = 16 # Learning Rate learning_rate = 0.001 # Show stats for every n number of batches show_every_n_batches = 11 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ save_dir = './save'
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size) # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]]) ) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients] train_op = optimizer.apply_gradients(capped_gradients)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
""" DON'T MODIFY ANYTHING IN THIS CELL """ batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved')
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Save Parameters Save seq_length and save_dir for generating a new TV script.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params((seq_length, save_dir))
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Checkpoint
""" DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params()
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph): """ Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) """ inputs = loaded_graph.get_tensor_by_name("input:0") initial_state = loaded_graph.get_tensor_by_name("initial_state:0") final_state = loaded_graph.get_tensor_by_name("final_state:0") probs = loaded_graph.get_tensor_by_name("probs:0") return (inputs, initial_state, final_state, probs) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_get_tensors(get_tensors)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab): """ Pick the next word in the generated text :param probabilities: Probabilities of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word """ return int_to_vocab[np.argmax(probabilities)] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_pick_word(pick_word)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
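Greedy argmax always returns the single most likely word, which tends to make the generated script repetitive. A hedged alternative worth experimenting with, not the method used above, is to sample the next word in proportion to its predicted probability:

```python
def pick_word_sampled(probabilities, int_to_vocab):
    # Sample a word id according to the softmax distribution
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]
```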
Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
gen_length = 300 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script)
DLND-tv-script-generation/dlnd_tv_script_generation.ipynb
Kulbear/deep-learning-nano-foundation
mit
2. Save frame and display JPG
frame = hdmi.frame() orig_img_path = '/home/xilinx/jupyter_notebooks/examples/data/orig.jpg' frame.save_as_jpeg(orig_img_path) Image(filename=orig_img_path)
Pynq-Z1/notebooks/examples/video_filters.ipynb
JorisBolsens/PYNQ
bsd-3-clause
3. Grayscale filter This cell should take ~50s to complete. Note that there are better ways (e.g., OpenCV, etc.) to do grayscale conversion, but this is just an example of doing it without using any additional library.
from pynq.drivers.video import MAX_FRAME_WIDTH frame_i = frame.frame height = hdmi.frame_height() width = hdmi.frame_width() for y in range(0, height): for x in range(0, width): offset = 3 * (y * MAX_FRAME_WIDTH + x) gray = round((0.299*frame_i[offset+2]) + (0.587*frame_i[offset+0]) + (0.114*frame_i[offset+1])) frame_i[offset:offset+3] = gray,gray,gray gray_img_path = '/home/xilinx/jupyter_notebooks/examples/data/gray.jpg' frame.save_as_jpeg(gray_img_path) Image(filename=gray_img_path)
Pynq-Z1/notebooks/examples/video_filters.ipynb
JorisBolsens/PYNQ
bsd-3-clause
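As the note above says, the per-pixel Python loop is what makes this cell slow. A vectorized NumPy sketch of the same conversion is shown below; it assumes frame_i exposes a writable buffer protocol, which has not been verified against the PYNQ frame object, so treat it as illustrative:

```python
import numpy as np

# View the frame buffer as (rows, MAX_FRAME_WIDTH, 3) without copying
buf = np.frombuffer(frame_i, dtype=np.uint8).reshape(-1, MAX_FRAME_WIDTH, 3)
view = buf[:height, :width].astype(np.float32)
# Same channel weights and ordering as the loop above
gray = np.round(0.299*view[..., 2] + 0.587*view[..., 0] + 0.114*view[..., 1])
buf[:height, :width] = gray[..., np.newaxis].astype(np.uint8)
```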
4. Sobel filter This cell should take ~80s to complete. Note that there are better ways (e.g., OpenCV, etc.) to do a Sobel filter, but this is just an example of doing it without using any additional library. Compute the Sobel filter output with the Sobel operators: $G_x= \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}$ $G_y= \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$
height = 1080 width = 1920 sobel = Frame(1920, 1080) frame_i = frame.frame for y in range(1,height-1): for x in range(1,width-1): offset = 3 * (y * MAX_FRAME_WIDTH + x) upper_row_offset = offset - MAX_FRAME_WIDTH*3 lower_row_offset = offset + MAX_FRAME_WIDTH*3 gx = abs(-frame_i[lower_row_offset-3] + frame_i[lower_row_offset+3] - 2*frame_i[offset-3] + 2*frame_i[offset+3] - frame_i[upper_row_offset-3] + frame_i[upper_row_offset+3]) gy = abs(frame_i[lower_row_offset-3] + 2*frame_i[lower_row_offset] + frame_i[lower_row_offset+3] - frame_i[upper_row_offset-3] - 2*frame_i[upper_row_offset] - frame_i[upper_row_offset+3]) grad = min(gx + gy,255) sobel.frame[offset:offset+3] = grad,grad,grad sobel_img_path = '/home/xilinx/jupyter_notebooks/examples/data/sobel.jpg' sobel.save_as_jpeg(sobel_img_path) Image(filename=sobel_img_path)
Pynq-Z1/notebooks/examples/video_filters.ipynb
JorisBolsens/PYNQ
bsd-3-clause
5. Free up space
hdmi.stop() del sobel del hdmi
Pynq-Z1/notebooks/examples/video_filters.ipynb
JorisBolsens/PYNQ
bsd-3-clause
We'll gather the contents of a single message. 2017_Jan_0 is one that includes a personal signature, as well as the standard Full Disclosure footer. 2017_Jan_45 is a message that includes a PGP signature.
year = '2005' month = 'Jan' id = '0' url = 'http://seclists.org/fulldisclosure/' + year + '/' + month + '/' + id r = requests.get(url) content = r.text from IPython.display import Pretty Pretty(content)
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
Each message in the FD list is wrapped in seclists.org code, including navigation, ads, and trackers, all irrelevant to us. The body of the reply is contained between two comments, <!--X-Body-of-Message--> and <!--X-Body-of-Message-End-->. BeautifulSoup isn't great at handling comments, so we first use simple indexing to extract the relevant chars. We'll then send it through BeautifulSoup so we can use its .text property to strip out the html tags. BS4 automatically adds tags to create valid html, so remember to parse using the generated <body> tags. What we end up with is a plaintext version of the message's body.
start = content.index('<!--X-Body-of-Message-->') + 24 end = content.index('<!--X-Body-of-Message-End-->') body = content[start:end] soup = BeautifulSoup(body, 'html5lib') bodyhtml = soup.find('body') raw = bodyhtml.text Pretty(raw)
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
Signature extraction Messages to the FD list usually end with a common footer: 2002-2005: _______________________________________________ Full-Disclosure - We believe in it. Charter: http://lists.netsys.com/full-disclosure-charter.html 2005-2014: _______________________________________________ Full-Disclosure - We believe in it. Charter: http://lists.grok.org.uk/full-disclosure-charter.html Hosted and sponsored by Secunia - http://secunia.com/ 2014-onward: _______________________________________________ Sent through the Full Disclosure mailing list http://nmap.org/mailman/listinfo/fulldisclosure Web Archives & RSS: http://seclists.org/fulldisclosure/ We'll look for the first line (47 underscores), then test the lines below to make sure it's a match. If so, we'll strip out that footer from our content.
workcopy = raw footers = [m.start() for m in re.finditer('_{47}', workcopy)] for f in reversed(footers): possible = workcopy[f:f+190] lines = possible.splitlines() if(len(lines) == 4 and lines[1][0:15] == 'Full-Disclosure' and lines[2][0:8] == 'Charter:' and lines[3][0:20] == 'Hosted and sponsored'): workcopy = workcopy[:f] + workcopy[f+213:] continue if(len(lines) == 4 and lines[1][0:16] == 'Sent through the' and lines[2][0:17] == 'https://nmap.org/' and lines[3][0:14] == 'Web Archives &'): workcopy = workcopy[:f] + workcopy[f+211:] continue possible = workcopy[f:f+146] lines = possible.splitlines() if(len(lines) == 3 and lines[1][0:15] == 'Full-Disclosure' and lines[2][0:8] == 'Charter:'): workcopy = workcopy[:f] + workcopy[f+146:] continue print(workcopy)
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
PGP messages As can be expected, many messages offer a PGP signature validation. This isn't useful to our processing, so we'll take it out. First, we define get_raw_message with code we've used previously. We then create strip_pgp, looking for the PGP signature. We can just use simple text searches again, with an exception of using RE for the Hash, which can change. http://seclists.org/fulldisclosure/2017/Oct/11 is a message that includes a PGP signature, so we'll use that to test.
def get_raw_message(url): r = requests.get(url) content = r.text start = content.index('<!--X-Body-of-Message-->') + 24 end = content.index('<!--X-Body-of-Message-End-->') body = content[start:end] soup = BeautifulSoup(body, 'html5lib') bodyhtml = soup.find('body') return bodyhtml.text #rawmsg = get_raw_message('http://seclists.org/fulldisclosure/2017/Oct/11') rawmsg = get_raw_message('http://seclists.org/fulldisclosure/2005/Jan/719') def strip_pgp(raw): try: pgp_sig_start = raw.index('-----BEGIN PGP SIGNATURE-----') pgp_sig_end = raw.index('-----END PGP SIGNATURE-----') + 27 cleaned = raw[:pgp_sig_start] + raw[pgp_sig_end:] # if we find a public key block, then strip that out try: pgp_pk_start = raw.index('-----BEGIN PGP PUBLIC KEY BLOCK-----') pgp_pk_end = raw.index('-----END PGP PUBLIC KEY BLOCK-----') + 35 cleaned = cleaned[:pgp_pk_start] + cleaned[pgp_pk_end:] except ValueError as ve: pass # finally, try to remove the signed message header pgp_msg = raw.index('-----BEGIN PGP SIGNED MESSAGE-----') pgp_hash = re.search('Hash:(.)+\n', raw) if pgp_hash is not None: first_hash = pgp_hash.span(0) if first_hash[0] == pgp_msg + 35: #if we found a hash designation immediately after the header, strip that too cleaned = cleaned[:pgp_msg] + cleaned[first_hash[1]:] else: #just strip the header cleaned = cleaned[:pgp_msg] + cleaned[pgp_msg + 34:] else: cleaned = cleaned[:pgp_msg] + cleaned[pgp_msg + 34:] return cleaned except ValueError as ve: return raw unpgp = strip_pgp(rawmsg) Pretty(unpgp) #Pretty(strip_pgp(raw))
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
Talon processing Next, we'll attempt to use talon to strip out the signature from the message. Talon provides two different ways to find the signature, "brute force" and "machine learning". We'll try the brute force method first.
import talon from talon.signature.bruteforce import extract_signature reply, signature = extract_signature(raw) if(not signature is None): Pretty(signature) Pretty(reply)
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
At least for 2017_Jan_0, it is pretty effective. 2017_Jan_45 was not successful at all. Now, we'll try the machine learning style, to compare.
talon.init() from talon import signature reply_ml, sig_ml = signature.extract(raw, sender="[email protected]") print(sig_ml) #reply_ml
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
This doesn't seem to output anything. I'm unclear whether or not this library is already trained; documentation states that it was trained on the authors' personal email and an ENRON set. There is an open issue on github https://github.com/mailgun/talon/issues/143 from July asking about the same thing. We will stick with the "brute force" method for now, and continue to look for more libraries. Extract HTML tags We'll use a fairly simple regex to extract any tags from the reply. <([^\s>]+)(\s|/>)+ * [^\s>]+ one or more non-whitespace characters, followed by: * \s|/> either a whitespace character, or /> for self-closing tags. We then use a dictionary to count the instances of each unique tag.
rx = re.compile('<([^\s>]+)(\s|/>)+') tags = {} for tag in rx.findall(str(bodyhtml)): tagtype = tag[0] if not tagtype.startswith('/'): if tagtype in tags: tags[tagtype] = tags[tagtype] + 1 else: tags[tagtype] = 1 print(tags)
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
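The manual dictionary above works, but the same tally can be expressed more compactly with collections.Counter, a drop-in replacement for this counting pattern:

```python
from collections import Counter

tags = Counter(tag[0] for tag in rx.findall(str(bodyhtml))
               if not tag[0].startswith('/'))
print(tags)
```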
Extract link domains We'll record what domains are linked to in each message. We use BeautifulSoup to pull out all <a> tags, then urlparse to determine the domain within.
from urllib.parse import urlparse sites = {} atags = bodyhtml.find_all('a') hrefs = [link.get('href') for link in atags] for link in hrefs: parsedurl = urlparse(link) site = parsedurl.netloc if site in sites: sites[site] = sites[site] + 1 else: sites[site] = 1 sites
Parsers/SecLists/Reply-Parse.ipynb
sailuh/perceive
gpl-2.0
Add SECOORA models and observations.
from utilities import titles, fix_url secoora_models = ['SABGOM', 'USEAST', 'USF_ROMS', 'USF_SWAN', 'USF_FVCOM'] for secoora_model in secoora_models: if titles[secoora_model] not in dap_urls: log.warning('{} not in the NGDC csw'.format(secoora_model)) dap_urls.append(titles[secoora_model]) # NOTE: USEAST is not archived at the moment! dap_urls = [fix_url(start, url) if 'SABGOM' in url else url for url in dap_urls]
notebooks/timeSeries/ssv/00-velocity_secoora.ipynb
ocefpaf/secoora
mit
FIXME: deal with ($u$, $v$) and speed, direction.
from iris.exceptions import CoordinateNotFoundError, ConstraintMismatchError from utilities import TimeoutException, secoora_buoys, get_cubes urls = list(secoora_buoys()) buoys = dict() for url in urls: try: cubes = get_cubes(url, name_list=name_list, bbox=bbox, time=(start, stop)) buoy = url.split('/')[-1].split('.nc')[0] buoys.update({buoy: cubes[0]}) except (RuntimeError, ValueError, TimeoutException, ConstraintMismatchError, CoordinateNotFoundError) as e: log.warning('Cannot get cube for: {}\n{}'.format(url, e)) name_list buoys units=iris.unit.Unit('m s-1')
notebooks/timeSeries/ssv/00-velocity_secoora.ipynb
ocefpaf/secoora
mit
Make sure your HDFS is still on and the input files (the three books) are still in the input folder. Create the input RDD from the files on the HDFS (hdfs://localhost:54310/user/ubuntu/input).
lines = sc.textFile('hdfs://localhost:54310/user/ubuntu/input') lines.count()
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Simple Word Count Perform the counting by flatMap, map, and reduceByKey.
from operator import add counts = lines.flatMap(lambda x: x.split()).map(lambda x: (x, 1)).reduceByKey(add)
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Take the top 10 frequently used words
counts.takeOrdered(10, lambda x: -x[1])
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Pattern Matching WordCount Read the pattern file into a set. (file: /home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt)
pattern = set() f = open('/home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt') for line in f: words = line.split() for word in words: pattern.add(word)
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
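Since str.split already splits on both spaces and newlines, the same set can be built in two lines, with a context manager closing the file handle for us:

```python
with open('/home/ubuntu/shortcourse/notes/scripts/wordcount2/wc2-pattern.txt') as f:
    pattern = set(f.read().split())
```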
Perform the counting by flatMap, filter, map, and reduceByKey.
result = lines.flatMap(lambda x: x.split()).filter(lambda x: x in pattern).map(lambda x: (x, 1)).reduceByKey(add)
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Collect and show the results.
result.collect() # stop the spark context sc.stop()
instructor-notes/3-pyspark-wordcount.ipynb
dsiufl/2015-Fall-Hadoop
mit
Make a project We have a few LAS files in a folder; we can load them all at once with standard POSIX file globbing syntax:
p = welly.read_las("../../tests/assets/example_*.las")
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
Now we have a project, containing two files:
p
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
You can pass in a list of files or URLs:
p = welly.read_las(['../../tests/assets/P-129_out.LAS', 'https://geocomp.s3.amazonaws.com/data/P-130.LAS', 'https://geocomp.s3.amazonaws.com/data/R-39.las', ])
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0
This project has three wells:
p
docs/_userguide/Projects.ipynb
agile-geoscience/welly
apache-2.0