markdown | code | path | repo_name | license
---|---|---|---|---|
Rendering Points and Lines
Mayavi has several ways to render 3D line and point data. By default it renders them as surfaces, which uses more resources. Keyword arguments can be changed to render with 2-D lines and points instead, which makes plotting large amounts of data more efficient.
LinePlot | # plot the data as a line
# change the tube radius to see the difference
mlab.figure('Line')
mlab.clf()
mlab.plot3d(y[:,0], y[:,1], y[:,2], tube_radius=.1)
mlab.colorbar()
# plot the data as a line, with color representing the time evolution
mlab.figure('Line')
mlab.clf()
mlab.plot3d(y[:,0], y[:,1], y[:,2], t, tube_radius=None, )
mlab.colorbar() | code_examples/python_mayavi/mayavi_intermediate.ipynb | thehackerwithin/berkeley | bsd-3-clause |
Point Plot | # plot the data as points, with color representing the time evolution
mlab.figure()
# By default, mayavi will plot points as spheres, so each point will
# be represented by a surface.
# Using mode='2dvertex' is needed for plotting large numbers of points.
mlab.figure('Points')
mlab.clf()
mlab.points3d(y[:,0], y[:,1], y[:,2], t, mode='2dvertex')
mlab.colorbar( title='time')
mlab.axes() | code_examples/python_mayavi/mayavi_intermediate.ipynb | thehackerwithin/berkeley | bsd-3-clause |
Line + Point Plot | # plot the data as a line, with color representing the time evolution
mlab.figure('Line and Points')
mlab.clf()
# plot the data as a line, with color representing the time evolution
mlab.plot3d(y[:,0], y[:,1], y[:,2], t, tube_radius=None, line_width=1 )
mlab.colorbar()
# By default, mayavi will plot points as spheres, so each point will
# be represented by a surface.
# Using mode='2dvertex' is needed for plotting large numbers of points.
mlab.points3d(y[:,0], y[:,1], y[:,2], t, scale_factor=.3, scale_mode='none')
#mode='2dvertex')
mlab.colorbar( title='time') | code_examples/python_mayavi/mayavi_intermediate.ipynb | thehackerwithin/berkeley | bsd-3-clause |
Contour Plot
Let's see how long the particle spends in each location | h3d = np.histogramdd(y, bins=50)
# generate the midpoint coordinates
xg,yg,zg = h3d[1]
xm = xg[1:] - .5*(xg[1]-xg[0])
ym = yg[1:] - .5*(yg[1]-yg[0])
zm = zg[1:] - .5*(zg[1]-zg[0])
xg, yg, zg = np.meshgrid(xm, ym, zm)
mlab.figure('contour')
mlab.clf()
mlab.contour3d( h3d[0], opacity=.5, contours=25 ) | code_examples/python_mayavi/mayavi_intermediate.ipynb | thehackerwithin/berkeley | bsd-3-clause |
Animation
Animation can be accomplished with the mlab.animate decorator. You must define a generator function that yields to the animate decorator; each yield marks when Mayavi will re-render the scene. | # plot the data as a line
mlab.figure('Animate')
mlab.clf()
# mlab.plot3d(y[:,0], y[:,1], y[:,2], tube_radius=None)
# mlab.colorbar()
a = mlab.points3d(y0[0], y0[1], y0[2], mode='2dvertex')
# number of points to plot
# n_plot = n_time
n_plot = 1000
@mlab.animate(delay=10, ui=True )
def anim():
for i in range(n_time):
# a.mlab_source.set(x=y[i,0],y=y[i,1],z=y[i,2], color=(1,0,0))
mlab.points3d(y[i,0],y[i,1],y[i,2], mode='2dvertex', reset_zoom=False)
yield
anim() | code_examples/python_mayavi/mayavi_intermediate.ipynb | thehackerwithin/berkeley | bsd-3-clause |
First we need to define materials that will be used in the problem. We'll create three materials for the fuel, water, and cladding of the fuel pins. | # 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
With our three materials, we can now create a Materials object that can be exported to an actual XML file. | # Instantiate a Materials object
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem. | # Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces. | # Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Likewise, we can construct a control rod guide tube with the same surfaces. | # Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch. | # Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2 | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Next, we create a NumPy array of fuel pin and guide tube universes for the lattice. | # Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:,:] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the assembly and then assign it to the root universe. | # Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
We now must create a geometry that is assigned a root universe and export it to XML. | # Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches, each with 10,000 particles. | # OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Let us also create a Plots file that we can use to verify that our fuel assembly geometry was created successfully. | # Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.pixels = [250, 250]
plot.width = [-10.71*2, -10.71*2]
plot.color_by = 'material'
# Instantiate a Plots object, add Plot, and export to "plots.xml"
plot_file = openmc.Plots([plot])
plot_file.export_to_xml() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility. | # Run openmc in plotting mode
openmc.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class. | # Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6]) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Next, we will instantiate an openmc.mgxs.Library for the energy groups with the fuel assembly geometry. | # Initialize a 2-group MGXS Library for OpenMOC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Now, we must specify to the Library which types of cross sections to compute. In particular, the following are the multi-group cross section MGXS subclasses that are mapped to string codes accepted by the Library class:
TotalXS ("total")
TransportXS ("transport" or "nu-transport" with nu set to True)
AbsorptionXS ("absorption")
CaptureXS ("capture")
FissionXS ("fission" or "nu-fission" with nu set to True)
KappaFissionXS ("kappa-fission")
ScatterXS ("scatter" or "nu-scatter" with nu set to True)
ScatterMatrixXS ("scatter matrix" or "nu-scatter matrix" with nu set to True)
Chi ("chi")
ChiPrompt ("chi prompt")
InverseVelocity ("inverse-velocity")
PromptNuFissionXS ("prompt-nu-fission")
DelayedNuFissionXS ("delayed-nu-fission")
ChiDelayed ("chi-delayed")
Beta ("beta")
In this case, let's create the multi-group cross sections needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we will define "nu-transport", "nu-fission", "fission", "nu-scatter matrix" and "chi" cross sections for our Library.
Note: A variety of different approximate transport-corrected total multi-group cross sections (and corresponding scattering matrices) can be found in the literature. At the present time, the openmc.mgxs module only supports the "P0" transport correction. This correction can be turned on and off through the boolean Library.correction property which may take values of "P0" (default) or None. | # Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['nu-transport', 'nu-fission', 'fission', 'nu-scatter matrix', 'chi'] | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. We will use a "cell" domain type here to compute cross sections in each of the cells in the fuel assembly geometry.
Note: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell or universe) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property. In our case, we wish to compute multi-group cross sections in each and every cell since they will be needed in our downstream OpenMOC calculation on the identical combinatorial geometry mesh. | # Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = 'cell'
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_material_cells().values() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
We can easily instruct the Library to compute multi-group cross sections on a nuclide-by-nuclide basis with the boolean Library.by_nuclide property. By default, by_nuclide is set to False, but we will set it to True here. | # Compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = True | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Lastly, we use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain and nuclide. | # Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as $O(N)$ for $N$ tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below. | # Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
In addition, we instantiate a fission rate mesh tally to compare with OpenMOC. | # Instantiate a tally Mesh
mesh = openmc.Mesh(mesh_id=1)
mesh.type = 'regular'
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission', 'nu-fission']
# Add tally to collection
tallies_file.append(tally)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
# Run OpenMC
openmc.run() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object. | # Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood. | # Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Voila! Our multi-group cross sections are now ready to rock 'n roll!
Extracting and Storing MGXS Data
The Library supports a rich API to automate a variety of tasks, including multi-group cross section data retrieval and storage. We will highlight a few of these features here. First, the Library.get_mgxs(...) method allows one to extract an MGXS object from the Library for a particular domain and cross section type. The following cell illustrates how one may extract the NuFissionXS object for the fuel cell.
Note: The Library.get_mgxs(...) method will accept either the domain or the integer domain ID of interest. | # Retrieve the NuFissionXS object for the fuel cell from the library
fuel_mgxs = mgxs_lib.get_mgxs(fuel_cell, 'nu-fission') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
The NuFissionXS object supports all of the methods described previously in the openmc.mgxs tutorials, such as Pandas DataFrames:
Note that since so few histories were simulated, we should expect a few divide-by-zero errors, as some tallies have not yet scored any results. | df = fuel_mgxs.get_pandas_dataframe()
df | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Similarly, we can use the MGXS.print_xs(...) method to view a string representation of the multi-group cross section data. | fuel_mgxs.print_xs() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
One can export the entire Library to HDF5 with the Library.build_hdf5_store(...) method as follows: | # Store the cross section data in an "mgxs/mgxs.h5" HDF5 binary file
mgxs_lib.build_hdf5_store(filename='mgxs.h5', directory='mgxs') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
The HDF5 store will contain the numerical multi-group cross section data indexed by domain, nuclide and cross section type. Some data workflows may be optimized by storing and retrieving binary representations of the MGXS objects in the Library. This feature is supported through the Library.dump_to_file(...) and Library.load_from_file(...) routines which use Python's pickle module. This is illustrated as follows. | # Store a Library and its MGXS objects in a pickled binary file "mgxs/mgxs.pkl"
mgxs_lib.dump_to_file(filename='mgxs', directory='mgxs')
# Instantiate a new MGXS Library from the pickled binary file "mgxs/mgxs.pkl"
mgxs_lib = openmc.mgxs.Library.load_from_file(filename='mgxs', directory='mgxs') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
The Library class may be used to leverage the energy condensation features supported by the MGXS class. In particular, one can use the Library.get_condensed_library(...) with a coarse group structure which is a subset of the original "fine" group structure as shown below. | # Create a 1-group structure
coarse_groups = openmc.mgxs.EnergyGroups(group_edges=[0., 20.0e6])
# Create a new MGXS Library on the coarse 1-group structure
coarse_mgxs_lib = mgxs_lib.get_condensed_library(coarse_groups)
# Retrieve the NuFissionXS object for the fuel cell from the 1-group library
coarse_fuel_mgxs = coarse_mgxs_lib.get_mgxs(fuel_cell, 'nu-fission')
# Show the Pandas DataFrame for the 1-group MGXS
coarse_fuel_mgxs.get_pandas_dataframe() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Verification with OpenMOC
Of course it is always a good idea to verify that one's cross sections are accurate. We can easily do so here with the deterministic transport code OpenMOC. We first construct an equivalent OpenMOC geometry. | # Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(mgxs_lib.geometry) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Now, we can inject the multi-group cross sections into the equivalent fuel assembly OpenMOC geometry. The openmoc.materialize module supports the loading of Library objects from OpenMC as illustrated below. | # Load the library into the OpenMOC geometry
materials = load_openmc_mgxs_lib(mgxs_lib, openmoc_geometry) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
We are now ready to run OpenMOC to verify our cross-sections from OpenMC. | # Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=32, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue() | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results. | # Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.nominal_value
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias)) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
There is a non-trivial bias between the eigenvalues computed by OpenMC and OpenMOC. One can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Flux and Pin Power Visualizations
We will conclude this tutorial by illustrating how to visualize the fission rates computed by OpenMOC and OpenMC. First, we extract volume-integrated fission rates from OpenMC's mesh fission rate tally for each pin cell in the fuel assembly. | # Get the OpenMC fission rate mesh tally data
mesh_tally = sp.get_tally(name='mesh tally')
openmc_fission_rates = mesh_tally.get_values(scores=['nu-fission'])
# Reshape array to 2D for plotting
openmc_fission_rates.shape = (17,17)
# Normalize to the average pin power
openmc_fission_rates /= np.mean(openmc_fission_rates[openmc_fission_rates > 0.]) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Next, we extract OpenMOC's volume-averaged fission rates into a 2D 17x17 NumPy array. | # Create OpenMOC Mesh on which to tally fission rates
openmoc_mesh = openmoc.process.Mesh()
openmoc_mesh.dimension = np.array(mesh.dimension)
openmoc_mesh.lower_left = np.array(mesh.lower_left)
openmoc_mesh.upper_right = np.array(mesh.upper_right)
openmoc_mesh.width = openmoc_mesh.upper_right - openmoc_mesh.lower_left
openmoc_mesh.width /= openmoc_mesh.dimension
# Tally OpenMOC fission rates on the Mesh
openmoc_fission_rates = openmoc_mesh.tally_fission_rates(solver)
openmoc_fission_rates = np.squeeze(openmoc_fission_rates)
openmoc_fission_rates = np.fliplr(openmoc_fission_rates)
# Normalize to the average pin fission rate
openmoc_fission_rates /= np.mean(openmoc_fission_rates[openmoc_fission_rates > 0.]) | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Now we can easily use Matplotlib to visualize the fission rates from OpenMC and OpenMOC side-by-side. | # Ignore zero fission rates in guide tubes with Matplotlib color scheme
openmc_fission_rates[openmc_fission_rates == 0] = np.nan
openmoc_fission_rates[openmoc_fission_rates == 0] = np.nan
# Plot OpenMC's fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(openmc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMC Fission Rates')
# Plot OpenMOC's fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(openmoc_fission_rates, interpolation='none', cmap='jet')
plt.title('OpenMOC Fission Rates') | examples/jupyter/mgxs-part-iii.ipynb | johnnyliu27/openmc | mit |
Having loaded our cleaned data, we remind ourselves that we have: <br>
* dansker_set
* topklub_set
* ikke_topklub_set
* overall_set
The first thing we want to check is whether we were thorough enough in our preliminary analysis. We therefore draw a heatmap that tells us how strongly the columns correlate with each other. | corr = overall_set.corr()
fig = plt.figure(figsize=(20, 16))
ax = sb.heatmap(corr, xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
linewidths=0.25, vmax=1.0, square=True,
linecolor='black', annot=False
)
plt.show() | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
What we see here is a correlation matrix. The darker the color, the higher the correlation: red for positive and blue for negative correlation. <br>
We see high correlation in the lower right corner; these are the playing positions. We also see a large blue cross, which is the goalkeeper data; it has a strongly negative correlation with the rest of the dataset. (Double-click the plot if the text is hard to read)<br>
We can also see that the ID column does not correlate at all, so one may choose to drop it.
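As a small illustration of that last observation, here is a minimal sketch of dropping the identifier column before modelling; the column name 'ID' is an assumption about how the identifier is labelled in this dataset.
```python
# Drop the identifier column if present; 'ID' is an assumed column name
if 'ID' in overall_set.columns:
    overall_set = overall_set.drop(columns=['ID'])
```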
We now add our "known" labels to the data. (If you play for one of our top clubs you get a 1, otherwise a 0) <br>
We also split our training set into an X matrix with all our numerical features and a y vector with all our labels. | overall_set['label'] = overall_set['Club'].isin(topklub_set.Club).astype(int)
y = overall_set['label']
X = overall_set.iloc[:,0:-1].select_dtypes(include=['float64', 'int64']) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
We can take a high-level look at the numbers across the two classes. | overall_set.groupby('label').mean() | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Observations
Age says very little about whether you play for a top club or not
Top-club players are on average worth a factor of 10 more than non-top-club players
Top-club players are on average roughly 10+ points better at everything than non-top-club players
We are now ready to start our first machine learning algorithm.
We know in advance that our training set contains {{y.where(y==1).count()}} players at top clubs and {{y.where(y==0).count()}} who are not. <br>
There is a 50/50 chance of being right by guessing at random, so we hope the algorithm can beat that 50% hit rate.
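To make that baseline explicit, a quick sanity check of the class balance can be done like this (a minimal sketch; it only assumes the y label vector defined above).
```python
# Share of each class in the label vector; a roughly balanced split means
# the naive "always guess one class" baseline sits around 50%
print(y.value_counts(normalize=True))
```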
Logistic regression | # import the required packages from the scikit-learn library (great in general for data science)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
We now fit a logistic regression classifier to our data, producing a model that can recognize whether a player is at a top club or not, and evaluate the result: | model = LogisticRegression()
model = model.fit(X,y)
model.score(X,y) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
So our model is right
{{'{:.0f}'.format(100*model.score(X, y))}}% of the time on the training set. <br>
Pretty good!! It has found patterns that map the data to the labels, rather than just guessing.
But we cannot know whether it has overfitted, i.e. adapted so closely to its known data that new data would be mislabelled. <br>
What we can do is split our training set into a training and a test set. That way we can first fit, and then evaluate on "new" known data whether the model still performs as expected. | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
print('Training set size: {} - Test set size: {}'.format(len(X_train), len(X_test))) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
And we are now ready to try again!
Logistic regression 2.0
Again we fit a logistic regression to our training data and build a model, but this time without using the test set. | model2 = LogisticRegression()
model2 = model2.fit(X_train, y_train)
model2.score(X_train, y_train) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
The model now matches
{{'{:.0f}'.format(100*model2.score(X, y))}}% of the time on the training set. <br>
But has it overfitted?
Evaluating the model
We therefore generate our y predictions and also the probabilities for the test set, since these are used to evaluate the model. | y_pred = model2.predict(X_test)
y_probs = model2.predict_proba(X_test)
# Evaluation metrics
from sklearn import metrics
print('The accuracy of our logistic regression model prediction on the test set is {:.0f}'.format(100*metrics.accuracy_score(y_test, y_pred))+'%', '\n')
print('The area under our ROC curve (AUC) is {:.0f}'.format(100*metrics.roc_auc_score(y_test, y_probs[:, 1]))+'%') | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
That looks quite reasonable.<br>
To say more about our new model, we can also build a "confusion_matrix"
<img src='http://revolution-computing.typepad.com/.a/6a010534b1db25970b01bb08c97955970d-pi',
align="center"
width="40%"
alt="confusion matrix">
T and F stand for True and False respectively<br>
P and N stand for Positive and Negative respectively | confusion_matrix = metrics.confusion_matrix(y_test, y_pred)
print(confusion_matrix) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
The result tells us that we have {{confusion_matrix[0,0]}}+{{confusion_matrix[1,1]}} = {{confusion_matrix[0,0]+confusion_matrix[1,1]}} correct predictions and {{confusion_matrix[0,1]}}+{{confusion_matrix[1,0]}} = {{confusion_matrix[0,1]+confusion_matrix[1,0]}} incorrect ones
We can also ask the classifier for a report: | print(metrics.classification_report(y_test, y_pred)) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Logistic regression with cross-validation
We are actually quite happy with our model, but it is often a good idea to test on several small test sets and compare them. <br>
Here we run a 10-fold cross-validation and thus get 10 scores out: | # 10-fold cross-validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=10)
print(scores,'\n')
print(scores.mean()) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Here the model thus performs on average at
{{'{:.0f}'.format(100*scores.mean())}}%.
That sounds very promising, but we will stick with our model2 and can now try the model on the real dataset
The Danish player set
We will now try our model on the Danish players<br>
Task:
We need to compute predictions and probabilities for the Danish players, just as we did earlier for the test set. (Right below Evaluating the model)<br>
Remember that your dataframe may only contain numerical values when you use the model.<br>
E.g. "df.select_dtypes(include=['float64', 'int64'])" | dansker_pred = None ### Remove NONE and FILL ME IN ###
dansker_probs = None ### Remove NONE and FILL ME IN ### | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
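One possible way to fill in the exercise is sketched below; it assumes the numeric-only dataframe is built from dansker_set the same way X was built from overall_set, and uses the model2 fitted above.
```python
# Keep only the numeric columns, mirroring how X was constructed earlier
dansker_X = dansker_set.select_dtypes(include=['float64', 'int64'])

# Predicted class (top club or not) and class probabilities from model2
dansker_pred = model2.predict(dansker_X)
dansker_probs = model2.predict_proba(dansker_X)
```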
The model has found {{np.bincount(dansker_pred)[0]}} zeros and {{np.bincount(dansker_pred)[1]}} ones
If you set top_klub_ratio to 75 in Exercise 1 of Data Cleaning, there should be around 27-28 ones. <br>
top_klub_ratio was set to: {{top_klub_ratio}}
We add these columns to our dataframe. | dansker_set_df = dansker_set.copy()
dansker_set_df[['prob1','prob2']] = pd.DataFrame(dansker_probs, index=dansker_set.index)
dansker_set_df['Probabilities [0,1]'] = dansker_set_df[['prob1','prob2']].values.tolist()
dansker_set_df['Prediction'] = pd.Series(dansker_pred, index=dansker_set.index)
del dansker_set_df['prob1'], dansker_set_df['prob2']
# dansker_set_df.head() | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
And we sort the list so the best Danish players are at the top, and add an index so we get a better overview | dansker_set_df.loc[:,'pred=1'] = dansker_set_df['Probabilities [0,1]'].map(lambda x: x[1]).sort_values(ascending=False)
dansker_sorted = dansker_set_df.sort_values('pred=1', ascending=False)
dansker_sorted = dansker_sorted[['Name', 'Club', 'Overall', 'Potential', 'Probabilities [0,1]', 'Prediction']]
dansker_sorted.loc[:,'in'] = np.arange(1, len(dansker_set_df)+1)
dansker_sorted.set_index('in') | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
After a fine hat-trick against Ireland, there can hardly be any doubt that King Christian takes his place on the throne
<img src='kongen.png',
align="center"
width="40%"
alt="kongen">
But which Danish players actually play for top clubs, and how are they ranked by our model? | dansker_sorted[dansker_sorted['Club'].isin(top_clubs)].set_index('in') | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
One might wonder what Jacob Larsen is doing at the top club Borussia Dortmund, but a quick googling shows that he was simply headhunted by the club at the age of 16.
And there will surely be someone asking: what about Bendtner?
So of course he should also get a spot in our analysis: | dansker_sorted.loc[dansker_sorted.Name == 'N. Bendtner'].set_index('in') | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Task:
We can also look at him in the bigger picture. Feel free to play around with different players or other features.<br>
Is there anything in particular that would be fun to look at? | df.loc[df.Name == 'N. Bendtner'] | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Extra play/analysis tasks
The Dane Rezan Corlu, who otherwise ranks quite low even on potential, has nevertheless secured a spot at A.S. Roma at the age of 20.
But what about the top-club players? How low can you go in potential and still play for a top club? | top_df = df[df.Club.isin(top_clubs)]
top_df[top_df.Overall < 70].sort_values('Overall', ascending=True) | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
So we can see that the clubs are betting on youth, where future potential presumably justifies a spot at a big club.<br>
But what about the non-top-club players and their performance? | bund_df = df[~df.Club.isin(top_clubs)]
bund_df[bund_df.Overall > 70] | Teknisk Tirsdag Tutorial (Supervised Learning).ipynb | mssalvador/Fifa2018 | apache-2.0 |
Perhaps the 22 clubs we selected are not quite enough to describe top clubs | top_clubs
The significance of convergence theorems (B.T. Polyak, Introduction to Optimization, Ch. 1, $\S$ 6)
What convergence theorems give
the class of problems for which the method can be expected to apply (important not to overstate the assumptions!)
convexity
smoothness
the qualitative behaviour of the method
whether the initial guess matters
in which functional the convergence holds
an estimate of the convergence rate
a theoretical assessment of the method's behaviour without running experiments
identification of the factors that affect convergence (conditioning, dimensionality, etc.)
sometimes the number of iterations needed to reach a given accuracy can be chosen in advance
What convergence theorems do NOT give
convergence of a method says nothing about whether applying it is worthwhile
convergence estimates depend on unknown constants, so they are non-constructive in nature
rounding errors and the accuracy of solving auxiliary subproblems are not accounted for
Moral: one should exercise reasonable caution
and common sense!
Classification of problems
Unconstrained optimization
the objective function is Lipschitz continuous
the gradient of the objective function is Lipschitz continuous
Constrained optimization
polytope
set with simple structure
general set
Classification of methods
How much history must be stored for the update?
One-step methods
$$
x_{k+1} = \Phi(x_k)
$$
Multi-step methods
$$
x_{k+1} = \Phi(x_k, x_{k-1}, ...)
$$
What order of derivatives has to be computed?
Zeroth-order methods: the oracle returns only the function value $f(x)$
First-order methods: the oracle returns the function value $f(x)$ and its gradient $f'(x)$
Second-order methods: the oracle returns the function value $f(x)$, its gradient $f'(x)$ and Hessian $f''(x)$.
Q: do methods of even higher order exist?
A: Implementable tensor methods in unconstrained convex optimization by Y. Nesterov, 2019
One-dimensional minimization
Definition. A function $f(x)$ is called unimodal on $[a, b]$ if there exists a point $x^* \in [a, b]$ such that
- $f(x_1) > f(x_2)$ for all $a \leq x_1 < x_2 < x^*$,
and
- $f(x_1) < f(x_2)$ for all $x^* < x_1 < x_2 \leq b$.
Question: what does the geometry of unimodal functions look like?
Bisection (dichotomy) method
An idea from first-semester computer science:
split the segment $[a,b]$ into two equal parts
until the minimum of the unimodal function is found.
$N$ is the number of evaluations of the function $f$
$K = \frac{N - 1}{2}$ is the number of iterations
Then
$$
|x_{K+1} - x^*| \leq \frac{b_{K+1} - a_{K+1}}{2} = \left( \frac{1}{2} \right)^{\frac{N-1}{2}} (b - a) \approx 0.5^{K} (b - a)
$$ | def binary_search(f, a, b, epsilon, callback=None):
c = (a + b) / 2.0
while abs(b - a) > epsilon:
# Check left subsegment
y = (a + c) / 2.0
if f(y) <= f(c):
b = c
c = y
else:
# Check right subsegment
z = (b + c) / 2.0
if f(c) <= f(z):
a = y
b = z
else:
a = c
c = z
if callback is not None:
callback(a, b)
return c
def my_callback(a, b, left_bound, right_bound, approximation):
left_bound.append(a)
right_bound.append(b)
approximation.append((a + b) / 2.0)
import numpy as np
left_boud_bs = []
right_bound_bs = []
approximation_bs = []
callback_bs = lambda a, b: my_callback(a, b,
left_boud_bs, right_bound_bs, approximation_bs)
# Target unimodal function on given segment
f = lambda x: (x - 2) * x * (x + 2)**2 # np.power(x+2, 2)
# f = lambda x: -np.sin(x)
x_true = -2
# x_true = np.pi / 2.0
a = -3
b = -1.5
epsilon = 1e-8
x_opt = binary_search(f, a, b, epsilon, callback_bs)
print(np.abs(x_opt - x_true))
plt.figure(figsize=(10,6))
plt.plot(np.linspace(a,b), f(np.linspace(a,b)))
plt.title("Objective function", fontsize=28)
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Golden section method
Idea:
split the segment $[a,b]$ not into two equal parts,
but in the proportion of the "golden ratio".
We estimate the convergence rate analogously to the bisection method:
$$
|x_{K+1} - x^*| \leq b_{K+1} - a_{K+1} = \left( \frac{1}{\tau} \right)^{N-1} (b - a) \approx 0.618^K(b-a),
$$
where $\tau = \frac{\sqrt{5} + 1}{2}$.
The ratio of the geometric progression is larger than for the bisection method
The number of function calls is smaller than for the bisection method | def golden_search(f, a, b, tol=1e-5, callback=None):
tau = (np.sqrt(5) + 1) / 2.0
y = a + (b - a) / tau**2
z = a + (b - a) / tau
while b - a > tol:
if f(y) <= f(z):
b = z
z = y
y = a + (b - a) / tau**2
else:
a = y
y = z
z = a + (b - a) / tau
if callback is not None:
callback(a, b)
return (a + b) / 2.0
left_boud_gs = []
right_bound_gs = []
approximation_gs = []
cb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)
x_gs = golden_search(f, a, b, epsilon, cb_gs)
print(f(x_opt))
print(f(x_gs))
print(np.abs(x_opt - x_true)) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Comparison of the one-dimensional minimization methods | plt.figure(figsize=(10,6))
plt.semilogy(np.arange(1, len(approximation_bs) + 1), np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label="Binary search")
plt.semilogy(np.arange(1, len(approximation_gs) + 1), np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label="Golden search")
plt.xlabel(r"Number of iterations, $k$", fontsize=26)
plt.ylabel("Error rate upper bound", fontsize=26)
plt.legend(loc="best", fontsize=26)
plt.xticks(fontsize = 26)
_ = plt.yticks(fontsize = 26)
%timeit binary_search(f, a, b, epsilon)
%timeit golden_search(f, a, b, epsilon) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
An example of different behaviour of the methods
$$
f(x) = \sin(\sin(\sin(\sqrt{x}))), \; x \in [2, 60]
$$ | f = lambda x: np.sin(np.sin(np.sin(np.sqrt(x))))
x_true = (3 * np.pi / 2)**2
a = 2
b = 60
epsilon = 1e-8
plt.plot(np.linspace(a,b), f(np.linspace(a,b)))
plt.xticks(fontsize = 28)
_ = plt.yticks(fontsize = 28) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Comparison of the convergence rate and running time of the methods
Bisection method | left_boud_bs = []
right_bound_bs = []
approximation_bs = []
callback_bs = lambda a, b: my_callback(a, b,
left_boud_bs, right_bound_bs, approximation_bs)
x_opt = binary_search(f, a, b, epsilon, callback_bs)
print(np.abs(x_opt - x_true)) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Golden section method | left_boud_gs = []
right_bound_gs = []
approximation_gs = []
cb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)
x_gs = golden_search(f, a, b, epsilon, cb_gs)
print(np.abs(x_opt - x_true)) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Convergence | plt.figure(figsize=(8,6))
plt.semilogy(np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label="Binary")
plt.semilogy(np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label="Golden")
plt.legend(fontsize=28)
plt.xticks(fontsize=28)
_ = plt.yticks(fontsize=28)
plt.xlabel(r"Number of iterations, $k$", fontsize=26)
plt.ylabel("Error rate upper bound", fontsize=26) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Running time | %timeit binary_search(f, a, b, epsilon)
%timeit golden_search(f, a, b, epsilon) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Summary
Introduction to numerical optimization methods
The general scheme of a method
Ways of comparing optimization methods
The zoo of problems and methods
One-dimensional minimization
Descent methods. Gradient descent and its accelerated modifications
What are descent methods?
The sequence $x_k$ is generated by the rule
$$
x_{k+1} = x_k + \alpha_k h_k
$$
so that
$$
f(x_{k+1}) < f(x_k)
$$
The direction $h_k$ is called a descent direction.
Remark: there exist methods that do not require monotone decrease of the function from iteration to iteration.
```python
def DescentMethod(f, x0, epsilon, **kwargs):
x = x0
while StopCriterion(x, f, **kwargs) > epsilon:
h = ComputeDescentDirection(x, f, **kwargs)
alpha = SelectStepSize(x, h, f, **kwargs)
x = x + alpha * h
return x
```
Approach 1: the descent direction
Consider the linear approximation of a differentiable function $f$ along some descent direction $h, \|h\|_2 = 1$:
$$
f(x + \alpha h) = f(x) + \alpha \langle f'(x), h \rangle + o(\alpha)
$$
From the decrease condition
$$
f(x) + \alpha \langle f'(x), h \rangle + o(\alpha) < f(x)
$$
and passing to the limit as $\alpha \rightarrow 0$:
$$
\langle f'(x), h \rangle \leq 0
$$
Also, by the Cauchy-Bunyakovsky-Schwarz inequality
$$
\langle f'(x), h \rangle \geq -\| f'(x) \|_2 \| h \|_2 = -\| f'(x) \|_2
$$
Thus, the antigradient direction
$$
h = -\dfrac{f'(x)}{\|f'(x)\|_2}
$$
gives the direction of steepest local decrease of the function$~f$.
As a result, the method takes the form
$$
x_{k+1} = x_k - \alpha f'(x_k)
$$
Approach 2: the Euler scheme for solving an ODE
Consider an ordinary differential equation of the form:
$$
\frac{dx}{dt} = -f'(x(t))
$$
and discretize it on a uniform grid with step $\alpha$:
$$
\frac{x_{k+1} - x_k}{\alpha} = -f'(x_k),
$$
where $x_k \equiv x(t_k)$ and $\alpha = t_{k+1} - t_k$ is the grid step.
From this we obtain the expression for $x_{k+1}$
$$
x_{k+1} = x_k - \alpha f'(x_k),
$$
which coincides exactly with the gradient descent update.
This scheme is called the explicit or forward Euler scheme.
Q: which scheme is called implicit or backward?
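As a hint, a standard-textbook sketch of the backward (implicit) Euler discretization of the same ODE evaluates the gradient at the new point, so each step requires solving an equation in $x_{k+1}$:
$$
\frac{x_{k+1} - x_k}{\alpha} = -f'(x_{k+1}) \quad \Longleftrightarrow \quad x_{k+1} + \alpha f'(x_{k+1}) = x_k
$$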
Approach 3: minimizing a quadratic upper bound
(A. V. Gasnikov, "Universal gradient descent method", https://arxiv.org/abs/1711.00394)
A global upper bound on the function $f$ at the point $x_k$:
$$
f(y) \leq f(x_k) + \langle f'(x_k), y - x_k \rangle + \frac{L}{2} \|y - x_k \|_2^2 = g(y),
$$
where $\lambda_{\max}(f''(x)) \leq L$ for all admissible $x$.
The right-hand side is a quadratic function whose minimizer has a closed-form expression:
\begin{align*}
& g'(y^*) = 0 \\
& f'(x_k) + L (y^* - x_k) = 0 \\
& y^* = x_k - \frac{1}{L}f'(x_k) = x_{k+1}
\end{align*}
This approach suggests $\frac{1}{L}$ as the step size. However, the constant $L$ is often unknown.
Summary: the gradient descent method is cheap and cheerful
```python
def GradientDescentMethod(f, x0, epsilon, **kwargs):
x = x0
while StopCriterion(x, f, **kwargs) > epsilon:
h = ComputeGradient(x, f, **kwargs)
alpha = SelectStepSize(x, h, f, **kwargs)
x = x - alpha * h
return x
```
How to choose the step size $\alpha_k$? (J. Nocedal, S. Wright, Numerical Optimization, $\S$ 3.1.)
List of approaches:
- Constant step
$$
\alpha_k = \overline{\alpha}
$$
An a priori specified sequence, for example
$$
\alpha_k = \dfrac{\overline{\alpha}}{\sqrt{k+1}}
$$
Steepest descent
$$
\alpha_k = \arg\min_{\alpha \geq 0} f(x_k - \alpha f'(x_k))
$$
Sufficient decrease condition, significant decrease condition and curvature condition: for some $\beta_1, \beta_2$ such that $0 < \beta_1 < \beta_2 < 1$, find $x_{k+1}$ such that
Sufficient decrease: $f(x_{k+1}) \leq f(x_k) + \beta_1 \alpha_k \langle f'(x_k), h_k \rangle$ or
$ f(x_k) - f(x_{k+1}) \geq \beta_1 \alpha_k \langle f'(x_k), h_k \rangle
$
Significant decrease: $f(x_{k+1}) \geq f(x_k) + \beta_2 \alpha_k \langle f'(x_k), h_k \rangle$ or
$
f(x_k) - f(x_{k+1}) \leq \beta_2 \alpha_k \langle f'(x_k), h_k \rangle
$
Curvature condition: $\langle f'(x_{k+1}), h_k \rangle \geq \beta_2 \langle f'(x_k), h_k \rangle$
The coefficients are usually chosen as $\beta_1 \in (0, 0.3)$ and $\beta_2 \in (0.9, 1)$
Analysis and motivation of the step-size selection approaches
Constant step: the simplest and least efficient choice
A priori specified sequence: only slightly better than a constant step
Steepest descent: the best choice, but applicable only if the auxiliary problem can be solved analytically or veeeery fast (see the quadratic-case formula right after this list). <br></br>
That is, it is almost never applicable :)
Sufficient decrease condition, significant decrease condition and curvature condition:
the sufficient decrease condition guarantees that the function value at $x_{k+1}$ does not exceed the linear approximation with slope coefficient $\beta_1$
the significant decrease condition guarantees that the function at $x_{k+1}$ decreases by at least as much as the linear approximation with slope coefficient $\beta_2$
the curvature condition guarantees that the slope of the tangent at $x_{k+1}$ is not smaller than the slope of the tangent at $x_k$, <br></br>
multiplied by $\beta_2$
The significant decrease condition and the curvature condition both ensure decrease of the function along the chosen direction $h_k$. Usually one of them is used.
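For the quadratic case mentioned above, the steepest-descent subproblem has a closed-form solution; this is a standard textbook formula for $f(x) = \frac{1}{2}x^{\top}Ax$ with $A \succ 0$, which is what an exact line search for quadratics computes (as used in the experiments further below):
$$
\alpha_k = \arg\min_{\alpha \geq 0} f(x_k - \alpha f'(x_k)) = \frac{\langle f'(x_k), f'(x_k) \rangle}{\langle A f'(x_k), f'(x_k) \rangle}
$$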
Alternative names
Sufficient decrease condition $\equiv$ Armijo rule
Sufficient decrease condition + curvature condition $\equiv$ Wolfe rule
Sufficient decrease condition + significant decrease condition $\equiv$ Goldstein rule
Why is the significant decrease condition needed? | %matplotlib notebook
import matplotlib.pyplot as plt
plt.rc("text", usetex=True)
import ipywidgets as ipywidg
import numpy as np
import liboptpy.unconstr_solvers as methods
import liboptpy.step_size as ss
from tqdm import tqdm
f = lambda x: np.power(x, 2)
gradf = lambda x: 2 * x
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
def update(x0, step):
gd = methods.fo.GradientDescent(f, gradf, ss.ConstantStepSize(step))
_ = gd.solve(np.array([x0]), max_iter=10)
x_hist = gd.get_convergence()
x = np.linspace(-5, 5)
ax.clear()
ax.plot(x, f(x), color="r", label="$f(x) = x^2$")
y_hist = np.array([f(x) for x in x_hist])
x_hist = np.array(x_hist)
plt.quiver(x_hist[:-1], y_hist[:-1], x_hist[1:]-x_hist[:-1], y_hist[1:]-y_hist[:-1],
scale_units='xy', angles='xy', scale=1, width=0.005, color="green", label="Descent path")
ax.legend()
fig.canvas.draw()
step_slider = ipywidg.FloatSlider(value=0.8, min=0, max=1.2, step=0.1, description="Step")
x0_slider = ipywidg.FloatSlider(value=1.5, min=-4, max=4, step=0.1, description="Initial point")
_ = ipywidg.interact(update, x0=x0_slider, step=step_slider)
def plot_alpha(f, grad, x, h, alphas, beta1, beta2):
df = np.zeros_like(alphas)
for i, alpha in enumerate(alphas):
df[i] = f(x + alpha * h)
upper_bound = f(x) + beta1 * alphas * grad(x) * h
lower_bound = f(x) + beta2 * alphas * grad(x) * h
plt.plot(alphas, df, label=r"$f(x + \alpha h)$")
plt.plot(alphas, upper_bound, label="Upper bound")
plt.plot(alphas, lower_bound, label="Lower bound")
plt.xlabel(r"$\alpha$", fontsize=18)
plt.legend(loc="best", fontsize=18)
f = lambda x: x**2
grad = lambda x: 2 * x
beta1 = 0.1
beta2 = 0.9
x0 = 0.5
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 1.01, 10), beta1, beta2) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
$f(x) = x\log x$ | x_range = np.linspace(1e-10, 4)
plt.plot(x_range, x_range * np.log(x_range))
x0 = 1
f = lambda x: x * np.log(x)
grad = lambda x: np.log(x) + 1
beta1 = 0.3
beta2 = 0.7
plot_alpha(f, grad, x0, -grad(x0), np.linspace(1e-3, 0.9, 10), beta1, beta2) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Backtracking
```python
def SelectStepSize(x, f, grad_f, h, rho, alpha0, beta1, beta2):
    # 0 < rho < 1
    # alpha0 - initial guess of step size
    # beta1 and beta2 - constants from conditions
    alpha = alpha0
    # Check violating sufficient decrease and curvature conditions
    while (f(x - alpha * h) >= f(x) + beta1 * alpha * grad_f(x).dot(h)) and \
          (grad_f(x - alpha * h).dot(h) <= beta2 * grad_f(x).dot(h)):
        alpha *= rho
    return alpha
```
Convergence theorems (B.T. Polyak, Introduction to Optimization, Ch. 1, $\S$ 4; Ch. 3, $\S$ 1; Yu.E. Nesterov, Introduction to Convex Optimization, $\S$ 2.2)
From the general to the particular:
Theorem 1.
Assume that
$f(x)$ is differentiable on $\mathbb{R}^n$,
the gradient of $f(x)$ is Lipschitz continuous with constant $L$
$f(x)$ is bounded below
$\alpha = const$ and $0 < \alpha < \frac{2}{L}$
Then for the gradient method
$$
\lim\limits_{k \to \infty} f'(x_k) = 0,
$$
and the function decreases monotonically: $f(x_{k+1}) < f(x_k)$.
Theorem 2. Assume that
- $f(x)$ is differentiable on $\mathbb{R}^n$
- $f(x)$ is convex
- $f'(x)$ is Lipschitz continuous with constant $L$
- $\alpha = \dfrac{1}{L}$
Then
$$
f(x_k) - f^* \leq \dfrac{2L \| x_0 - x^*\|^2_2}{k+4}
$$
Theorem 3.
Assume that
- $f(x)$ is twice differentiable and $\mu\mathbf{I} \preceq f''(x) \preceq L\mathbf{I}$ for all $x$
- $\alpha = const$ and $0 < \alpha < \frac{2}{L}$
Then
$$
\| x_k - x^*\|_2 \leq \|x_0 - x^*\|_2 q^k, \qquad q = \max(|1 - \alpha \mu|, |1 - \alpha L|) < 1
$$
and the minimal value $q^* = \dfrac{L - \mu}{L + \mu}$ is attained at $\alpha^* = \dfrac{2}{L + \mu}$
What does $q^*$ depend on, and how can we exploit this?
From Theorem 3 we have
$$
q^* = \dfrac{L - \mu}{L + \mu} = \dfrac{L/\mu - 1}{L/\mu + 1} = \dfrac{M - 1}{M + 1},
$$
where $M$ is an estimate of the condition number of $f''(x)$.
Question: what is the condition number of a matrix?
For $M \gg 1$, $q^* \to 1 \Rightarrow$ veeeery slow convergence of the gradient method. For example, for $M = 100$: $q^* \approx 0.98$
For $M \simeq 1$, $q^* \to 0 \Rightarrow$ faster convergence of the gradient method. For example, for $M = 4$: $q^* = 0.6$
Question: what is the geometry behind this requirement?
Moral: the estimate of $M$ must be made as close to 1 as possible!
You will be invited to think about how to do this in the homework :)
Computational aspects and experiments
Each step of the method requires storing only the current point and the gradient vector: $O(n)$ memory
Finding $\alpha_k$:
given a priori
found from the analytic solution of the steepest descent subproblem
terminates in a finite number of steps
Each step of the method requires computing a linear combination of vectors: $O(n)$ operations + high-performance implementations
Implementation of gradient descent | def GradientDescent(f, gradf, x0, epsilon, num_iter, line_search,
disp=False, callback=None, **kwargs):
x = x0.copy()
iteration = 0
opt_arg = {"f": f, "grad_f": gradf}
for key in kwargs:
opt_arg[key] = kwargs[key]
while True:
gradient = gradf(x)
alpha = line_search(x, -gradient, **opt_arg)
x = x - alpha * gradient
if callback is not None:
callback(x)
iteration += 1
if disp:
print("Current function val =", f(x))
print("Current gradient norm = ", np.linalg.norm(gradf(x)))
if np.linalg.norm(gradf(x)) < epsilon:
break
if iteration >= num_iter:
break
res = {"x": x, "num_iter": iteration, "tol": np.linalg.norm(gradf(x))}
return res | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Choosing the step size
Implementations of the various step-size selection strategies are available here
Dependence on the conditioning of the matrix $f''(x)$
Consider the problem
$$
\min f(x),
$$
where
$$ f(x) = x^{\top}Ax, \; A = \begin{bmatrix} 1 & 0 \\ 0 & \gamma \end{bmatrix} $$
$$
f'(x) = 2Ax
$$ | def my_f(x, A):
return 0.5 * x.dot(A.dot(x))
def my_gradf(x, A):
return A.dot(x)
plt.rc("text", usetex=True)
gammas = [0.1, 0.5, 1, 2, 3, 4, 5, 10, 20, 50, 100, 1000, 5000, 10000]
# gammas = [1]
num_iter_converg = []
for g in gammas:
A = np.array([[1, 0],
[0, g]], dtype=np.float64)
f = lambda x: my_f(x, A)
gradf = lambda x: my_gradf(x, A)
# x0 = np.random.rand(A.shape[0])
# x0 = np.sort(x0)
# x0 = x0[::-1]
x0 = np.array([g, 1], dtype=np.float64)
# print x0[1] / x0[0]
gd = methods.fo.GradientDescent(f, gradf, ss.ExactLineSearch4Quad(A))
x = gd.solve(x0, tol=1e-7, max_iter=100)
num_iter_converg.append(len(gd.get_convergence()))
plt.figure(figsize=(8, 6))
plt.loglog(gammas, num_iter_converg)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.xlabel(r"$\gamma$", fontsize=20)
plt.ylabel(r"Number of iterations with $\varepsilon = 10^{-7}$", fontsize=20) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
With an unlucky initial guess, convergence for a poorly conditioned problem is very slow
With a random initial guess, convergence can be much faster than the theoretical estimates
Experiment on a multidimensional problem
Let $A \in \mathbb{R}^{m \times n}$. Consider the system of linear inequalities $Ax \leq 1$ subject to $|x_i| \leq 1$ for all $i$.
Definition. The analytic center of the system of inequalities $Ax \leq 1$ subject to $|x_i| \leq 1$ is the solution of the problem
$$
f(x) = - \sum_{i=1}^m \log(1 - a_i^{\top}x) - \sum_{i = 1}^n \log (1 - x^2_i) \to \min_x
$$
$$
f'(x) - ?
$$
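As a worked step, the gradient that the code below implements can be written out explicitly (a straightforward differentiation of the two log-barrier sums; $e_i$ denotes the $i$-th standard basis vector):
$$
f'(x) = \sum_{i=1}^m \frac{a_i}{1 - a_i^{\top}x} + \sum_{i=1}^n \frac{2 x_i}{1 - x_i^2} e_i
$$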
Exact solution using CVXPy | import numpy as np
n = 1000
m = 2000
A = np.random.rand(n, m)
x = cvx.Variable(n)
obj = cvx.Minimize(cvx.sum(-cvx.log(1 - A.T * x)) -
cvx.sum(cvx.log(1 - cvx.square(x))))
prob = cvx.Problem(obj)
prob.solve(solver="SCS", verbose=True)
x = x.value
print("Optimal value =", prob.value) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Solution using gradient descent | import cvxpy as cvx
print(cvx.installed_solvers())
# !pip install jax
# !pip install jaxlib
import jax.numpy as jnp
import jax
# from jax.config import config
# config.update("jax_enable_x64", True)
A = jnp.array(A)
print(A.dtype)
x0 = jnp.zeros(n)
f = lambda x: -jnp.sum(jnp.log(1 - A.T@x)) - jnp.sum(jnp.log(1 - x*x))
grad_f = lambda x: jnp.sum(A @ (jnp.diagflat(1 / (1 - A.T @ x))), \
axis=1) + 2 * x / (1 - jnp.power(x, 2))
grad_f_jax = jax.grad(f)
print(jnp.linalg.norm(grad_f(x0) - grad_f_jax(x0))) | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
More details about jax, its capabilities and quirks can be found, for example, here | gd = methods.fo.GradientDescent(f, grad_f_jax, ss.Backtracking("Armijo", rho=0.5, beta=0.1, init_alpha=1.))
x = gd.solve(x0, tol=1e-5, max_iter=100, disp=True)
x_conv = gd.get_convergence()
grad_conv = [jnp.linalg.norm(grad_f_jax(x)) for x in x_conv]
plt.figure(figsize=(8,6))
plt.semilogy(grad_conv, label=r"$\| f'(x_k) \|_2$")
plt.semilogy([np.linalg.norm(x - np.array(x_k)) for x_k in x_conv], label=r"$\|x_k - x^*\|_2$")
plt.semilogy([np.linalg.norm(prob.value - f(np.array(x_k))) for x_k in x_conv], label=r"$\|f(x_k) - f^*\|_2$")
plt.semilogy([np.linalg.norm(np.array(x_conv[i]) - np.array(x_conv[i+1])) for i in range(len(x_conv) - 1)], label=r"$\|x_k - x_{k+1}\|_2$")
plt.semilogy([np.linalg.norm(f(np.array(x_conv[i])) - f(np.array(x_conv[i+1]))) for i in range(len(x_conv) - 1)], label=r"$\|f(x_k) - f(x_{k+1})\|_2$")
plt.xlabel(r"Number of iteration, $k$", fontsize=20)
plt.ylabel(r"Convergence rate", fontsize=20)
plt.xticks(fontsize = 20)
plt.yticks(fontsize = 20)
plt.legend(loc="best", fontsize=20)
plt.tight_layout() | Spring2021/intro_gd.ipynb | amkatrutsa/MIPT-Opt | mit |
Request Data
To request data we use GraphQL (a query language for APIs; more info at http://graphql.org/).
We provide a function to make a data request; all you need is a query and variables. | def makeGraphqlRequest(query, variables):
return GraphQLClient.request(query, variables) | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Now that we have a function, we can run a query like this:
*Note: there's no need to manually set the date for the query; by default, the code will read the date from the current path.* | suspicious_query = """query($date:SpotDateType) {
proxy {
suspicious(date:$date)
{ clientIp
clientToServerBytes
datetime
duration
host
networkContext
referer
requestMethod
responseCode
responseCodeLabel
responseContentType
score
serverIp
serverToClientBytes
uri
uriPath
uriPort
uriQuery
uriRep
userAgent
username
webCategory
}
}
}"""
##If you want to use a different date for your query, switch the
##commented/uncommented following lines
variables={
'date': datetime.datetime.strptime(date, '%Y%m%d').strftime('%Y-%m-%d')
# 'date': "2016-10-08"
}
suspicious_request = makeGraphqlRequest(suspicious_query,variables)
##The variable suspicious_request will contain the resulting data from the query.
results = suspicious_request['data']['proxy']['suspicious']
| spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Pandas Dataframes
The following cell loads the results into a pandas dataframe
To learn more about how to use pandas, see https://pandas.pydata.org/pandas-docs/stable/10min.html | df = pd.read_json(json.dumps(results))
##Printing only the selected column list from the dataframe
##Unless specified otherwise,
print df[['clientIp','uriQuery','datetime','clientToServerBytes','serverToClientBytes', 'host']]
| spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Additional operations
Additional operations can be performed on the dataframe, such as sorting, filtering, and grouping the data.
Filtering the data | ##Filter results where the client IP is 10.173.202.136
##The resulting data will be stored in df2
df2 = df[df['clientIp'].isin(['10.173.202.136'])]
print df2[['clientIp','uriQuery','datetime','host']] | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Ordering the data | srtd = df.sort_values(by="host")
print srtd[['host','clientIp','uriQuery','datetime']] | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Grouping the data | ## This command will group the results by pairs of client IP and host
## summarizing all other columns
grpd = df.groupby(['clientIp','host']).sum()
## This will print the resulting dataframe displaying the input and output bytes columns
print grpd[["clientToServerBytes","serverToClientBytes"]] | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Reset Scored Connections
Uncomment and execute the following cell to reset all scored connections for this day | # reset_scores = """mutation($date:SpotDateType!) {
# proxy{
# resetScoredConnections(date:$date){
# success
# }
# }
# }"""
# variables={
# 'date': datetime.datetime.strptime(date, '%Y%m%d').strftime('%Y-%m-%d')
# }
# request = makeGraphqlRequest(reset_scores,variables)
# print request['data']['proxy']['resetScoredConnections']['success'] | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
Sandbox
At this point you can perform your own analysis using the previously provided functions as a guide.
Happy threat hunting! | #Your code here | spot-oa/oa/proxy/ipynb_templates/Advanced_Mode_master.ipynb | LedaLima/incubator-spot | apache-2.0 |
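For example, one possible starting point (a sketch reusing the `df` dataframe and the column names shown in the cells above) is to rank hosts by the total bytes sent from clients:

```python
# Rank hosts by total client-to-server bytes and keep the ten largest
top_hosts = df.groupby('host')['clientToServerBytes'].sum().sort_values(ascending=False).head(10)
print top_hosts
```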
Load the database | reload(ximudata)
dbfilename = "/home/kjartan/Dropbox/Public/nvg201209.hdf5"
db = ximudata.NVGData(dbfilename); | notebooks/.ipynb_checkpoints/Get started-checkpoint.ipynb | alfkjartan/nvgimu | gpl-3.0 |
Explore contents of the database file | dbfile = db.hdfFile;
print "Subjects: ", dbfile.keys()
print "Trials: ", dbfile['S5'].keys()
print "IMUs: ", dbfile['S5/B'].keys()
print "Attributes of example trial", dbfile['S5/B'].attrs.keys()
print "Shape of example IMU data entry", dbfile['S5/B/N'].shape
| notebooks/.ipynb_checkpoints/Get started-checkpoint.ipynb | alfkjartan/nvgimu | gpl-3.0 |
The content of the raw IMU file
The columns of the IMU data contain the following (a short slicing sketch is given after the list):
0: Packet number,
1: Gyroscope X (deg/s),
2: Gyroscope Y (deg/s),
3: Gyroscope Z (deg/s),
4: Accelerometer X (g),
5: Accelerometer Y (g),
6: Accelerometer Z (g),
7: Magnetometer X (G),
8: Magnetometer Y (G),
9: Magnetometer Z (G)
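As a minimal sketch of how these columns could be sliced, assuming the raw records of one of the IMU datasets shown above (e.g. `dbfile['S5/B/N']`) are read into a NumPy array with this column layout:

```python
imu = dbfile['S5/B/N'][:]      # read the raw IMU records into a NumPy array
packet_numbers = imu[:, 0]     # column 0: packet number
gyro_deg_s = imu[:, 1:4]       # columns 1-3: gyroscope X/Y/Z (deg/s)
acc_g = imu[:, 4:7]            # columns 4-6: accelerometer X/Y/Z (g)
mag_gauss = imu[:, 7:10]       # columns 7-9: magnetometer X/Y/Z (G)
```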
Plot example data | db.plot_imu_data() | notebooks/.ipynb_checkpoints/Get started-checkpoint.ipynb | alfkjartan/nvgimu | gpl-3.0 |
Implemented analysis methods | print [s for s in dir(db) if s.startswith("get")] | notebooks/.ipynb_checkpoints/Get started-checkpoint.ipynb | alfkjartan/nvgimu | gpl-3.0 |
Is it possible to make the sketch so coarse that its estimates are wrong even for this data set? | s = Sketch(0.9, 0.9, 10)
f = open('CM_small.txt')
results_coarse_CM = CM_top_users(f,s)
print(results_coarse_CM) | count-min-101/CountMinSketch.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
Yes! (if you try enough) Why?
The 'w' parameter goes like ceiling(exp(1)/epsilon), which is always at least ~3.
The 'd' parameter goes like ceiling(log(1/delta)), which is always >= 1.
So you're dealing with a table with minimum size 3 x 1. With 10 records, it's possible that all 4 users map their counts to the same cell, so you can see an estimate as high as 10 in this case.
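As a quick sanity check of those two formulas (a sketch; the exact rounding used inside the `Sketch` class may differ slightly):

```python
import math

def sketch_dims(epsilon, delta):
    # width w ~ ceil(e / epsilon), depth d ~ ceil(ln(1 / delta))
    w = int(math.ceil(math.e / epsilon))
    d = int(math.ceil(math.log(1.0 / delta)))
    return w, d

print(sketch_dims(0.9, 0.9))      # (4, 1): a tiny table, so collisions are likely
print(sketch_dims(1e-4, 0.001))   # (27183, 7): the finer sketch used below
```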
Now for a larger data set. | f = open('CM_large.txt')
%time results_exact = exact_top_users(f)
print(results_exact)
# this could take a few minutes
f = open('CM_large.txt')
s = Sketch(10**-4, 0.001, 10)
%time results_CM = CM_top_users(f,s)
print(results_CM) | count-min-101/CountMinSketch.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
For this precision and dataset size, the CM algo takes much longer than the exact solution. In fact, the crossover point at which the CM sketch can achieve reasonable accuracy in the same time as the exact solution is a very large number of entries. | for item in zip(results_exact,results_CM):
print(item)
# the CM sketch gets the top entry (an outlier) correct but doesn't do well estimating the order of the more degenerate counts
# let's decrease the precision via both the epsilon and delta parameters, and see whether it still gets the "heavy-hitter" correct
f = open('CM_large.txt')
s = Sketch(10**-3, 0.01, 10)
%time results_CM = CM_top_users(f,s)
print(results_CM)
# nope...sketch is too coarse, too many collisions, and the prominence of user 'Euph0r1a__ 129' is obscured
for item in zip(results_exact,results_CM):
print(item) | count-min-101/CountMinSketch.ipynb | fionapigott/Data-Science-45min-Intros | unlicense |
Shape Constraints for Ethics with TensorFlow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/lattice/tutorials/shape_constraints_for_ethics"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This tutorial demonstrates how to use the TensorFlow Lattice (TFL) library to train models that behave responsibly and do not violate certain ethical or fairness assumptions. In particular, we will focus on using monotonicity constraints to avoid unfair penalization of certain attributes. This tutorial includes demonstrations of the experiments in the paper Deontological Ethics By Monotonicity Shape Constraints by Serena Wang and Maya Gupta, published at AISTATS 2020.
We will use TFL canned estimators on public datasets, but note that everything in this tutorial can also be done with models built from TFL Keras layers.
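For reference, a minimal sketch of what the Keras-layers route could look like (illustrative only: the keypoint ranges are placeholder assumptions, and the canned estimators used below remain this tutorial's actual approach):

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Each feature gets its own monotonic piecewise-linear calibrator,
# and the calibrated features are combined by a monotonic linear layer.
gpa_input = tf.keras.layers.Input(shape=(1,))
lsat_input = tf.keras.layers.Input(shape=(1,))
gpa_calib = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(1.5, 4.0, 20), monotonicity='increasing')(gpa_input)
lsat_calib = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 50.0, 20), monotonicity='increasing')(lsat_input)
combined = tf.keras.layers.Concatenate()([gpa_calib, lsat_calib])
output = tfl.layers.Linear(
    num_input_dims=2, monotonicities=['increasing', 'increasing'])(combined)
keras_model = tf.keras.Model(inputs=[gpa_input, lsat_input], outputs=output)
```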
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Install the TF Lattice package: | #@test {"skip": true}
!pip install tensorflow-lattice seaborn | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Import the required packages: | import tensorflow as tf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize) | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Default values used in this tutorial: | # List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master' | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Case study 1: Law school admissions
In the first part of this tutorial, we consider a case study using the Law School Admissions dataset from the Law School Admissions Council (LSAC). We will train a classifier to predict whether a student will pass the bar using two features: the student's LSAT score and undergraduate GPA.
Suppose the classifier's score is used to guide law school admissions or scholarship decisions. According to merit-based social norms, we would expect students with a higher GPA and a higher LSAT score to receive a higher score from the classifier. However, we will observe that it is easy for models to violate these intuitive norms, sometimes penalizing people for having a higher GPA or LSAT score.
To address this unfair penalization, we can impose monotonicity constraints so that, all else being equal, the model never penalizes a higher GPA or a higher LSAT score. In this tutorial, we will show how to impose these monotonicity constraints using TFL.
Load the law school data | # Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',') | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Preprocess the dataset: | # Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
# Drop rows with where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df) | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Split the data into train/validation/test sets | def split_dataset(input_df, random_state=888):
"""Splits an input dataset into train, val, and test sets."""
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df) | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Visualize the data distribution
First, we visualize the distribution of the data. We will plot the GPA and LSAT scores for all students who passed the bar and for all students who did not. | def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
# Adjust plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar') | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Train calibrated linear models to predict bar exam passage
Next, we will train calibrated linear models with TFL to predict whether a student will pass the bar. The two input features are the LSAT score and undergraduate GPA, and the training label is whether the student passed the bar.
We first train a calibrated linear model without any constraints. We then train a calibrated linear model with monotonicity constraints and observe the differences in model output and accuracy.
Helper functions for training TFL calibrated linear estimators
The following functions will be used both for this law school case study and for the credit default case study below. | def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
"""Trains a TFL calibrated linear estimator.
Args:
train_df: pandas dataframe containing training data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rate: learning rate of Adam optimizer for gradient descent.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
estimator: a trained TFL calibrated linear estimator.
"""
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
"""Optimizes learning rates for TFL estimators.
Args:
train_df: pandas dataframe containing training data.
val_df: pandas dataframe containing validation data.
test_df: pandas dataframe containing test data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rates: list of learning rates to try.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
A single TFL estimator that achieved the best validation accuracy.
"""
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index] | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Helper functions for configuring law school dataset features
The following helper functions are specific to the law school case study. | def get_input_fn_law(input_df, num_epochs, batch_size=None):
"""Gets TF input_fn for law school models."""
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
"""Gets TFL feature configs for law school models."""
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |
Helper functions for visualizing trained model outputs | def get_predicted_probabilities(estimator, input_df, get_input_fn):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20) | site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb | tensorflow/docs-l10n | apache-2.0 |