What kind of function is it? Checking with help gives the following.
help(g)
g
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
In other words, g is a function that takes a single argument, and it was defined by way of f_exp. In fact, g is defined as shown below. Because the exp2 function was substituted for the parameter f when fun_2_fun(f) was called to define g, g is the function

g(x) = fun_2_fun(exp2)(x)
     = f_exp(x)          # exp2 is used when f_exp is defined
     = exp2(x) ** x
     = (x**2) ** x
     = x ** (2*x)

Exercise 6, sample answer 2
def fun_2_fun(f):
    return lambda x: f(x) ** x

print(f1(2))
print(fun_2_fun(f1)(2))
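# A quick sketch (not part of the original answer) verifying the derivation above;
# it assumes exp2 is the squaring function defined earlier in the lab.
def exp2(x):
    return x ** 2

g_check = fun_2_fun(exp2)
print(g_check(3), 3 ** (2 * 3))   # both print 729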
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
First we will make a default NormalFault.
grid = RasterModelGrid((6, 6), xy_spacing=10)
grid.add_zeros("topographic__elevation", at="node")

nf = NormalFault(grid)

plt.figure()
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
plt.plot(grid.x_of_node, grid.y_of_node, "c.")
plt.show()
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
This fault has a strike of NE and dips to the SE. Thus the uplifted nodes (shown in yellow) are in the NW half of the domain. The default NormalFault will not uplift the boundary nodes. We change this by using the keyword argument include_boundaries. If this is specified, the elevation of the boundary nodes is calculated as an average of the faulted nodes adjacent to the boundaries. This occurs because most Landlab erosion components do not operate on boundary nodes.
nf = NormalFault(grid, include_boundaries=True)

plt.figure()
imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
plt.plot(grid.x_of_node, grid.y_of_node, "c.")
plt.show()
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
We can add functionality to the NormalFault with other keyword arguments. We can change the fault strike and dip, as well as specify a time series of fault uplift through time.
grid = RasterModelGrid((60, 100), xy_spacing=10)
z = grid.add_zeros("topographic__elevation", at="node")

nf = NormalFault(grid, fault_trace={"x1": 0, "y1": 200, "y2": 30, "x2": 600})

imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
By reversing the order of (x1, y1) and (x2, y2) we can reverse the location of the upthrown nodes (all else equal).
grid = RasterModelGrid((60, 100), xy_spacing=10)
z = grid.add_zeros("topographic__elevation", at="node")

nf = NormalFault(grid, fault_trace={"y1": 30, "x1": 600, "x2": 0, "y2": 200})

imshow_grid(grid, nf.faulted_nodes.astype(int), cmap="viridis")
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
We can also specify complex time-varying rock uplift rate histories, but we'll explore that later in the tutorial. Next let's make a landscape evolution model with a normal fault. Here we'll use a HexModelGrid to highlight that we can use both raster and non-raster grids with this component. We will do a series of three numerical experiments and will want to keep a few parameters constant across them. Since you might want to change them, they are all defined together in the next block so they are easy to modify:
# here are the parameters to change
K = 0.0005  # stream power coefficient, bigger = streams erode more quickly
U = 0.0001  # uplift rate in meters per year
dt = 1000  # time step in years
dx = 10  # space step in meters
nr = 60  # number of model rows
nc = 100  # number of model columns

# instantiate the grid
grid = HexModelGrid((nr, nc), dx, node_layout="rect")

# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)

fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(grid, fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500})

# Run this model for 300 1000-year timesteps (300,000 years).
for i in range(300):
    nf.run_one_step(dt)
    fr.run_one_step()
    fs.run_one_step(dt)
    z[grid.core_nodes] += U * dt

# plot the final topography
imshow_grid(grid, z)
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
As we can see, the upper-left portion of the grid has been uplifted and a stream network has developed over the whole domain. How might this change when we also uplift the boundary nodes?
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")

# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)

fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(
    grid, fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500}, include_boundaries=True
)

# Run this model for 300 1000-year timesteps (300,000 years).
for i in range(300):
    nf.run_one_step(dt)
    fr.run_one_step()
    fs.run_one_step(dt)
    z[grid.core_nodes] += U * dt

# plot the final topography
imshow_grid(grid, z)
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
We can see that when the boundary nodes are not included, the faulted region is affected by the edge boundary conditions differently. Depending on your application, one or the other of these boundary-condition options may suit your problem better. The last thing to explore is the fault_throw_rate_through_time parameter. This allows us to specify generic fault-throw-rate histories. For example, consider the following history, in which every 100,000 years there is a 10,000-year period during which the fault is active.
time = (
    np.array(
        [
            0.0,
            7.99,
            8.00,
            8.99,
            9.0,
            17.99,
            18.0,
            18.99,
            19.0,
            27.99,
            28.00,
            28.99,
            29.0,
        ]
    )
    * 10
    * dt
)
rate = np.array([0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0, 0, 0.01, 0.01, 0])

plt.figure()
plt.plot(time, rate)
plt.plot([0, 300 * dt], [0.001, 0.001])
plt.xlabel("Time [years]")
plt.ylabel("Fault Throw Rate [m/yr]")
plt.show()
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
The default value for uplift rate is 0.001 (units unspecified as it will depend on the x and t units in a model, but in this example we assume time units of years and length units of meters). This will result in a total of 300 m of fault throw over the 300,000 year model time period. This amount of uplift can also be accommodated by faster fault motion that occurs over shorter periods of time. Next we plot the cumulative fault throw for the two cases.
t = np.arange(0, 300 * dt, dt)

rate_constant = np.interp(t, [0, 300 * dt], [0.001, 0.001])
rate_variable = np.interp(t, time, rate)

cumulative_rock_uplift_constant = np.cumsum(rate_constant) * dt
cumulative_rock_uplift_variable = np.cumsum(rate_variable) * dt

plt.figure()
plt.plot(t, cumulative_rock_uplift_constant)
plt.plot(t, cumulative_rock_uplift_variable)
plt.xlabel("Time [years]")
plt.ylabel("Cumulative Fault Throw [m]")
plt.show()
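# Quick check (not in the original notebook): both throw histories should
# accumulate roughly the same total, about 300 m, by the end of the model run.
print(cumulative_rock_uplift_constant[-1], cumulative_rock_uplift_variable[-1])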
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
A technical note: Beyond the times specified, the internal workings of the NormalFault will use the final value provided in the rate array. Let's see how this changes the model results.
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")

# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)

fr = FlowAccumulator(grid)
fs = FastscapeEroder(grid, K_sp=K)
nf = NormalFault(
    grid,
    fault_throw_rate_through_time={"time": time, "rate": rate},
    fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500},
    include_boundaries=True,
)

# Run this model for 300 1000-year timesteps (300,000 years).
for i in range(300):
    nf.run_one_step(dt)
    fr.run_one_step()
    fs.run_one_step(dt)
    z[grid.core_nodes] += U * dt

# plot the final topography
imshow_grid(grid, z)
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
As you can see, the resulting topography is very different from the case with continuous uplift. For our final example, we'll use NormalFault in a more complicated model that includes both a soil layer and bedrock. In order to move, material must be converted from bedrock to soil by weathering. First we import the remaining modules and set some parameter values.
from landlab.components import DepthDependentDiffuser, ExponentialWeatherer

# here are the parameters to change
K = 0.0005  # stream power coefficient, bigger = streams erode more quickly
U = 0.0001  # uplift rate in meters per year

max_soil_production_rate = 0.001  # maximum weathering rate for bare bedrock in meters per year
soil_production_decay_depth = 0.7  # characteristic weathering depth in meters
linear_diffusivity = 0.01  # hillslope diffusivity in m2 per year
soil_transport_decay_depth = 0.5  # characteristic soil transport depth in meters

dt = 100  # time step in years
dx = 10  # space step in meters
nr = 60  # number of model rows
nc = 100  # number of model columns

?ExponentialWeatherer
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
Next we create the grid and run the model.
# instantiate the grid
grid = HexModelGrid((nr, nc), 10, node_layout="rect")

# add a topographic__elevation field with noise
z = grid.add_zeros("topographic__elevation", at="node")
z[grid.core_nodes] += 100.0 + np.random.randn(grid.core_nodes.size)

# create a field for soil depth
d = grid.add_zeros("soil__depth", at="node")

# create a bedrock elevation field
b = grid.add_zeros("bedrock__elevation", at="node")
b[:] = z - d

fr = FlowAccumulator(grid, depression_finder="DepressionFinderAndRouter", routing="D4")
fs = FastscapeEroder(grid, K_sp=K)
ew = ExponentialWeatherer(
    grid,
    soil_production__decay_depth=soil_production_decay_depth,
    soil_production__maximum_rate=max_soil_production_rate,
)
dd = DepthDependentDiffuser(
    grid,
    linear_diffusivity=linear_diffusivity,
    soil_transport_decay_depth=soil_transport_decay_depth,
)
nf = NormalFault(
    grid,
    fault_throw_rate_through_time={"time": [0, 30], "rate": [0.001, 0.001]},
    fault_trace={"x1": 0, "x2": 800, "y1": 0, "y2": 500},
    include_boundaries=False,
)

# Run this model for 300 100-year timesteps (30,000 years).
for i in range(300):
    # Move normal fault
    nf.run_one_step(dt)

    # Route flow
    fr.run_one_step()

    # Erode with water
    fs.run_one_step(dt)

    # We must also now erode the bedrock where relevant. If water erosion
    # into bedrock has occurred, the bedrock elevation will be higher than
    # the actual elevation, so we simply re-set bedrock elevation to the
    # lower of itself or the current elevation.
    b = grid.at_node["bedrock__elevation"]
    b[:] = np.minimum(b, grid.at_node["topographic__elevation"])

    # Calculate regolith-production rate
    ew.calc_soil_prod_rate()

    # Generate and move soil around. This component will update both the
    # soil thickness and topographic elevation fields.
    dd.run_one_step(dt)

    # uplift the whole domain, we need to do this to both bedrock and topography
    z[grid.core_nodes] += U * dt
    b[grid.core_nodes] += U * dt

# plot the final topography
imshow_grid(grid, "topographic__elevation")
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
We can also examine the soil thickness and the soil production rate. Looking at the soil depth, we see it is highest along the ridge crests.
# and the soil depth
imshow_grid(grid, "soil__depth", cmap="viridis")
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
The soil production rate is highest where the soil depth is low, as we would expect given the exponential form.
# and the soil production rate
imshow_grid(grid, "soil_production__rate", cmap="viridis")
notebooks/tutorials/normal_fault/normal_fault_component_tutorial.ipynb
landlab/landlab
mit
The data is small enough to be read into memory.
import gensim
from collections import namedtuple

SentimentDocument = namedtuple('SentimentDocument', 'words tags split sentiment')

alldocs = []  # will hold all docs in original order
with open('./data/new_parsed_no_spam.txt') as alldata:
    for line_no, line in enumerate(alldata):
        tokens = line.split()
        words = tokens[1:]
        tags = [line_no]  # `tags = [tokens[0]]` would also work at extra memory cost
        split = ['train', 'test', 'extra', 'extra'][line_no//70000]  # first 70k lines train, next 70k test, rest extra
        sentiment = [1.0, 0.0, 1.0, 0.0, None, None, None, None][line_no//70000]  # sentiment label per 70k block, then unknown
        alldocs.append(SentimentDocument(words, tags, split, sentiment))

train_docs = [doc for doc in alldocs if doc.split == 'train']
test_docs = [doc for doc in alldocs if doc.split == 'test']
doc_list = alldocs[:]  # for reshuffling per pass

print('%d docs: %d train-sentiment, %d test-sentiment' % (len(doc_list), len(train_docs), len(test_docs)))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Set-up Doc2Vec Training & Evaluation Models

Approximating the experiment of Le & Mikolov, "Distributed Representations of Sentences and Documents", also with guidance from Mikolov's example go.sh:

./word2vec -train ../alldata-id.txt -output vectors.txt -cbow 0 -size 100 -window 10 -negative 5 -hs 0 -sample 1e-4 -threads 40 -binary 0 -iter 20 -min-count 1 -sentence-vectors 1

Parameter choices below vary:

- 100-dimensional vectors, as the 400-d vectors of the paper don't seem to offer much benefit on this task
- similarly, frequent-word subsampling seems to decrease sentiment-prediction accuracy, so it's left out
- cbow=0 means skip-gram, which is equivalent to the paper's 'PV-DBOW' mode, matched in gensim with dm=0
- added to that DBOW model are two DM models, one which averages context vectors (dm_mean) and one which concatenates them (dm_concat, resulting in a much larger, slower, more data-hungry model)
- min_count=2 saves quite a bit of model memory, discarding only words that appear in a single doc (and are thus no more expressive than the unique-to-each-doc vectors themselves)
from gensim.models import Doc2Vec
import gensim.models.doc2vec
from collections import OrderedDict
import multiprocessing

cores = multiprocessing.cpu_count()
assert gensim.models.doc2vec.FAST_VERSION > -1, "this will be painfully slow otherwise"

simple_models = [
    # PV-DM w/concatenation - window=5 (both sides) approximates paper's 10-word total window size
    Doc2Vec(dm=1, dm_concat=1, size=100, window=5, negative=5, hs=0, min_count=2, workers=cores),
    # PV-DBOW
    Doc2Vec(dm=0, size=100, negative=5, hs=0, min_count=2, workers=cores),
    # PV-DM w/average
    Doc2Vec(dm=1, dm_mean=1, size=100, window=10, negative=5, hs=0, min_count=2, workers=cores),
]

# speed setup by sharing results of 1st model's vocabulary scan
simple_models[0].build_vocab(alldocs)  # PV-DM/concat requires one special NULL word so it serves as template
print(simple_models[0])
for model in simple_models[1:]:
    model.reset_from(simple_models[0])
    print(model)

models_by_name = OrderedDict((str(model), model) for model in simple_models)
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Following the paper, we also evaluate models in pairs. These wrappers return the concatenation of the vectors from each model. (Only the singular models are trained.)
from gensim.test.test_doc2vec import ConcatenatedDoc2Vec

models_by_name['dbow+dmm'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[2]])
models_by_name['dbow+dmc'] = ConcatenatedDoc2Vec([simple_models[1], simple_models[0]])
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Predictive Evaluation Methods

Helper methods for evaluating the error rate.
import numpy as np
import statsmodels.api as sm
from random import sample

# for timing
from contextlib import contextmanager
from timeit import default_timer
import time

@contextmanager
def elapsed_timer():
    start = default_timer()
    elapser = lambda: default_timer() - start
    yield lambda: elapser()
    end = default_timer()
    elapser = lambda: end - start

def logistic_predictor_from_data(train_targets, train_regressors):
    logit = sm.Logit(train_targets, train_regressors)
    predictor = logit.fit(disp=0)
    # print(predictor.summary())
    return predictor

def error_rate_for_model(test_model, train_set, test_set, infer=False, infer_steps=3, infer_alpha=0.1, infer_subsample=0.1):
    """Report error rate on test_set sentiments, using supplied model and train_set"""
    train_targets, train_regressors = zip(*[(doc.sentiment, test_model.docvecs[doc.tags[0]]) for doc in train_set])
    train_regressors = sm.add_constant(train_regressors)
    predictor = logistic_predictor_from_data(train_targets, train_regressors)

    test_data = test_set
    if infer:
        if infer_subsample < 1.0:
            test_data = sample(test_data, int(infer_subsample * len(test_data)))
        test_regressors = [test_model.infer_vector(doc.words, steps=infer_steps, alpha=infer_alpha) for doc in test_data]
    else:
        # use test_data (the selected test set) rather than the global test_docs
        test_regressors = [test_model.docvecs[doc.tags[0]] for doc in test_data]
    test_regressors = sm.add_constant(test_regressors)

    # predict & evaluate
    test_predictions = predictor.predict(test_regressors)
    corrects = sum(np.rint(test_predictions) == [doc.sentiment for doc in test_data])
    errors = len(test_predictions) - corrects
    error_rate = float(errors) / len(test_predictions)
    return (error_rate, errors, len(test_predictions), predictor)
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Bulk Training

Using the explicit multiple-pass, alpha-reduction approach sketched in the gensim doc2vec blog post, with added shuffling of the corpus on each pass. Note that vector training occurs on all documents of the dataset, which includes all TRAIN/TEST/DEV docs. Evaluation of each model's sentiment-predictive power is repeated after each pass, as an error rate (lower is better), to see the rates of relative improvement. The base numbers reuse the TRAIN and TEST vectors stored in the models for the logistic regression, while the inferred results use newly inferred TEST vectors. (On a 4-core 2.6 GHz Intel Core i7, these 20 passes of training and evaluating the 3 main models take about an hour.)
from collections import defaultdict
from random import shuffle
import datetime

best_error = defaultdict(lambda: 1.0)  # to selectively print only best errors achieved

alpha, min_alpha, passes = (0.025, 0.001, 20)
alpha_delta = (alpha - min_alpha) / passes

print("START %s" % datetime.datetime.now())

for epoch in range(passes):
    shuffle(doc_list)  # shuffling gets best results

    for name, train_model in models_by_name.items():
        # train
        duration = 'na'
        train_model.alpha, train_model.min_alpha = alpha, alpha
        with elapsed_timer() as elapsed:
            train_model.train(doc_list)
            duration = '%.1f' % elapsed()

        # evaluate
        eval_duration = ''
        with elapsed_timer() as eval_elapsed:
            err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs)
        eval_duration = '%.1f' % eval_elapsed()
        best_indicator = ' '
        if err <= best_error[name]:
            best_error[name] = err
            best_indicator = '*'
        print("%s%f : %i passes : %s %ss %ss" % (best_indicator, err, epoch + 1, name, duration, eval_duration))

        if ((epoch + 1) % 5) == 0 or epoch == 0:
            eval_duration = ''
            with elapsed_timer() as eval_elapsed:
                infer_err, err_count, test_count, predictor = error_rate_for_model(train_model, train_docs, test_docs, infer=True)
            eval_duration = '%.1f' % eval_elapsed()
            best_indicator = ' '
            if infer_err < best_error[name + '_inferred']:
                best_error[name + '_inferred'] = infer_err
                best_indicator = '*'
            print("%s%f : %i passes : %s %ss %ss" % (best_indicator, infer_err, epoch + 1, name + '_inferred', duration, eval_duration))

    print('completed pass %i at alpha %f' % (epoch + 1, alpha))
    alpha -= alpha_delta

print("END %s" % str(datetime.datetime.now()))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Achieved Sentiment-Prediction Accuracy
# print best error rates achieved
for rate, name in sorted((rate, name) for name, rate in best_error.items()):
    print("%f %s" % (rate, name))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
In my testing, unlike the paper's report, DBOW performs best. Concatenating vectors from different models only offers a small predictive improvement. The best results I've seen are still just under a 10% error rate, still a ways from the paper's 7.42%.

Examining Results

Are inferred vectors close to the precalculated ones?
doc_id = np.random.randint(simple_models[0].docvecs.count)  # pick random doc; re-run cell for more examples
print('for doc %d...' % doc_id)
for model in simple_models:
    inferred_docvec = model.infer_vector(alldocs[doc_id].words)
    print('%s:\n %s' % (model, model.docvecs.most_similar([inferred_docvec], topn=3)))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
(Yes, here the stored vector from 20 epochs of training is usually one of the closest to a freshly-inferred vector for the same words. Note the defaults for inference are very abbreviated – just 3 steps starting at a high alpha – and likely need tuning for other applications.) Do close documents seem more related than distant ones?
import random

doc_id = np.random.randint(simple_models[0].docvecs.count)  # pick random doc, re-run cell for more examples
model = random.choice(simple_models)  # and a random model
sims = model.docvecs.most_similar(doc_id, topn=model.docvecs.count)  # get *all* similar documents
print(u'TARGET (%d): «%s»\n' % (doc_id, ' '.join(alldocs[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
    print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(alldocs[sims[index][0]].words)))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
(Somewhat, in terms of reviewer tone, movie genre, etc... the MOST cosine-similar docs usually seem more like the TARGET than the MEDIAN or LEAST.) Do the word vectors show useful similarities?
word_models = simple_models[:]

import random
from IPython.display import HTML

# pick a random word with a suitable number of occurrences
while True:
    word = random.choice(word_models[0].index2word)
    if word_models[0].vocab[word].count > 10:
        break
# or uncomment below line, to just pick a word from the relevant domain:
# word = 'comedy/drama'

similars_per_model = [str(model.most_similar(word, topn=20)).replace('), ', '),<br>\n') for model in word_models]
similar_table = ("<table><tr><th>" + "</th><th>".join([str(model) for model in word_models]) +
                 "</th></tr><tr><td>" + "</td><td>".join(similars_per_model) +
                 "</td></tr></table>")
print("most similar words for '%s' (%d occurrences)" % (word, simple_models[0].vocab[word].count))
HTML(similar_table)
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Do the DBOW words look meaningless? That's because the gensim DBOW model doesn't train word vectors – they remain at their random initialized values – unless you ask with the dbow_words=1 initialization parameter. Concurrent word-training slows DBOW mode significantly, and offers little improvement (and sometimes a little worsening) of the error rate on this IMDB sentiment-prediction task. Words from DM models tend to show meaningfully similar words when there are many examples in the training data (as with 'plot' or 'actor'). (All DM modes inherently involve word vector training concurrent with doc vector training.) Are the word vectors from this dataset any good at analogies?
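For reference, enabling word-vector training in DBOW is a one-line change at model construction. The snippet below is a hypothetical instantiation mirroring the simple_models above (it is not used elsewhere in this notebook); the analogy test follows.

# Hypothetical DBOW model with concurrent word-vector training (dbow_words=1); slower, per the note above
dbow_with_words = Doc2Vec(dm=0, dbow_words=1, size=100, negative=5, hs=0,
                          min_count=2, workers=cores)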
# assuming something like
# https://word2vec.googlecode.com/svn/trunk/questions-words.txt
# is in local directory
# note: this takes many minutes
for model in word_models:
    sections = model.accuracy('questions-words.txt')
    correct, incorrect = len(sections[-1]['correct']), len(sections[-1]['incorrect'])
    print('%s: %0.2f%% correct (%d of %d)' % (model, float(correct*100)/(correct+incorrect), correct, correct+incorrect))
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Even though this is a tiny, domain-specific dataset, it shows some meager capability on the general word analogies – at least for the DM/concat and DM/mean models which actually train word vectors. (The untrained random-initialized words of the DBOW model of course fail miserably.)

Slop
This cell left intentionally erroneous.
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
To mix the Google dataset (if locally available) into the word tests...
from gensim.models import Word2Vec

w2v_g100b = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin.gz', binary=True)
w2v_g100b.compact_name = 'w2v_g100b'
word_models.append(w2v_g100b)
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
To get copious logging output from above steps...
import logging

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
rootLogger = logging.getLogger()
rootLogger.setLevel(logging.INFO)
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
To auto-reload python code while developing...
%load_ext autoreload
%autoreload 2
doc2vec-IMDB.ipynb
texib/pixnet_hackathon_2015
mit
Display mode: Pandas default
beakerx.pandas_display_default()
pd.read_csv('../resources/data/interest-rates.csv')
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Display mode: TableDisplay Widget
beakerx.pandas_display_table()
pd.read_csv('../resources/data/interest-rates.csv')
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Recognized Formats
TableDisplay([{'y1': 4, 'm3': 2, 'z2': 1}, {'m3': 4, 'z2': 2}])

TableDisplay({"x": 1, "y": 2})
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Programmable Table Actions
mapList4 = [
    {"a": 1, "b": 2, "c": 3},
    {"a": 4, "b": 5, "c": 6},
    {"a": 7, "b": 8, "c": 5},
]
display = TableDisplay(mapList4)

def dclick(row, column, tabledisplay):
    tabledisplay.values[row][column] = sum(map(int, tabledisplay.values[row]))

display.setDoubleClickAction(dclick)

def negate(row, column, tabledisplay):
    tabledisplay.values[row][column] = -1 * int(tabledisplay.values[row][column])

def incr(row, column, tabledisplay):
    tabledisplay.values[row][column] = int(tabledisplay.values[row][column]) + 1

display.addContextMenuItem("negate", negate)
display.addContextMenuItem("increment", incr)
display

mapList4 = [
    {"a": 1, "b": 2, "c": 3},
    {"a": 4, "b": 5, "c": 6},
    {"a": 7, "b": 8, "c": 5},
]
display = TableDisplay(mapList4)
# set what happens on a double click
display.setDoubleClickAction("runDoubleClick")
display

print("runDoubleClick fired")

print(display.details)
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Set index to DataFrame
df = pd.read_csv('../resources/data/interest-rates.csv')
df.set_index(['m3'])

df = pd.read_csv('../resources/data/interest-rates.csv')
df.index = df['time']
df
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Update cell
dataToUpdate = [
    {'a': 1, 'b': 2, 'c': 3},
    {'a': 4, 'b': 5, 'c': 6},
    {'a': 7, 'b': 8, 'c': 9},
]
tableToUpdate = TableDisplay(dataToUpdate)
tableToUpdate

tableToUpdate.values[0][0] = 99
tableToUpdate.sendModel()

tableToUpdate.updateCell(2, "c", 121)
tableToUpdate.sendModel()
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
HTML format

HTML format allows markup and styling of the cell's content. Interactive JavaScript is not supported, however.
table = TableDisplay({
    'w': '$2 \\sigma$',
    'x': '<em style="color:red">italic red</em>',
    'y': '<b style="color:blue">bold blue</b>',
    'z': 'strings without markup work fine too',
})
table.setStringFormatForColumn("Value", TableDisplayStringFormat.getHTMLFormat())
table
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Auto linking of URLs

The normal string format automatically detects URLs and links them. An underline appears when the mouse hovers over such a string, and when you click it, the link opens in a new window.
TableDisplay({'Two Sigma': 'http://twosigma.com', 'BeakerX': 'http://BeakerX.com'})
doc/python/TableAPI.ipynb
twosigma/beakerx
apache-2.0
Before we go any further let's take a look at this data by plotting it:
%matplotlib inline

# plot signals
import pylab as pl

# abdominal signals
for i in range(1, 6):
    pl.figure(figsize=(14, 3))
    pl.plot(time_steps, data[:, i], 'r')
    pl.title('Abdominal %d' % (i))
    pl.grid()
    pl.show()

# thoracic signals
for i in range(6, 9):
    pl.figure(figsize=(14, 3))
    pl.plot(time_steps, data[:, i], 'r')
    pl.title('Thoracic %d' % (i))
    pl.grid()
    pl.show()
doc/ipython-notebooks/ica/ecg_sep.ipynb
lisitsyn/shogun
bsd-3-clause
The peaks in the plot represent heartbeats, but they are pretty hard to interpret, and I certainly can't see two distinct signals; let's see what we can do with ICA! In general, for performing source separation we need at least as many mixed signals as the sources we hope to separate, and in this case we actually have many more (9 mixtures, but only 2 sources, mother and baby). There are several approaches for handling this situation: some algorithms are specifically designed for it, while in other cases the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the mixtures and then choose some of the extracted signals manually or using some other known criterion, which is what I'll show in this example. Now we create our ICA data set and convert it to a Shogun features type:
import shogun as sg

# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7, thoracic8, thoracic9]).T

# Convert to features for shogun
mixed_signals = sg.features((X).astype(np.float64))
doc/ipython-notebooks/ica/ecg_sep.ipynb
lisitsyn/shogun
bsd-3-clause
Next we apply the ICA algorithm to separate the sources:
# Separating with SOBI
sep = sg.transformer('SOBI')
sep.put('tau', 1.0*np.arange(0, 120))

sep.fit(mixed_signals)
signals = sep.transform(mixed_signals)

S_ = signals.get('feature_matrix')
doc/ipython-notebooks/ica/ecg_sep.ipynb
lisitsyn/shogun
bsd-3-clause
And we plot the separated signals:
# Show separation results

# Separated Signal i
for i in range(S_.shape[0]):
    pl.figure(figsize=(14, 3))
    pl.plot(time_steps, S_[i], 'r')
    pl.title('Separated Signal %d' % (i+1))
    pl.grid()
    pl.show()
doc/ipython-notebooks/ica/ecg_sep.ipynb
lisitsyn/shogun
bsd-3-clause
Before we do training, one thing that is often beneficial is to separate the dataset into training and testing. In this case, let's randomly shuffle the data, use the first 100 data points to do training, and the remaining 50 to do testing. For more sophisticated approaches, you can use e.g. cross validation to separate your dataset into multiple training and testing splits. Read more about cross validation here.
random_index = np.random.permutation(150)
features = features[random_index]
labels = labels[random_index]

train_features = features[:100]
train_labels = labels[:100]
test_features = features[100:]
test_labels = labels[100:]

# Let's plot the first two features together with the label.
# Remember, while we are plotting the testing feature distribution
# here too, you might not be supposed to do so in real research,
# because one should not peek into the testing data.
legend = ['rx', 'b+', 'go']
pyplot.title("Training data distribution, feature 0 and 1")
for i in range(3):
    pyplot.plot(train_features[train_labels==i, 0], train_features[train_labels==i, 1], legend[i])
pyplot.figure()
pyplot.title("Testing data distribution, feature 0 and 1")
for i in range(3):
    pyplot.plot(test_features[test_labels==i, 0], test_features[test_labels==i, 1], legend[i])
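# A minimal sketch of the k-fold cross-validation mentioned above (not part of
# the original tutorial); it only builds index splits, the training itself is up to you.
k = 5
fold_size = len(features) // k
for fold in range(k):
    val_idx = np.arange(fold * fold_size, (fold + 1) * fold_size)
    train_idx = np.setdiff1d(np.arange(len(features)), val_idx)
    # train on features[train_idx] / labels[train_idx],
    # evaluate on features[val_idx] / labels[val_idx]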
caffe2/python/tutorials/create_your_own_dataset.ipynb
Yangqing/caffe2
apache-2.0
Now, as promised, let's put things into a Caffe2 DB. In this DB, what would happen is that we will use "train_xxx" as the key, and use a TensorProtos object to store two tensors for each data point: one as the feature and one as the label. We will use Caffe2's Python DB interface to do so.
# First, let's see how one can construct a TensorProtos protocol buffer from numpy arrays.
feature_and_label = caffe2_pb2.TensorProtos()
feature_and_label.protos.extend([
    utils.NumpyArrayToCaffe2Tensor(features[0]),
    utils.NumpyArrayToCaffe2Tensor(labels[0])])
print('This is what the tensor proto looks like for a feature and its label:')
print(str(feature_and_label))
print('This is the compact string that gets written into the db:')
print(feature_and_label.SerializeToString())

# Now, actually write the db.
def write_db(db_type, db_name, features, labels):
    db = core.C.create_db(db_type, db_name, core.C.Mode.write)
    transaction = db.new_transaction()
    for i in range(features.shape[0]):
        feature_and_label = caffe2_pb2.TensorProtos()
        feature_and_label.protos.extend([
            utils.NumpyArrayToCaffe2Tensor(features[i]),
            utils.NumpyArrayToCaffe2Tensor(labels[i])])
        transaction.put(
            'train_%03d' % i,  # %-formatting, so each datum gets a distinct key
            feature_and_label.SerializeToString())
    # Close the transaction, and then close the db.
    del transaction
    del db

write_db("minidb", "iris_train.minidb", train_features, train_labels)
write_db("minidb", "iris_test.minidb", test_features, test_labels)
caffe2/python/tutorials/create_your_own_dataset.ipynb
Yangqing/caffe2
apache-2.0
Now, let's create a very simple network that only consists of one single TensorProtosDBInput operator, to showcase how we load data from the DB that we created. For training, you might want to do something more complex: creating a network, train it, get the model, and run the prediction service. To this end you can look at the MNIST tutorial for details.
net_proto = core.Net("example_reader")
dbreader = net_proto.CreateDB([], "dbreader", db="iris_train.minidb", db_type="minidb")
net_proto.TensorProtosDBInput([dbreader], ["X", "Y"], batch_size=16)

print("The net looks like this:")
print(str(net_proto.Proto()))

workspace.CreateNet(net_proto)

# Let's run it to get batches of features.
workspace.RunNet(net_proto.Proto().name)
print("The first batch of feature is:")
print(workspace.FetchBlob("X"))
print("The first batch of label is:")
print(workspace.FetchBlob("Y"))

# Let's run again.
workspace.RunNet(net_proto.Proto().name)
print("The second batch of feature is:")
print(workspace.FetchBlob("X"))
print("The second batch of label is:")
print(workspace.FetchBlob("Y"))
caffe2/python/tutorials/create_your_own_dataset.ipynb
Yangqing/caffe2
apache-2.0
2. Preparing the Inputs

The structure as specified is 64-atom diamond silicon at zero temperature and zero pressure. We use the minimal Tersoff potential [Fan 2020].

Generate the xyz.in file:

Create Si Unit Cell & Add Basis
a = 5.434
Si_UC = bulk('Si', 'diamond', a=a)
add_basis(Si_UC)
Si_UC
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
Transform Si to Cubic Supercell
# Create 8 atom diamond structure
Si = repeat(Si_UC, [2, 2, 1])
Si.set_cell([a, a, a])
Si.wrap()

# Complete full supercell
Si = repeat(Si, [2, 2, 2])
Si
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
Write xyz.in File
ase_atoms_to_gpumd(Si, M=4, cutoff=3)
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
Write basis.in File

The basis.in file reads:

2
0 28
4 28
0
0
0
0
1
1
1
1
...

Here the primitive cell is chosen as the unit cell. There are only two basis atoms in the unit cell, as indicated by the number 2 in the first line. The next two lines list the indices (0 and 4) and masses (both 28 amu) of the two basis atoms. The remaining lines map all the atoms (including the basis atoms) in the supercell to the basis atoms: atoms equivalent to atom 0 have the label 0, and atoms equivalent to atom 1 have the label 1.

Note: The basis.in file generated by this Jupyter notebook may look different, but the same concepts apply and the results will be the same.
create_basis(Si)
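# A hand-rolled sketch of the basis.in format described above (create_basis does this
# for you); the file name and the `mapping` labels here are illustrative only.
basis_atoms = [(0, 28), (4, 28)]      # (atom index, mass in amu) for the two basis atoms
mapping = [0, 0, 0, 0, 1, 1, 1, 1]    # one basis label per supercell atom (truncated)
with open('basis_example.in', 'w') as f:
    f.write('%d\n' % len(basis_atoms))
    for idx, mass in basis_atoms:
        f.write('%d %d\n' % (idx, mass))
    for label in mapping:
        f.write('%d\n' % label)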
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
Write kpoints.in File The $k$ vectors are defined in the reciprocal space with respect to the unit cell chosen in the basis.in file. We use the $\Gamma-X-K-\Gamma-L$ path, with 400 $k$ points in total.
linear_path, sym_points, labels = create_kpoints(Si_UC, path='GXKGL',npoints=400)
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
The <code>run.in</code> file:

The <code>run.in</code> input file is given below:

potential potentials/tersoff/Si_Fan_2019.txt 0
compute_phonon 5.0 0.005 # in units of A

The first line, with the potential keyword, states that the potential to be used is specified in the file Si_Fan_2019.txt. The second line, with the compute_phonon keyword, says that the force constants will be calculated with a cutoff of 5.0 $\mathring A$ (the point being that first and second nearest neighbors need to be included) and that a displacement of 0.005 $\mathring A$ will be used in the finite-displacement method.

3. Results and Discussion

Figure Properties
aw = 2
fs = 24
font = {'size': fs}
matplotlib.rc('font', **font)
matplotlib.rc('axes', linewidth=aw)

def set_fig_properties(ax_list):
    tl = 8
    tw = 2
    tlm = 4
    for ax in ax_list:
        ax.tick_params(which='major', length=tl, width=tw)
        ax.tick_params(which='minor', length=tlm, width=tw)
        ax.tick_params(which='both', axis='both', direction='in', right=True, top=True)
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
Plot Phonon Dispersion The omega2.out output file is loaded and processed to create the following figure. The previously defined kpoints are used for the $x$-axis.
nu = load_omega2()

figure(figsize=(10, 10))
set_fig_properties([gca()])
vlines(sym_points, ymin=0, ymax=17)
plot(linear_path, nu, color='C0', lw=3)
xlim([0, max(linear_path)])
gca().set_xticks(sym_points)
gca().set_xticklabels([r'$\Gamma$', 'X', 'K', r'$\Gamma$', 'L'])
ylim([0, 17])
ylabel(r'$\nu$ (THz)')
show()
examples/empirical_potentials/phonon_dispersion/Phonon Dispersion.ipynb
brucefan1983/GPUMD
gpl-3.0
List of data files:
from glob import glob

file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)

## Selection for POLIMI 2012-11-26 dataset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict

ph_sel_map = {'all-ph': Ph_sel('all'), 'Dex': Ph_sel(Dex='DAem'), 'DexDem': Ph_sel(Dex='Dem')}
ph_sel = ph_sel_map[ph_sel_name]

data_id, ph_sel_name
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Burst search and selection
bs_kws = dict(L=10, m=10, F=7, ph_sel=ph_sel)
d.burst_search(**bs_kws)

th1 = 30
ds = d.select_bursts(select_bursts.size, th1=30)

bursts = (bext.burst_data(ds, include_bg=True, include_ph_index=True)
          .round({'E': 6, 'S': 6, 'bg_d': 3, 'bg_a': 3, 'bg_aa': 3,
                  'nd': 3, 'na': 3, 'naa': 3, 'nda': 3, 'nt': 3, 'width_ms': 4}))
bursts.head()

burst_fname = ('results/bursts_usALEX_{sample}_{ph_sel}_F{F:.1f}_m{m}_size{th}.csv'
               .format(sample=data_id, th=th1, **bs_kws))
burst_fname

bursts.to_csv(burst_fname)

assert d.dir_ex == 0
assert d.leakage == 0

print(d.ph_sel)
dplot(d, hist_fret);

# if data_id in ['7d', '27d']:
#     ds = d.select_bursts(select_bursts.size, th1=20)
# else:
#     ds = d.select_bursts(select_bursts.size, th1=30)

ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]

def select_and_plot_ES(fret_sel, do_sel):
    ds_fret = ds.select_bursts(select_bursts.ES, **fret_sel)
    ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
    bpl.plot_ES_selection(ax, **fret_sel)
    bpl.plot_ES_selection(ax, **do_sel)
    return ds_fret, ds_do

ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)

if data_id == '7d':
    fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
    do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
    fret_sel = dict(E1=0.30, E2=1.2, S1=0.131, S2=0.9, rect=False)
    do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
    fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
    do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
    fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
    do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
    fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
    do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
    ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)

n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret

d_only_frac = 1. * n_bursts_do / (n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)

dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Donor Leakage fit

Half-Sample Mode

Fit the peak using the mode computed with the half-sample algorithm (Bickel 2005).
def hsm_mode(s):
    """
    Half-sample mode (HSM) estimator of `s`.

    `s` is a sample from a continuous distribution with a single peak.

    Reference:
        Bickel, Fruehwirth (2005). arXiv:math/0505419
    """
    s = memoryview(np.sort(s))
    i1 = 0
    i2 = len(s)

    while i2 - i1 > 3:
        n = (i2 - i1) // 2
        w = [s[n-1+i+i1] - s[i+i1] for i in range(n)]
        i1 = w.index(min(w)) + i1
        i2 = i1 + n

    if i2 - i1 == 3:
        if s[i1+1] - s[i1] < s[i2] - s[i1 + 1]:
            i2 -= 1
        elif s[i1+1] - s[i1] > s[i2] - s[i1 + 1]:
            i1 += 1
        else:
            i1 = i2 = i1 + 1

    return 0.5*(s[i1] + s[i2])

E_pr_do_hsm = hsm_mode(ds_do.E[0])
print("%s: E_peak(HSM) = %.2f%%" % (ds.ph_sel, E_pr_do_hsm*100))
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Gaussian Fit

Fit the histogram with a Gaussian:
E_fitter = bext.bursts_fitter(ds_do, weights=None)
E_fitter.histogram(bins=np.arange(-0.2, 1, 0.03))
E_fitter.fit_histogram(model=mfit.factory_gaussian())
E_fitter.params

res = E_fitter.fit_res[0]
res.params.pretty_print()

E_pr_do_gauss = res.best_values['center']
E_pr_do_gauss
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
KDE maximum
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]

E_fitter.calc_kde(bandwidth=bandwidth)
E_fitter.find_kde_max(E_ax, xmin=E_range_do[0], xmax=E_range_do[1])
E_pr_do_kde = E_fitter.kde_max_pos[0]
E_pr_do_kde
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Leakage summary
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, plot_model=False)
plt.axvline(E_pr_do_hsm, color='m', label='HSM')
plt.axvline(E_pr_do_gauss, color='k', label='Gauss')
plt.axvline(E_pr_do_kde, color='r', label='KDE')
plt.xlim(0, 0.3)
plt.legend()

print('Gauss: %.2f%%\n  KDE: %.2f%%\n  HSM: %.2f%%' %
      (E_pr_do_gauss*100, E_pr_do_kde*100, E_pr_do_hsm*100))
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Burst size distribution
nt_th1 = 50

dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)

Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
    ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
    nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th

plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)

nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
             'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
             'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
             'E_pr_do_kde E_pr_do_hsm E_pr_do_gauss nt_mean\n')
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
This is just a trick to format the different variables:
variables_csv = variables.replace(' ', ',')

fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
            **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}

var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)

print(variables_csv)
print(data_str)

# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-raw-%s.csv' % ph_sel_name, 'a') as f:
    f.seek(0, 2)
    if f.tell() == 0:
        f.write(variables_csv)
    f.write(data_str)
out_notebooks/usALEX-5samples-PR-raw-out-all-ph-22d.ipynb
tritemio/multispot_paper
mit
Numpy and Scipy
import numpy as np
from numpy import array, cos, diag, eye, linspace, pi
from numpy import poly1d, sign, sin, sqrt, where, zeros
from scipy.linalg import eigh, inv, det
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Matplotlib
%matplotlib inline
import matplotlib.pyplot as plt

plt.style.use('seaborn-paper')
plt.rcParams['figure.dpi'] = 115
plt.rcParams['figure.figsize'] = (7.5, 2.5)
plt.rcParams['axes.grid'] = True
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Miscellaneous definitions

In the following:

- ld and pmat are used to display mathematical formulas generated by the program,
- rounder ensures that a floating point number close to an integer is rounded correctly when formatted as an integer,
- p is a shorthand for poly1d, whose name is long and which requires a single sequence argument,
- vw computes the virtual work done by the moments m for the curvatures c, when the lengths of the beams are l, and
- p0_p1, given a sequence of values p, yields first p[0], p[1], then p[1], p[2], and so on.
def ld(*items):
    display(Latex('$$' + ' '.join(items) + '$$'))

def pmat(mat, env='bmatrix', fmt='%+f'):
    opener = '\\begin{'+env+'}\n  '
    closer = '\n\\end{'+env+'}'
    formatted = '\\\\\n  '.join('&'.join(fmt%elt for elt in row) for row in mat)
    return opener+formatted+closer

def rounder(mat):
    return mat+0.01*sign(mat)

def p(*l):
    return poly1d(l)

def vw(emme, chi, L):
    return sum(((m*c).integ()(l)-(m*c).integ()(0)) for (m, c, l) in zip(emme, chi, L))

def p0_p1(p):
    from itertools import tee
    a, b = tee(p)
    next(b, None)
    return zip(a, b)
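# A quick illustration (not in the original notebook) of how p0_p1 behaves:
# it yields successive overlapping pairs of its argument.
print(list(p0_p1([1, 2, 3, 4])))   # [(1, 2), (2, 3), (3, 4)]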
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
3 DOF System

Input motion

We need the imposed displacement, the imposed velocity (an intermediate result) and the imposed acceleration. It is convenient to express these quantities in terms of an adimensional time coordinate $a = \omega_0 t$,

\begin{align}
u &= \frac{4/3\,\omega_0 t - \sin(4/3\,\omega_0 t)}{2\pi} = \frac{\lambda_0 a - \sin(\lambda_0 a)}{2\pi},\\
\dot{u} &= \frac{4}{3}\omega_0 \frac{1-\cos(4/3\,\omega_0 t)}{2\pi} = \lambda_0 \omega_0 \frac{1-\cos(\lambda_0 a)}{2\pi},\\
\ddot{u} &= \frac{16}{9}\omega_0^2 \frac{\sin(4/3\,\omega_0 t)}{2\pi} = \lambda_0^2\omega_0^2 \frac{\sin(\lambda_0 a)}{2\pi},
\end{align}

with $\lambda_0=4/3$. The equations above are valid in the interval

$$ 0 \le t \le \frac{2\pi}{4/3\,\omega_0} \rightarrow 0 \le a \le \frac{3\pi}2 $$

(we have multiplied all terms by $\omega_0$ and simplified the last term). Following a similar reasoning, the plotting interval is taken as $0\le a\le2\pi$.
l0 = 4/3

# define a function to get back the time array and the 3 dependent vars
def a_uA_vA_aA(t0, t1, npoints):
    a = linspace(t0, t1, npoints)
    uA = where(a<3*pi/2, (l0*a-sin(l0*a))/2/pi, 1)
    vA = where(a<3*pi/2, (1-cos(l0*a))/2/pi, 0)
    aA = where(a<3*pi/2, 16*sin(l0*a)/18/pi, 0)
    return a, uA, vA, aA

# and use it
a, uA, vA, aA = a_uA_vA_aA(0, 2*pi, 501)
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The plots
plt.plot(a/pi, uA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$u_A/\delta$')
plt.title('Imposed support motion');

plt.plot(a/pi, vA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$\dot u_A/\delta\omega_0$')
plt.title('Imposed support velocity');

plt.plot(a/pi, aA)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$\ddot u_A/\delta\omega_0^2$')
plt.title('Imposed support acceleration');
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Equation of Motion

The EoM expressed in adimensional coordinates and using adimensional structural matrices is

$$ m\omega_0^2\hat{\boldsymbol M} \frac{\partial^2\boldsymbol x}{\partial a^2} + \frac{EJ}{L^3}\hat{\boldsymbol K}\boldsymbol x = -m \hat{\boldsymbol M} \boldsymbol e\, \omega_0^2 \frac{\partial^2 u_A}{\partial a^2}; $$

using the dot notation to denote derivatives with respect to $a$, if we divide both members by $m\omega_0^2$ we have

$$ \hat{\boldsymbol M} \ddot{\boldsymbol x} + \hat{\boldsymbol K}\boldsymbol x = -\hat{\boldsymbol M} \boldsymbol e\, \ddot{u}_A. $$

We must determine the influence vector $\boldsymbol e$ and the adimensional structural matrices.

Influence vector

To impose a horizontal displacement in $A$ we must remove one constraint, so that the structure has 1 DOF as a rigid system, and the influence vector must be determined by a kinematic analysis.
display(HTML(open('figures/trab1kin_conv.svg').read()))
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The left beam is constrained by a roller and by the right beam; the first requires that the Centre of Instantaneous Rotation (CIR) belongs to the vertical line in $A$, while the second requires that the CIR belongs to the line that connects the hinges of the right beam. The angles of rotation are $\theta_\text{left} = u_A/L$ and $\theta_\text{right} = -2 u_A/L$, and eventually we have $x_1=x_2=x_3=2u_A$ and

$$ \boldsymbol e = \begin{Bmatrix}2\\2\\2\end{Bmatrix}. $$
e = array((2.0, 2.0, 2.0))
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Structural Matrices
display(HTML(open('figures/trab1_conv.svg').read()))
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Compute the 3x3 flexibility matrix using the Principle of Virtual Displacements and the 3x3 stiffness matrix by inversion, while the mass matrix is directly assembled with the understanding that the lumped mass on $x_1$ is $2m$.

The code uses a structure m where each of the three rows contains the computational representation (as polynomial coefficients) of the bending moments due to a unit load applied in the position of each of the three degrees of freedom; in each row there are six groups of polynomial coefficients, one group for each of the six intervals of definition into which the structure has been subdivided (a possible seventh interval is omitted because there the bending moment is always zero for every possible unit load).
l = [1, 2, 2, 1, 1, 1]

h = 0.5 ; t = 3*h
m = [[p(2,0), p(h,0), p(h,1), p(h,0), p(h,h), p(1,0)],
     [p(2,0), p(1,0), p(0,2), p(1,0), p(1,1), p(2,0)],
     [p(2,0), p(h,0), p(h,1), p(h,0), p(t,h), p(2,0)]]

F = array([[vw(emme, chi, l) for emme in m] for chi in m])
K = inv(F)
M = array(((2.0, 0.0, 0.0),
           (0.0, 1.0, 0.0),
           (0.0, 0.0, 1.0)))
iM = inv(M)

ld('\\boldsymbol F = \\frac{L^3}{12EJ}\\,', pmat(rounder(F*12), fmt='%+d'))
ld('\\boldsymbol K = \\frac{3 EJ}{1588L^3}\\,', pmat(rounder(K*1588/3), fmt='%+d'),
   '= \\frac{EJ}{L^3}\\;\\hat{\\boldsymbol K}.')
ld('\\boldsymbol M = m\\,', pmat(M, fmt='%d'), '= m\\;\\hat{\\boldsymbol M}.')
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The eigenvalues problem

We solve the eigenvalue problem immediately because, once we know the shortest modal period of vibration, it is possible to choose the integration time step $h$ so as to avoid numerical instability issues with the linear acceleration algorithm.
wn2, Psi = eigh(K, M)
wn = sqrt(wn2)
li = wn
Lambda2 = diag(wn2)
Lambda = diag(wn)

# eigenvectors are normalized → M* is a unit matrix, as well as its inverse
Mstar, iMstar = eye(3), eye(3)

ld(r'\boldsymbol\Omega^2 = \omega_0^2\,', pmat(Lambda2), r'=\omega_0^2\,\boldsymbol\Lambda^2.')
ld(r'\boldsymbol\Omega=\omega_0\,', pmat(Lambda), r'=\omega_0\,\boldsymbol\Lambda.')
ld(r'\boldsymbol T_\text{n}=\frac{2\pi}{\omega_0}\,', pmat(inv(Lambda)), r'= t_0\,\boldsymbol\Theta.')
ld(r'\Psi=', pmat(Psi), '.')
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Numerical Integration

The shortest period is $T_3 = 2\pi\,0.562/\omega_0 \rightarrow A_3 = 1.124 \pi$, hence to avoid instability of the linear acceleration algorithm we shall use a non-dimensional time step $h<0.55A_3\approx0.6\pi$. We can anticipate that the modal response associated with mode 2 is important ($\lambda_2\approx\lambda_0$), so we choose an adimensional time step $h=A_2/20=2\pi\,0.760/20\approx0.08\pi$, much smaller than the maximum time step for which the algorithm is stable.

Initialization

First a new, longer adimensional time vector and the corresponding support acceleration, then the effective load vector (peff is an array with 3201 rows and 3 columns, each row corresponding to the force vector at a particular instant of time).
nsppi = 200
a, _, _, aA = a_uA_vA_aA(0, 16*pi, nsppi*16+1)
peff = (- M @ e) * aA[:,None]
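# Sanity check (not in the original): one effective-force row per time instant
print(a.shape, peff.shape)   # (3201,) and (3201, 3)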
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The constants that we need in the linear acceleration algorithm; note that we have an undamped system or, in other words, $\boldsymbol C = \boldsymbol 0$.
h = pi/nsppi

K_ = K + 6*M/h**2
F_ = inv(K_)

dp_v = 6*M/h
dp_a = 3*M
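# Rough check (not in the original) of the stability bound quoted above: for the
# linear acceleration method, h must stay below ~0.55 of the shortest modal period.
h_max = 0.55 * 2*pi / wn[-1]
print(h, h_max, h < h_max)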
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The integration loop

First we initialize the containers where the new results are saved with the initial values at $a=0$, then we loop on the values of the load at times $t_i$ and $t_{i+1}$, with $i=0,\ldots,3199$.
Xl, Vl = [zeros(3)], [zeros(3)]

for p0, p1 in p0_p1(peff):
    x0, v0 = Xl[-1], Vl[-1]
    a0 = iM @ (p0 - K@x0)
    dp = (p1-p0) + dp_a@a0 + dp_v@v0
    dx = F_@dp
    dv = 3*dx/h - 3*v0 - a0*h/2
    Xl.append(x0+dx), Vl.append(v0+dv)

Xl = array(Xl) ; Vl = array(Vl)
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Plotting
for i, line in enumerate(plt.plot(a/pi, Xl), 1):
    line.set_label(r'$x_{%d}$'%i)
plt.xlabel(r'$\omega_0 t/\pi$')
plt.ylabel(r'$x_i/\delta$')
plt.title('Response - numerical integration - lin. acc.')
plt.legend();
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Equation of Motion

Denoting with $\boldsymbol x$ the dynamic component of the displacements, with $\boldsymbol x_\text{tot} = \boldsymbol x + \boldsymbol x_\text{stat} = \boldsymbol x + \boldsymbol e \;u_\mathcal{A}$, the equation of motion is (the independent variable being $a=\omega_0t$)

$$ \hat{\boldsymbol M} \ddot{\boldsymbol x} + \hat{\boldsymbol K} \boldsymbol x = - \hat{\boldsymbol M} \boldsymbol e \ddot u_\mathcal{A}. $$

Using mass-normalized eigenvectors, with $\boldsymbol x = \delta\boldsymbol\Psi\boldsymbol q$ we have

$$ \boldsymbol I \ddot{\boldsymbol q} + \boldsymbol\Lambda^2\boldsymbol q = -\boldsymbol\Psi^T\hat{\boldsymbol M} \boldsymbol e \frac{\ddot u_A}{\delta}.$$

It is

$$\frac{\ddot u_A}{\delta} = \frac{1}{2\pi}\,\lambda_0^2\,\sin(\lambda_0a)$$

and

$$ \ddot q_i + \lambda_i^2 q_i = \frac{\Gamma_i}{2\pi}\,\lambda_0^2\,\sin(\lambda_0 a),\qquad\text{with } \Gamma_i = -\boldsymbol\psi_i^T \hat{\boldsymbol M} \boldsymbol e\text{ and } \lambda_0 = \frac43.$$
G = - Psi.T @ M @ e
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Substituting a particular integral $\xi_i=C_i\sin(\lambda_0 a)$ in the modal equation of motion we have $$(\lambda^2_i-\lambda^2_0)\,C_i\sin(\lambda_0 a) = \frac{\Gamma_i}{2\pi}\,\lambda_0^2\,\sin(\lambda_0 a)$$ and solving w/r to $C_i$ we have $$ C_i = \frac{\Gamma_i}{2\pi}\,\frac{\lambda_0^2}{\lambda_i^2-\lambda_0^2}$$
C = G*l0**2/(li**2-l0**2)/2/pi
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The modal response, taking into account that we start from rest conditions, is $$ q_i = C_i\left(\sin(\lambda_0 a) - \frac{\lambda_0}{\lambda_i}\,\sin(\lambda_i a)\right)$$ $$ \dot q_i = \lambda_0 C_i \left( \cos(\lambda_0 a) - \cos(\lambda_i a) \right).$$
for n in range(3):
    i = n+1
    ld(r'q_%d=%+10f\left(\sin\frac43a-%10f\sin%1fa\right)' % (i, C[n], l0/li[n], li[n]),
       r'\qquad\text{for }0 \le a \le \frac32\pi')
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Free vibration phase, $a\ge 3\pi/2 = a_1$

When the forced phase ends, the system is in free vibration and we can determine the constants of integration by requiring that the displacements and velocities of the free vibration equal the displacements and velocities of the forced response at $a=a_1$.

\begin{align}
+ (\cos\lambda_i a_1)\, A_i + (\sin\lambda_i a_1)\, B_i &= q_i(a_1), \\
- (\sin\lambda_i a_1)\, A_i + (\cos\lambda_i a_1)\, B_i &= \frac{\dot q_i(a_1)}{\lambda_i}.
\end{align}

Because the coefficients form an orthogonal matrix,

\begin{align}
A_i &= + (\cos\lambda_i a_1)\, q_i(a_1) - (\sin\lambda_i a_1)\, \frac{\dot q_i(a_1)}{\lambda_i},\\
B_i &= + (\sin\lambda_i a_1)\, q_i(a_1) + (\cos\lambda_i a_1)\, \frac{\dot q_i(a_1)}{\lambda_i}.
\end{align}
a1 = 3*pi/2

q_a1 = C*(sin(l0*a1)-l0*sin(li*a1)/li)
v_a1 = C*l0*(cos(l0*a1)-cos(li*a1))

ABs = []
for i in range(3):
    b = array((q_a1[i], v_a1[i]/li[i]))
    A = array(((+cos(li[i]*a1), -sin(li[i]*a1)),
               (+sin(li[i]*a1), +cos(li[i]*a1))))
    ABs.append(A@b)
ABs = array(ABs)
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
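To confirm that the matching worked, a minimal check (using the arrays computed above) is to evaluate the free-vibration expressions at $a_1$ and compare them with the end of the forced phase:

q_free = ABs[:, 0]*cos(li*a1) + ABs[:, 1]*sin(li*a1)
v_free = li*(-ABs[:, 0]*sin(li*a1) + ABs[:, 1]*cos(li*a1))
print(q_free - q_a1)   # expected ~0
print(v_free - v_a1)   # expected ~0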
Analytical expressions
display(Latex(r'Modal responses for $a_1 \le a$.'))
for n in range(3):
    i, l, A_, B_ = n+1, li[n], *ABs[n]
    display(Latex((r'$$q_{%d} = ' +
                   r'%+6.3f\cos%6.3fa ' +
                   r'%+6.3f\sin%6.3fa$$') % (i, A_, l, B_, l)))
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Stitching the two responses We now evaluate the analytical responses numerically, using the forced-phase expression for $a\le a_1$ and the free-vibration expression for $a>a_1$.
ac = a[:,None]
q = where(ac<=a1,
          C*(sin(l0*ac)-l0*sin(li*ac)/li),
          ABs[:,0]*cos(li*ac) + ABs[:,1]*sin(li*ac))
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Plotting the Analytical Response First, we zoom around $a_1$ to verify the continuity of displacements and velocities
# #### Plot zooming around a1
low, hi = int(0.8*a1*nsppi/pi), int(1.2*a1*nsppi/pi)
for i, line in enumerate(plt.plot(a[low:hi]/pi, q[low:hi]), 1):
    line.set_label('$q_{%d}$' % i)
plt.title('Modal Responses, zoom on transition zone')
plt.xlabel(r'$\omega_0 t/\pi$')
plt.legend(loc='best')
plt.show()
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
next, the modal responses over the interval $0 \le a \le 16\pi$
# #### Plot in 0 ≀ a ≀ 16 pi
for i, line in enumerate(plt.plot(a/pi, q), 1):
    line.set_label('$q_{%d}$' % i)
plt.title('Modal Responses')
plt.xlabel(r'$\omega_0 t/\pi$')
plt.legend(loc='best')
plt.xticks()
plt.show();
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Nodal responses
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
Why x = q@Psi.T rather than x = Psi@q? Because, for different reasons (mostly ease of use with the plotting libraries), we have all the response arrays organized with shape (Nsteps Γ— 3). That is equivalent to saying that the Python objects q and x correspond to $\boldsymbol q^T$ and $\boldsymbol x^T$, and because $$\boldsymbol x^T = (\boldsymbol\Psi \boldsymbol q)^T = \boldsymbol q^T \boldsymbol \Psi^T,$$ in Python we write x = q @ Psi.T. That said, here are the plots of the nodal responses. Compare with the numerical solutions.
# nodal responses from the modal ones (see the discussion above)
x = q @ Psi.T
for i, line in enumerate(plt.plot(a/pi, x), 1):
    line.set_label(r'$x_{%d}/\delta$' % i)
plt.title('Normalized Nodal Displacements β€” analytical solution')
plt.xlabel(r'$\omega_0 t / \pi$')
plt.legend(loc='best')
plt.show();
dati_2017/hw03/01.ipynb
boffi/boffi.github.io
mit
The mt_obj contains all the data from the edi file, e.g. impedance, tipper, frequency as well as station information (lat/long). To look at any of these parameters you can type, for example:
# To see the latitude and longitude
print(mt_obj.lat, mt_obj.lon)
# To see the easting, northing, and elevation
print(mt_obj.east, mt_obj.north, mt_obj.elev)
examples/workshop/Workshop Exercises Core.ipynb
MTgeophysics/mtpy
gpl-3.0
There are many other parameters you can look at in the mt_obj. Just type mt_obj.[TAB] to see what is available. In the MT object are the Z and Tipper objects (mt_obj.Z; mt_obj.Tipper). These contain all information related to, respectively, the impedance tensor and the tipper.
# for example, to see the frequency values represented in the impedance tensor:
print(mt_obj.Z.freq)
# or to see the impedance tensor (first 4 elements)
print(mt_obj.Z.z[:4])
# or the resistivity or phase (first 4 values)
print(mt_obj.Z.resistivity[:4])
print(mt_obj.Z.phase[:4])
examples/workshop/Workshop Exercises Core.ipynb
MTgeophysics/mtpy
gpl-3.0
As with the MT object, you can explore the object by typing mt_obj.Z.[TAB] to see the available attributes. Plot an edi file In this example we plot MT data from an edi file.
# import required modules
from mtpy.core.mt import MT
import os

# Define the path to your edi file and save path
edi_file = "C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi"
savepath = r"C:/tmp"

# Create an MT object
mt_obj = MT(edi_file)

# To plot the edi file we read in in Part 1 & save to file:
pt_obj = mt_obj.plot_mt_response(plot_num=1,        # 1 = yx and xy; 2 = all 4 components
                                                    # 3 = off diagonal + determinant
                                 plot_tipper='yri',
                                 plot_pt='y'        # plot phase tensor 'y' or 'n'
                                 )
#pt_obj.save_plot(os.path.join(savepath, "Synth00.png"), fig_dpi=400)
examples/workshop/Workshop Exercises Core.ipynb
MTgeophysics/mtpy
gpl-3.0
Make some change to the data and save to a new file This example demonstrates how to resample the data onto new frequency values and write to a new edi file. In the example below, you can either choose every second frequency or resample onto five periods per decade. To do this we need to make a new Z object, and save it to a file.
# import required modules
from mtpy.core.mt import MT
from mtpy.utils.calculator import get_period_list
import os

# Define the path to your edi file and save path
edi_file = r"C:/mtpywin/mtpy/examples/data/edi_files_2/Synth00.edi"
savepath = r"C:/tmp"

# Create an MT object
mt_obj = MT(edi_file)

# First, define a frequency array:
# Every second frequency:
new_freq_list = mt_obj.Z.freq[::2]
# OR 5 periods per decade from 10^-4 to 10^3 seconds
new_freq_list = 1./get_period_list(1e-4, 1e3, 5)

# Create new Z and Tipper objects containing interpolated data
new_Z_obj, new_Tipper_obj = mt_obj.interpolate(new_freq_list)

# Write a new edi file using the new data
mt_obj.write_mt_file(save_dir=savepath,
                     fn_basename='Synth00_5ppd',
                     file_type='edi',
                     new_Z_obj=new_Z_obj,            # provide a z object to update the data
                     new_Tipper_obj=new_Tipper_obj,  # provide a tipper object
                     longitude_format='LONG',        # write longitudes as 'LONG' not 'LON'
                     latlon_format='dd'              # write as decimal degrees (any other input
                                                     # will write as degrees:minutes:seconds)
                     )
examples/workshop/Workshop Exercises Core.ipynb
MTgeophysics/mtpy
gpl-3.0
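Note that the interpolation is only meaningful inside the measured frequency band; a minimal guard (a sketch using only the attributes shown above) would be:

# restrict the new frequencies to the measured band before interpolating,
# since extrapolating outside the data is generally not meaningful
fmin, fmax = mt_obj.Z.freq.min(), mt_obj.Z.freq.max()
new_freq_list = new_freq_list[(new_freq_list >= fmin) & (new_freq_list <= fmax)]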
We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark. This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy). Note: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section.
import numpy as np
import openmc
import openmc.mgxs

# Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant)
groups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8))

uo2_xsdata = openmc.XSdata('uo2', groups)
uo2_xsdata.order = 0

# When setting the data let the object know you are setting the data for a temperature of 294K.
uo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,
                      3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.)
uo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2,
                           3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.)
uo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2,
                        1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.)
uo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2,
                           4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.)
uo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7,
                    0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.)
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
We will now add the scattering matrix data. Note: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing groups, Scattering order] to best enable other scattering representations. We will follow the more familiar approach in this notebook, and then use numpy's rollaxis function to change the ordering to what we need (scattering order on the inner dimension).
# The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns
# (i.e., below the diagonal is up-scattering).
scatter_matrix = \
    [[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
      [0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
      [0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0],
      [0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0],
      [0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],
      [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2],
      [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]]
scatter_matrix = np.array(scatter_matrix)
scatter_matrix = np.rollaxis(scatter_matrix, 0, 3)
uo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.)
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
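A quick way to confirm that the reordering did what we wanted (a small sketch using the scatter_matrix built above):

# after rollaxis the shape should be [Incoming groups, Outgoing groups, Scattering order]
print(scatter_matrix.shape)                 # expected: (7, 7, 1)
assert scatter_matrix.shape == (7, 7, 1)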
Now that the UO2 data has been created, we can move on to the remaining materials using the same process. However, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem. For now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk.
# Initialize the library
mg_cross_sections_file = openmc.MGXSLibrary(groups)

# Add the UO2 data to it
mg_cross_sections_file.add_xsdata(uo2_xsdata)

# And write to disk
mg_cross_sections_file.export_to_hdf5('mgxs.h5')
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
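If desired, the file can be re-opened to confirm that it round-trips correctly; a minimal sketch (assuming the export above succeeded):

# re-read the library we just wrote and inspect its group structure
lib_check = openmc.MGXSLibrary.from_hdf5('mgxs.h5')
print(lib_check.energy_groups.group_edges)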
Generate 2-D C5G7 Problem Input Files To build the actual 2-D model, we will begin by creating the materials.xml file. First we need to define the materials that will be used in the problem. In other notebooks, either openmc.Nuclides or openmc.Elements were created at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross sections are sometimes provided as macroscopic cross sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use openmc.Macroscopic objects in place of openmc.Nuclide or openmc.Element objects. openmc.Macroscopic objects, unlike openmc.Nuclide and openmc.Element objects, do not need to be provided enough information to calculate number densities, as no number densities are needed. When assigning openmc.Macroscopic objects to openmc.Material objects, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift); a small illustration of this follows the next code cell. The density of a macroscopic dataset defaults to 1.0 in the openmc.Material object when an openmc.Macroscopic dataset is used, so we will show its use the first time and afterwards it will not be required. Aside from these differences, the following code is very similar to the corresponding code in other OpenMC example notebooks.
# For every cross section data set in the library, assign an openmc.Macroscopic object to a material
materials = {}
for xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']:
    materials[xs] = openmc.Material(name=xs)
    materials[xs].set_density('macro', 1.)
    materials[xs].add_macroscopic(xs)
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
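As noted above, the density of a material built from openmc.Macroscopic data acts as a simple scaling factor. Purely as an illustration (the 1% perturbation is hypothetical and is immediately undone so the benchmark values are preserved):

# hypothetical example: represent water that is about 1% less dense
materials['water'].set_density('macro', 0.99)
# restore the benchmark value before exporting the materials
materials['water'].set_density('macro', 1.0)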
Now we can go ahead and produce a materials.xml file for use by OpenMC
# Instantiate a Materials collection, register all Materials, and export to XML
materials_file = openmc.Materials(materials.values())

# Set the location of the cross sections file to our pre-written set
materials_file.cross_sections = 'c5g7.h5'

materials_file.export_to_xml()
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file. We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.
# Create the surface used for each pin
pin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf')

# Create the cells which will be used to represent each pin type.
cells = {}
universes = {}
for material in materials.values():
    # Create the cell for the material inside the cladding
    cells[material.name] = openmc.Cell(name=material.name)
    # Assign the half-spaces to the cell
    cells[material.name].region = -pin_surf
    # Register the material with this cell
    cells[material.name].fill = material

    # Repeat the above for the material outside the cladding (i.e., the moderator)
    cell_name = material.name + '_moderator'
    cells[cell_name] = openmc.Cell(name=cell_name)
    cells[cell_name].region = +pin_surf
    cells[cell_name].fill = materials['water']

    # Finally add the two cells we just made to a Universe object
    universes[material.name] = openmc.Universe(name=material.name)
    universes[material.name].add_cells([cells[material.name], cells[cell_name]])
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types
lattices = {}

# Instantiate the UO2 Lattice
lattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly')
lattices['UO2 Assembly'].dimension = [17, 17]
lattices['UO2 Assembly'].lower_left = [-10.71, -10.71]
lattices['UO2 Assembly'].pitch = [1.26, 1.26]
u = universes['uo2']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['UO2 Assembly'].universes = \
    [[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
     [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
     [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
     [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]]

# Create a containing cell and universe
cells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly')
cells['UO2 Assembly'].fill = lattices['UO2 Assembly']
universes['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly')
universes['UO2 Assembly'].add_cell(cells['UO2 Assembly'])

# Instantiate the MOX Lattice
lattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly')
lattices['MOX Assembly'].dimension = [17, 17]
lattices['MOX Assembly'].lower_left = [-10.71, -10.71]
lattices['MOX Assembly'].pitch = [1.26, 1.26]
m = universes['mox43']
n = universes['mox7']
o = universes['mox87']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['MOX Assembly'].universes = \
    [[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m],
     [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
     [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
     [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
     [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
     [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
     [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
     [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
     [m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m],
     [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
     [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
     [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
     [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
     [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
     [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
     [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
     [m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]]

# Create a containing cell and universe
cells['MOX Assembly'] = openmc.Cell(name='MOX Assembly')
cells['MOX Assembly'].fill = lattices['MOX Assembly']
universes['MOX Assembly'] = openmc.Universe(name='MOX Assembly')
universes['MOX Assembly'].add_cell(cells['MOX Assembly'])

# Instantiate the reflector Lattice
lattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly')
lattices['Reflector Assembly'].dimension = [1, 1]
lattices['Reflector Assembly'].lower_left = [-10.71, -10.71]
lattices['Reflector Assembly'].pitch = [21.42, 21.42]
lattices['Reflector Assembly'].universes = [[universes['water']]]

# Create a containing cell and universe
cells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly')
cells['Reflector Assembly'].fill = lattices['Reflector Assembly']
universes['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly')
universes['Reflector Assembly'].add_cell(cells['Reflector Assembly'])
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
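Before assembling the full core it can be useful to take a quick look at a single assembly; a minimal sketch using the same plotting call that we use for the whole core further below:

universes['UO2 Assembly'].plot(center=(0., 0., 0.), width=(21.42, 21.42),
                               pixels=(300, 300), color_by='material')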
Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined. After that we can create the final cell to contain the entire core.
lattices['Core'] = openmc.RectLattice(name='3x3 core lattice')
lattices['Core'].dimension = [3, 3]
lattices['Core'].lower_left = [-32.13, -32.13]
lattices['Core'].pitch = [21.42, 21.42]
r = universes['Reflector Assembly']
u = universes['UO2 Assembly']
m = universes['MOX Assembly']
lattices['Core'].universes = [[u, m, r],
                              [m, u, r],
                              [r, r, r]]

# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-32.13, boundary_type='reflective')
max_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum')
min_y = openmc.YPlane(y0=-32.13, boundary_type='vacuum')
max_y = openmc.YPlane(y0=+32.13, boundary_type='reflective')

# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = lattices['Core']

# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y

# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
Before we commit to the geometry, we should view it using the Python API's plotting capability
root_universe.plot(center=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500), color_by='material')
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
OK, it looks pretty good, let's go ahead and write the file
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)

# Export to "geometry.xml"
geometry.export_to_xml()
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.
tallies_file = openmc.Tallies()

# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [17 * 2, 17 * 2]
mesh.lower_left = [-32.13, -10.71]
mesh.upper_right = [+10.71, +32.13]

# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)

# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']

# Add tally to collection
tallies_file.append(tally)

# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain. This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!
# OpenMC simulation parameters
batches = 150
inactive = 50
particles = 5000

# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles

# Tell OpenMC this is a multi-group problem
settings_file.energy_mode = 'multi-group'

# Set the verbosity to 6 so we don't see output for every batch
settings_file.verbosity = 6

# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)

# Tell OpenMC we want to run in eigenvalue mode
settings_file.run_mode = 'eigenvalue'

# Export to "settings.xml"
settings_file.export_to_xml()
docs/source/examples/mg-mode-part-i.ipynb
bhermanmit/openmc
mit
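With all of the XML files in place, the calculation could now be launched and the mesh tally read back afterwards. A minimal sketch (assuming the run completes and OpenMC writes its default statepoint file, statepoint.150.h5, which follows from the 150 batches requested above):

# run the multi-group eigenvalue calculation using the XML files written above
openmc.run()

# read the fission-rate (pin power) tally back from the statepoint file
sp = openmc.StatePoint('statepoint.150.h5')
mesh_tally = sp.get_tally(name='mesh tally')
fission_rates = mesh_tally.get_values(scores=['fission'])
# reshape onto the 34 x 34 tally mesh defined earlier
fission_rates.shape = (34, 34)
print(fission_rates.max())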