| markdown (stringlengths 0-37k) | code (stringlengths 1-33.3k) | path (stringlengths 8-215) | repo_name (stringlengths 6-77) | license (stringclasses, 15 values) |
---|---|---|---|---|
Object Detection
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/object_detection"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View 在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/object_detection.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/object_detection.ipynb"> <img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png"> 在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/object_detection.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
<td><a href="https://tfhub.dev/s?q=google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1%20OR%20google%2Ffaster_rcnn%2Fopenimages_v4%2Finception_resnet_v2%2F1"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a></td>
</table>
This Colab demonstrates how to use a trained TF-Hub module to perform object detection.
Setup | #@title Imports and function definitions
# For running inference on the TF-Hub module.
import tensorflow as tf
import tensorflow_hub as hub
# For downloading the image.
import matplotlib.pyplot as plt
import tempfile
from six.moves.urllib.request import urlopen
from six import BytesIO
# For drawing onto the image.
import numpy as np
from PIL import Image
from PIL import ImageColor
from PIL import ImageDraw
from PIL import ImageFont
from PIL import ImageOps
# For measuring the inference time.
import time
# Print Tensorflow version
print(tf.__version__)
# Check available GPU devices.
print("The following GPU devices are available: %s" % tf.test.gpu_device_name()) | site/zh-cn/hub/tutorials/object_detection.ipynb | tensorflow/docs-l10n | apache-2.0 |
Example use
Helper functions for downloading images and for visualization.
Visualization code adapted from the TF object detection API for the simplest required functionality. | def display_image(image):
fig = plt.figure(figsize=(20, 15))
plt.grid(False)
plt.imshow(image)
def download_and_resize_image(url, new_width=256, new_height=256,
display=False):
_, filename = tempfile.mkstemp(suffix=".jpg")
response = urlopen(url)
image_data = response.read()
image_data = BytesIO(image_data)
pil_image = Image.open(image_data)
pil_image = ImageOps.fit(pil_image, (new_width, new_height), Image.ANTIALIAS)
pil_image_rgb = pil_image.convert("RGB")
pil_image_rgb.save(filename, format="JPEG", quality=90)
print("Image downloaded to %s." % filename)
if display:
display_image(pil_image)
return filename
def draw_bounding_box_on_image(image,
ymin,
xmin,
ymax,
xmax,
color,
font,
thickness=4,
display_str_list=()):
"""Adds a bounding box to an image."""
draw = ImageDraw.Draw(image)
im_width, im_height = image.size
(left, right, top, bottom) = (xmin * im_width, xmax * im_width,
ymin * im_height, ymax * im_height)
draw.line([(left, top), (left, bottom), (right, bottom), (right, top),
(left, top)],
width=thickness,
fill=color)
# If the total height of the display strings added to the top of the bounding
# box exceeds the top of the image, stack the strings below the bounding box
# instead of above.
display_str_heights = [font.getsize(ds)[1] for ds in display_str_list]
# Each display_str has a top and bottom margin of 0.05x.
total_display_str_height = (1 + 2 * 0.05) * sum(display_str_heights)
if top > total_display_str_height:
text_bottom = top
else:
text_bottom = top + total_display_str_height
# Reverse list and print from bottom to top.
for display_str in display_str_list[::-1]:
text_width, text_height = font.getsize(display_str)
margin = np.ceil(0.05 * text_height)
draw.rectangle([(left, text_bottom - text_height - 2 * margin),
(left + text_width, text_bottom)],
fill=color)
draw.text((left + margin, text_bottom - text_height - margin),
display_str,
fill="black",
font=font)
text_bottom -= text_height - 2 * margin
def draw_boxes(image, boxes, class_names, scores, max_boxes=10, min_score=0.1):
"""Overlay labeled boxes on an image with formatted scores and label names."""
colors = list(ImageColor.colormap.values())
try:
font = ImageFont.truetype("/usr/share/fonts/truetype/liberation/LiberationSansNarrow-Regular.ttf",
25)
except IOError:
print("Font not found, using default font.")
font = ImageFont.load_default()
for i in range(min(boxes.shape[0], max_boxes)):
if scores[i] >= min_score:
ymin, xmin, ymax, xmax = tuple(boxes[i])
display_str = "{}: {}%".format(class_names[i].decode("ascii"),
int(100 * scores[i]))
color = colors[hash(class_names[i]) % len(colors)]
image_pil = Image.fromarray(np.uint8(image)).convert("RGB")
draw_bounding_box_on_image(
image_pil,
ymin,
xmin,
ymax,
xmax,
color,
font,
display_str_list=[display_str])
np.copyto(image, np.array(image_pil))
return image | site/zh-cn/hub/tutorials/object_detection.ipynb | tensorflow/docs-l10n | apache-2.0 |
Apply the module
Load a public image from Open Images v4, save it locally, and display it. | # By Heiko Gorski, Source: https://commons.wikimedia.org/wiki/File:Naxos_Taverna.jpg
image_url = "https://upload.wikimedia.org/wikipedia/commons/6/60/Naxos_Taverna.jpg" #@param
downloaded_image_path = download_and_resize_image(image_url, 1280, 856, True) | site/zh-cn/hub/tutorials/object_detection.ipynb | tensorflow/docs-l10n | apache-2.0 |
Pick an object detection module and apply it to the downloaded image. Available modules:
FasterRCNN+InceptionResNet V2: high accuracy.
ssd+mobilenet V2: small and fast. | module_handle = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1" #@param ["https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1", "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"]
detector = hub.load(module_handle).signatures['default']
def load_img(path):
img = tf.io.read_file(path)
img = tf.image.decode_jpeg(img, channels=3)
return img
def run_detector(detector, path):
img = load_img(path)
converted_img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]
start_time = time.time()
result = detector(converted_img)
end_time = time.time()
result = {key:value.numpy() for key,value in result.items()}
print("Found %d objects." % len(result["detection_scores"]))
print("Inference time: ", end_time-start_time)
image_with_boxes = draw_boxes(
img.numpy(), result["detection_boxes"],
result["detection_class_entities"], result["detection_scores"])
display_image(image_with_boxes)
run_detector(detector, downloaded_image_path) | site/zh-cn/hub/tutorials/object_detection.ipynb | tensorflow/docs-l10n | apache-2.0 |
More images
Perform inference on some additional images, with time tracking. | image_urls = [
# Source: https://commons.wikimedia.org/wiki/File:The_Coleoptera_of_the_British_islands_(Plate_125)_(8592917784).jpg
"https://upload.wikimedia.org/wikipedia/commons/1/1b/The_Coleoptera_of_the_British_islands_%28Plate_125%29_%288592917784%29.jpg",
# By Américo Toledano, Source: https://commons.wikimedia.org/wiki/File:Biblioteca_Maim%C3%B3nides,_Campus_Universitario_de_Rabanales_007.jpg
"https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg/1024px-Biblioteca_Maim%C3%B3nides%2C_Campus_Universitario_de_Rabanales_007.jpg",
# Source: https://commons.wikimedia.org/wiki/File:The_smaller_British_birds_(8053836633).jpg
"https://upload.wikimedia.org/wikipedia/commons/0/09/The_smaller_British_birds_%288053836633%29.jpg",
]
def detect_img(image_url):
start_time = time.time()
image_path = download_and_resize_image(image_url, 640, 480)
run_detector(detector, image_path)
end_time = time.time()
print("Inference time:",end_time-start_time)
detect_img(image_urls[0])
detect_img(image_urls[1])
detect_img(image_urls[2]) | site/zh-cn/hub/tutorials/object_detection.ipynb | tensorflow/docs-l10n | apache-2.0 |
2. Different ways of learning from data
Now let's say we want to predict the type of flower for a new given data point. There are multiple ways to solve this problem. We will consider these two ways in some detail:
We could find a function which can directly map an input value to its class label.
We can find the probability distributions over the variables and then use this distribution to answer queries about the new data point.
There are a lot of algorithms for finding a mapping function. For example, linear regression tries to find a linear equation that explains the data. A support vector machine tries to find a plane that separates the data points. A decision tree tries to find a set of simple greater-than and less-than rules to classify the data. Let's try applying a decision tree to this dataset.
We can plot the data and it looks something like this: | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Adding a little bit of noise so that it's easier to visualize
data_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2))
plt.scatter(data_with_noise.length, data_with_noise.width, c=[ "bgr"[k] for k in data.iloc[:,2] ], s=200, alpha=0.3) | notebooks/1. Introduction to Probabilistic Graphical Models.ipynb | pgmpy/pgmpy_notebook | mit |
In the plot we can easily see that the blue points are concentrated in the top-left corner, the green ones in the bottom-left and the red ones in the top-right.
Now let's try to train a Decision Tree on this data. | from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data[['length', 'width']].values, data.type.values, test_size=0.2)
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, y_train)
classifier.predict(X_test)
classifier.score(X_test, y_test) | notebooks/1. Introduction to Probabilistic Graphical Models.ipynb | pgmpy/pgmpy_notebook | mit |
So, in this case we got a classification accuracy of 60%.
Now moving on to our second approach using a probabilistic model.
The most obvious way to do this classification task would be to compute a Joint Probability Distribution over all these variables and then marginalize and reduce over these according to our new data point to get the probabilities of classes. | X_train, X_test = data[:120], data[120:]
X_train
# Computing the joint probability distribution over the training data
joint_prob = X_train.groupby(['length', 'width', 'type']).size() / 120
joint_prob
# Predicting values
# Selecting just the feature variables.
X_test_features = X_test.iloc[:, :2].values
X_test_actual_results = X_test.iloc[:, 2].values
predicted_values = []
for i in X_test_features:
predicted_values.append(joint_prob[i[0], i[1]].idxmax())
predicted_values = np.array(predicted_values)
predicted_values
# Comparing results with the actual data.
predicted_values == X_test_actual_results
score = (predicted_values == X_test_actual_results).sum() / 30
print(score) | notebooks/1. Introduction to Probabilistic Graphical Models.ipynb | pgmpy/pgmpy_notebook | mit |
Basic Concepts I
What is "learning from data"?
In general, learning from data is a scientific discipline concerned with the design and development of algorithms that allow computers to infer, from data, a model that provides a compact representation (unsupervised learning) and/or good generalization (supervised learning).
This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data.
Most of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If an analytical solution existed, it would be the preferred one, but for most problems it does not.
So, the most common strategy for learning from data is to search for the set of model parameters that minimizes a mathematical objective defined on the data. This is called optimization.
The most important technique for solving optimization problems is gradient descent.
Preliminary: Nelder-Mead method for function minimization.
See "An Interactive Tutorial on Numerical Optimization": http://www.benfrederickson.com/numerical-optimization/
The simplest way to try to minimize a function $f(x)$ would be to sample two points relatively near each other and just repeatedly take a step down, away from the larger value.
The Nelder-Mead method dynamically adjusts the step size based on the loss at the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise, if the new point is worse, it contracts the step size to converge around the minimum. The usual settings are to halve the step size when contracting and to double it when expanding.
This method can be easily extended to higher-dimensional problems: all that's required is taking one more point than there are dimensions, and then reflecting the worst point around the rest of the points to take a step down.
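As a rough sketch of this expand/contract idea in one dimension (this is not the full Nelder-Mead simplex method, and the test function and constants below are arbitrary choices for illustration):

```python
import numpy as np

def adaptive_step_minimize(f, x0, step=1.0, iterations=60):
    """Toy 1-D minimizer: move downhill, double the step after an improvement, halve it otherwise."""
    x, fx = x0, f(x0)
    for _ in range(iterations):
        candidates = [x - step, x + step]
        values = [f(c) for c in candidates]
        best = int(np.argmin(values))
        if values[best] < fx:
            x, fx = candidates[best], values[best]   # accept the new point and expand the step
            step *= 2.0
        else:
            step *= 0.5                              # reject and contract the step
    return x

print(adaptive_step_minimize(lambda x: (x - 2.0)**2, x0=10.0))  # approaches 2.0
```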
Gradient descent (for hackers) for function minimization: 1-D
Let's suppose that we have a function $f: \Re \rightarrow \Re$. For example:
$$f(x) = x^2$$
Our objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative.
The derivative of $f$ of a variable $x$, $f'(x)$ or $\frac{\mathrm{d}f}{\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to the change of the variable. It is defined as the following limit:
$$ f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$
The derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output:
$$ f(x + h) \approx f(x) + h f'(x)$$ | # numerical derivative at a point x
def f(x):
return x**2
def fin_dif(x, f, h = 0.00001):
'''
This method returns the derivative of f at x
by using the finite difference method
'''
return (f(x+h) - f(x))/h
x = 2.0
print "{:2.4f}".format(fin_dif(x,f)) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
The limit as $h$ approaches zero, if it exists, should represent the slope of the tangent line to $(x, f(x))$.
For values that are not zero it is only an approximation. | for h in np.linspace(0.0, 1.0 , 5):
print "{:3.6f}".format(f(5+h)), "{:3.6f}".format(f(5)+h*fin_dif(5,f))
x = np.linspace(-1.5,-0.5, 100)
f = [i**2 for i in x]
plt.plot(x,f, 'r-')
plt.plot([-1.5, -0.5], [2, 0.0], 'k-', lw=2)
plt.plot([-1.4, -1.0], [1.96, 1.0], 'b-', lw=2)
plt.plot([-1],[1],'o')
plt.plot([-1.4],[1.96],'o')
plt.text(-1.0, 1.2, r'$x,f(x)$')
plt.text(-1.4, 2.2, r'$(x-h),f(x-h)$')
plt.gcf().set_size_inches((12,6))
plt.grid()
plt.show | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
It can be shown that the “centered difference formula” is better when computing numerical derivatives:
$$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h} $$
The error in the “finite difference” approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of the “centered difference” the error is $O(h^2)$.
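A quick numerical check of those two error orders, using a cubic so that neither formula is exact (the function and step sizes are arbitrary choices for the demo):

```python
def g(x):
    return x**3          # g'(x) = 3x^2, so g'(3) = 27

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

for h in [1e-1, 1e-2, 1e-3]:
    print(h,
          abs(forward_diff(g, 3.0, h) - 27.0),    # error shrinks roughly like h
          abs(centered_diff(g, 3.0, h) - 27.0))   # error shrinks roughly like h^2
```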
The derivative tells us how to change $x$ in order to make a small improvement in $f$.
Then, we can follow these steps to decrease the value of the function:
Start from a random $x$ value.
Compute the derivative $f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h}$.
Walk a small step in the opposite direction of the derivative, because we know that $f(x - h \mbox{ sign}(f'(x)))$ is less than $f(x)$ for small enough $h$.
The search for the minimum ends when the derivative is zero, because then we have no more information about which direction to move. $x$ is a critical or stationary point if $f'(x)=0$.
A minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points.
There is a third class of critical points: saddle points.
If $f$ is a convex function, this should be the minimum (maximum) of our functions. In other cases it could be a local minimum (maximum) or a saddle point. | W = 400
H = 250
bp.output_notebook()
x = np.linspace(-15,15,100)
y = x**2
TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
s1 = bp.figure(width=W, plot_height=H,
title='Local minimum of function',
tools=TOOLS)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.circle(0, 0, size =10, color="orange")
s1.title_text_font_size = '12pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1)
x = np.linspace(-15,15,100)
y = -x**2
TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
s1 = bp.figure(width=W, plot_height=H,
title='Local maximum of function',
tools=TOOLS)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.circle(0, 0, size =10, color="orange")
s1.title_text_font_size = '12pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1)
x = np.linspace(-15,15,100)
y = x**3
TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
s1 = bp.figure(width=W, plot_height=H,
title='Saddle point of function',
tools=TOOLS)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.circle(0, 0, size =10, color="orange")
s1.title_text_font_size = '12pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
There are two problems with numerical derivatives:
+ It is approximate.
+ It is very slow to evaluate (two function evaluations: $f(x + h) , f(x - h)$ ).
Our knowledge from Calculus could help!
We know that we can get an analytical expression of the derivative for some functions.
For example, let's suppose we have a simple quadratic function, $f(x)=x^2−6x+5$, and we want to find the minimum of this function.
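As a side note, a computer algebra package such as SymPy (assumed to be available; it is not used elsewhere in this notebook) can produce the analytical derivative used in the first approach below:

```python
import sympy as sp

w = sp.symbols('x')          # symbolic variable (named w here to avoid clobbering the numeric x)
expr = w**2 - 6*w + 5
print(sp.diff(expr, w))               # 2*x - 6
print(sp.solve(sp.diff(expr, w), w))  # [3], the critical point derived below
```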
First approach
We can solve this analytically using Calculus, by finding the derivative $f'(x) = 2x-6$ and setting it to zero:
\begin{equation}
\begin{split}
2x-6 & = & 0 \\
2x & = & 6 \\
x & = & 3
\end{split}
\end{equation} | x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
s1 = bp.figure(width=W, plot_height=H,
tools=TOOLS)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.circle(3, 3**2 - 6*3 + 5, size =10, color="orange")
s1.title_text_font_size = '12pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Second approach
To find the local minimum using gradient descend: you start at a random point, and move into the direction of steepest descent relative to the derivative:
Start from a random $x$ value.
Compute the derivative $f'(x)$ analytically.
Walk a small step in the opposite direction of the derivative.
In this example, let's suppose we start at $x=15$. The derivative at this point is $2×15−6=24$.
Because we're using gradient descent, we need to subtract the gradient from our $x$-coordinate: $f(x - f'(x))$. However, notice that $15-24$ gives us $-9$, clearly overshooting our target of $3$. | x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
start = 15
TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
s1 = bp.figure(width=W, plot_height=H,
tools=TOOLS)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.circle(start, start**2 - 6*start + 5, size =10, color="orange")
d = 2 * start - 6
end = start - d
s1.circle(end, end**2 - 6*end + 5, size =10, color="red")
s1.title_text_font_size = '12pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
To fix this, we multiply the gradient by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give you the right result (by overshooting) or even fail to converge.
In this example, we'll set the step size to 0.01, which means we'll subtract $24×0.01$ from $15$, which is $14.76$.
This is now our new temporary local minimum: We continue this method until we either don't see a change after we subtracted the derivative step size, or until we've completed a pre-set number of iterations. | old_min = 0
temp_min = 15
step_size = 0.01
precision = 0.0001
def f_derivative(x):
import math
return 2*x -6
mins = []
cost = []
while abs(temp_min - old_min) > precision:
old_min = temp_min
gradient = f_derivative(old_min)
move = gradient * step_size
temp_min = old_min - move
cost.append((3-temp_min)**2)
mins.append(temp_min)
# rounding the result to 2 digits because of the step size
print "Local minimum occurs at {:3.2f}.".format(round(temp_min,2)) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
An important feature of gradient descent is that there should be a visible improvement over time. In this example, we simply plotted the squared distance between the local-minimum estimate calculated by gradient descent and the true local minimum (the cost list) against the iteration at which it was computed. As we can see, the distance gets smaller over time, but barely changes in later iterations. | TOOLS = [WheelZoomTool(), ResetTool(), PanTool()]
x, y = (zip(*enumerate(cost)))
s1 = bp.figure(width=W,
height=H,
title='Squared distance to true local minimum',
# title_text_font_size='14pt',
tools=TOOLS,
x_axis_label = 'Iteration',
y_axis_label = 'Distance'
)
s1.line(x, y, color="navy", alpha=0.5, line_width=3)
s1.title_text_font_size = '16pt'
s1.yaxis.axis_label_text_font_size = "14pt"
s1.xaxis.axis_label_text_font_size = "14pt"
bp.show(s1) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
From derivatives to gradient: $n$-dimensional function minimization.
Let's consider a $n$-dimensional function $f: \Re^n \rightarrow \Re$. For example:
$$f(\mathbf{x}) = \sum_{n} x_n^2$$
Our objective is to find the argument $\mathbf{x}$ that minimizes this function.
The gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function.
The gradient points in the direction of the greatest rate of increase of the function.
$$\nabla {f} = (\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n})$$ | def f(x):
return sum(x_i**2 for x_i in x)
def fin_dif_partial_centered(x, f, i, h=1e-6):
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(w2))/(2*h)
def fin_dif_partial_old(x, f, i, h=1e-6):
w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
return (f(w1) - f(x))/h
def gradient_centered(x, f, h=1e-6):
return[round(fin_dif_partial_centered(x,f,i,h), 10) for i,_ in enumerate(x)]
def gradient_old(x, f, h=1e-6):
return[round(fin_dif_partial_old(x,f,i,h), 10) for i,_ in enumerate(x)]
x = [1.0,1.0,1.0]
print(f(x), gradient_centered(x,f))
print(f(x), gradient_old(x,f)) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference between the new solution and the old solution is less than a tolerance value. | # choosing a random vector
import random
import numpy as np
x = [random.randint(-10,10) for i in range(3)]
x
def step(x,grad,alpha):
return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]
def euc_dist(v1, v2):
    # Euclidean distance helper (not defined in the original excerpt, but used below)
    return np.linalg.norm(np.array(v1) - np.array(v2))

tol = 1e-15
alpha = 0.01
while True:
grad = gradient_centered(x,f)
next_x = step(x,grad,alpha)
if euc_dist(next_x,x) < tol:
break
x = next_x
print [round(i,10) for i in x] | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Learning from data
In general, we have:
A dataset $(\mathbf{x},y)$.
A target function $f_\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\mathbf{w}$.
The gradient of the target function, $g_f$.
In the most common case $f$ represents the errors from a data representation model $M$. To fit the model is to find the optimal parameters $\mathbf{w}$ that minimize the following expression:
$$ f_\mathbf{w} = \sum_{i} (y_i - M(\mathbf{x}_i,\mathbf{w}))^2 $$
For example, $(\mathbf{x},y)$ can represent:
$\mathbf{x}$: the behavior of a "Candy Crush" player; $y$: monthly payments.
$\mathbf{x}$: sensor data about your car engine; $y$: probability of engine error.
$\mathbf{x}$: financial data of a bank customer; $y$: customer rating.
If $y$ is a real value, it is called a regression problem.
If $y$ is binary/categorical, it is called a classification problem.
Let's suppose that $M(\mathbf{x},\mathbf{w}) = \mathbf{w} \cdot \mathbf{x}$.
Batch gradient descent
We can implement gradient descent in the following way (batch gradient descent): | # f = 2x
x = range(100)
y = [2*i for i in x]
# f_target = Sum (y - wx)**2
def target_f(x,y,w):
import numpy as np
return np.sum((np.array(y) - np.array(x) * w)**2.0)
# gradient_f = Sum 2wx**2 - 2xy
def gradient_f(x,y,w):
import numpy as np
return np.sum(2*w*(np.array(x)**2) - 2*np.array(x)*np.array(y))
def step(w,grad,alpha):
return w - alpha * grad
def min_batch(target_f, gradient_f, x, y, toler = 1e-6):
import random
alphas = [100, 10, 1, 0.1, 0.001, 0.00001]
w = random.random()
val = target_f(x,y,w)
print "First w:", w, "First Val:", val, "\n"
i = 0
while True:
i += 1
gradient = gradient_f(x,y,w)
next_ws = [step(w, gradient, alpha) for alpha in alphas]
next_vals = [target_f(x,y,w) for w in next_ws]
min_val = min(next_vals)
next_w = next_ws[next_vals.index(min_val)]
next_val = target_f(x,y,next_w)
print i, "w: {:4.4f}".format(w), "Val:{:4.4f}".format(val), "Gradient:", gradient
if (abs(val - next_val) < toler) or (i>200):
return w
else:
w, val = next_w, next_val
min_batch(target_f, gradient_f, x, y)
# Exercise:
# 1. Consider a set of 100 data points and explain the behavior of the algorithm.
# 2. How could we fix this behavior? | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Stochastic Gradient Descent
The last function evaluates the whole dataset $(\mathbf{x}_i,y_i)$ at every step.
If the dataset is large, this strategy is too costly. In this case we will use a strategy called SGD (Stochastic Gradient Descent).
When learning from data, the cost function is additive: it is computed by adding sample reconstruction errors.
Then, we can estimate the gradient (and move towards the minimum) by using only one data sample (or a small subsample).
Thus, we will find the minimum by iterating this gradient estimation over the dataset.
A full iteration over the dataset is called an epoch. During an epoch, the data must be used in a random order.
If we apply this method we have some theoretical guarantees to find the minimum. | import numpy as np
x = range(10)
y = [2*i for i in x]
data = list(zip(x, y))  # materialize so it can be indexed and reused
def in_random_order(data):
import random
indexes = [i for i,_ in enumerate(data)]
random.shuffle(indexes)
for i in indexes:
yield data[i]
for (x_i,y_i) in in_random_order(data):
    print(x_i, y_i)
def gradient_f_SGD(x,y,w):
import numpy as np
return 2*w*(np.array(x)**2) - 2*np.array(x)*np.array(y)
def SGD(target_f, gradient_f, x, y, alpha_0=0.01):
import numpy as np
import random
    data = list(zip(x, y))  # materialize so it can be iterated repeatedly
w = random.random()
alpha = alpha_0
min_w, min_val = float('inf'), float('inf')
iteration_no_increase = 0
while iteration_no_increase < 100:
val = sum(target_f(x_i, y_i, w) for x_i,y_i in data)
if val < min_val:
min_w, min_val = w, val
iteration_no_increase = 0
alpha = alpha_0
else:
iteration_no_increase += 1
alpha *= 0.9
for x_i, y_i in in_random_order(data):
gradient_i = gradient_f(x_i, y_i, w)
w = np.array(w) - (alpha * np.array(gradient_i))
return min_w
print "w:", SGD(target_f, gradient_f_SGD, x, y, 0.01) | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Exercise: Gradient Descent and Linear Regression
The linear regression model assumes a linear relationship between data:
$$ y_i = w_1 x_i + w_0 $$
Let's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$: | import numpy as np
x = np.random.uniform(0,1,20)
def f(x): return x*2
noise_variance =0.2
noise = np.random.randn(x.shape[0])*noise_variance
y = f(x) + noise
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$t$', fontsize=15)
plt.ylim([0,2])
plt.title('inputs (x) vs targets (y)')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,6))
plt.show()
# Our model y = x * w
def nn(x, w): return x * w
# Our cost function
def cost(y, t): return ((t - y)**2).sum()
ws = np.linspace(0, 4, num=100)
cost_ws = np.vectorize(lambda w: cost(nn(x, w) , y))(ws)
# Ploting the cost function
plt.plot(ws, cost_ws, 'r-')
plt.xlabel('$w$', fontsize=15)
plt.ylabel('Cost', fontsize=15)
plt.title('Cost vs. $w$')
plt.grid()
plt.gcf().set_size_inches((10,6))
plt.show() | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
Complete the following code and look at the plot of the first gradient descent updates. Explore the behavior of the proposed learning rates. | def gradient(w, x, y):
return 2 * x * (nn(x, w) - y)
def step(w_k, x, y, learning_rate):
return learning_rate * gradient(w_k, x, y).sum()
w = 0.01
# define a learning_rate
learning_rate = 0.1
nb_of_iterations = 20
w_cost = [(w, cost(nn(x, w), y))]
for i in range(nb_of_iterations):
# Here your code
w_cost.append((w, cost(nn(x, w), y)))
for i in range(0, len(w_cost)):
print('w({}): {:.4f} \t cost: {:.4f}'.format(i, w_cost[i][0], w_cost[i][1]))
# Plotting the first gradient descent updates
plt.plot(ws, cost_ws, 'r-') # Plot the error curve
# Plot the updates
for i in range(1, len(w_cost)-2):
w1, c1 = w_cost[i-1]
w2, c2 = w_cost[i]
plt.plot(w1, c1, 'bo')
plt.plot([w1, w2],[c1, c2], 'b-')
plt.text(w1, c1+0.5, '$w({})$'.format(i))
# Plot the last weight, axis, and show figure
w1, c1 = w_cost[len(w_cost)-3]
plt.plot(w1, c1, 'bo')
plt.text(w1, c1+0.5, '$w({})$'.format(nb_of_iterations))
plt.xlabel('$w$', fontsize=15)
plt.ylabel('$\\xi$', fontsize=15)
plt.title('Gradient descent updates plotted on cost function')
plt.grid()
plt.gcf().set_size_inches((10,6))
plt.show()
w = 0
nb_of_iterations = 10
for i in range(nb_of_iterations):
dw = step(w, x, y, learning_rate)
w = w - dw
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line')
plt.xlabel('input x')
plt.ylabel('target t')
plt.ylim([0,2])
plt.title('input vs. target')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10,6))
plt.show() | 1. Basic Concepts I.ipynb | jvitria/DeepLearningBBVA2016 | mit |
A column matrix (column vector) in NumPy.
$$X =
\begin{pmatrix}
3 \\
4 \\
5 \\
6
\end{pmatrix}$$ | x = np.array([[3,4,5,6]]).T
x | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
And a row matrix (row vector) in NumPy.
$$ X =
\begin{pmatrix}
3 & 4 & 5 & 6
\end{pmatrix}$$ | x = np.array([[3,4,5,6]])
x | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
Objects of type matrix
We have already discussed general matrices in the previous documents:
$$A_{m,n} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m,1} & a_{m,2} & \cdots & a_{m,n}
\end{pmatrix}$$
Besides objects of type array, there is a specialized matrix object for which the operations * (multiplication) and **-1 (inversion) are defined in the proper matrix sense (as opposed to the element-wise operations used for array objects). | x = np.array([1,2,3,4,5,6,7,8,9]).reshape(3,3)
x
X = np.matrix(x)
X | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
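A short check of the difference described above: * is element-wise for an array but a true matrix product for a matrix, and **-1 is the matrix inverse (shown on a small non-singular example, since the 3×3 matrix of 1..9 above is singular). Note that np.matrix is considered legacy in recent NumPy releases, so this is purely illustrative.

```python
print(x * x)   # ndarray: element-wise product
print(X * X)   # matrix: matrix multiplication, same as x @ x

M = np.matrix([[2.0, 1.0],
               [1.0, 3.0]])
print(M ** -1)        # matrix inverse
print(M * (M ** -1))  # approximately the 2x2 identity
```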
Operations on matrices
Determinant | a = np.array([[3,-9],[2,5]])
np.linalg.det(a) | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
Inverse matrix | A = np.array([[-4,-2],[5,5]])
A
invA = np.linalg.inv(A)
invA
np.round(np.dot(A,invA)) | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
Because $AA^{-1} = A^{-1}A = I$.
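A quick numerical confirmation of that identity for the matrix above:

```python
print(np.allclose(np.dot(A, invA), np.eye(2)))  # A A^{-1} == I
print(np.allclose(np.dot(invA, A), np.eye(2)))  # A^{-1} A == I
```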
Eigenvalues and eigenvectors | a = np.diag((1, 2, 3))
a
w,v = np.linalg.eig(a)
w
v | Cwiczenia/01/Uczenie Maszynowe - Ćwiczenia 1.3 - NumPy, algebra liniowa.ipynb | emjotde/UMZ | cc0-1.0 |
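To tie this back to the definition $A v = \lambda v$, we can check each eigenpair returned above (a small sanity check using the a, w and v computed in the previous cell):

```python
for i in range(len(w)):
    # columns of v are the eigenvectors; w holds the matching eigenvalues
    print(np.allclose(np.dot(a, v[:, i]), w[i] * v[:, i]))
```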
The resulting dictionary accepted_papers contains a list of the accepted papers for each conference. | for conference, papers in sorted(accepted_papers.items()):
print('{conference} includes {papers} accepted papers.'.format(
conference=conference, papers=len(papers))) | paper_selection.ipynb | aaai2018-paperid-62/aaai2018-paperid-62 | mit |
Selection
A sample population of 100 papers is selected from each conference using Python's pseudo-random number module. As per the documentation on random.sample "The resulting list is in selection order so that all sub-slices will also be valid random samples." The seed is set to the unix timestamp for Jan 10 14:46:40 2017 UTC: 1484059600. | import random
random.seed(1484059600)
k = 100
samples = {}
# The order is set explicitly due to originally not sorting
# accepted_papers.items().
conferences = ['aaai-16', 'aaai-14', 'ijcai-13', 'ijcai-16']
for conference in conferences:
samples[conference] = random.sample(accepted_papers[conference], k) | paper_selection.ipynb | aaai2018-paperid-62/aaai2018-paperid-62 | mit |
Note that when the samples were originally generated, the dictionary was iterated via Python 3's dict.items() view, whose order is not guaranteed. Because the original generation was not sorted, the iteration order now needs to be set explicitly so that future runs generate the same original sample populations.
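A small sketch of why the call order matters: with a fixed seed, each random.sample draw depends on how many draws preceded it, so changing the conference order changes which items are selected (the lists below are toy stand-ins, not the real accepted-paper lists).

```python
import random

toy = {'conf-a': list(range(100)), 'conf-b': list(range(100, 200))}

random.seed(1484059600)
a_first = {c: random.sample(toy[c], 3) for c in ['conf-a', 'conf-b']}

random.seed(1484059600)
b_first = {c: random.sample(toy[c], 3) for c in ['conf-b', 'conf-a']}

print(a_first['conf-a'] == b_first['conf-a'])  # False: same seed, different call order
```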
The generated random samples are permanently stored to files in the ../data/ directory (GitHub: https://github.com/sidgek/msoppgave/tree/master/data/). | for conference, papers in samples.items():
outputfile = 'data/sampled_{conference}'.format(conference=conference)
with open(outputfile, 'w') as f:
for line in papers:
f.write(line) | paper_selection.ipynb | aaai2018-paperid-62/aaai2018-paperid-62 | mit |
Versions
Here's a generated output to keep track of software versions used to run this Jupyter notebook. | import IPython
import platform
print('Python version: {}'.format(platform.python_version()))
print('IPython version: {}'.format(IPython.__version__)) | paper_selection.ipynb | aaai2018-paperid-62/aaai2018-paperid-62 | mit |
Set where to write Lya ensemble dflux. | outdir = '/global/homes/m/mjwilson/sandbox/lya-signal/desimodel/0.14.0/data/tsnr' | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Load a set of Vi'd Lya QSOs. | dat = fits.open('/project/projectdirs/desi/spectro/redux/cascades/tiles/80609/deep/coadd-0-80609-deep.fits')
dat.info()
vi = pandas.read_csv('/project/projectdirs/desi/sv/vi/TruthTables/Blanc/QSO/desi-vi_QSO_tile80609_nightdeep_merged_all_210210_ADDING_object_info.csv')
vi
isin = (vi['best_spectype'] == 'QSO') & (vi['best_quality'] >= 2.5) & (vi['best_z'] >= 2.1)
vi = vi[isin]
vi
tids = vi['TARGETID']
gauss_kernel = Gaussian1DKernel(15)
isin = np.isin(dat['FIBERMAP'].data['TARGETID'], tids)
fmap_ids = dat['FIBERMAP'].data['TARGETID'][isin]
nin = np.count_nonzero(fmap_ids)
gmags = 22.5 - 2.5*np.log10(dat['FIBERMAP'].data['FLUX_G'][isin] / mwdust_transmission(dat['FIBERMAP'].data['EBV'][isin], 'G', dat['FIBERMAP'].data['PHOTSYS'][isin]))
gmags | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Our QSOs | fig, axes = plt.subplots(nin, 1, figsize=(5, 5 * nin))
for band in ['B','R','Z']:
for i, x in enumerate(dat['{}_FLUX'.format(band)].data[isin]):
axes[i].plot(dat['{}_WAVELENGTH'.format(band)].data, convolve(x, gauss_kernel), lw=0.5)
axes[i].set_ylim(bottom=-0.5) | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Take closest to g=22 to be our reference. | idx = np.where(np.abs(gmags - 22.) == np.abs(gmags - 22.).min())[0][0]
# Force 7
idx = 7
# Closest to 22.
master_fluxes = {'gmag': gmags[idx], 'tid': fmap_ids[idx]}
for band in ['B', 'R', 'Z']:
master_fluxes[band] = {'wave': dat['{}_WAVELENGTH'.format(band)].data,
'smoothflux': convolve(dat['{}_FLUX'.format(band)].data[isin][idx], gauss_kernel),
'ivar': dat['{}_IVAR'.format(band)].data[isin][idx]}
master_fluxes['tid']
master_fluxes['gmag']
vi[vi['TARGETID'] == master_fluxes['tid']]
master_fluxes['z'] = vi[vi['TARGETID'] == master_fluxes['tid']]['best_z']
master_fluxes['continuum'] = 0.43
pl.plot(master_fluxes['B']['wave'], master_fluxes['B']['smoothflux'])
pl.axhline(master_fluxes['continuum'], c='k', lw=0.5)
pl.xlabel('Wavelength [A]')
pl.ylabel('1.e-17 ergs/s/cm2/A') | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Later we use this (by eye) 'continuum' as our asymptotic 'signal' normalization at the blue end.
Get a QSO n(z) | # https://desi.lbl.gov/svn/code/desimodel/tags/0.14.0/data/targets/nz_qso.dat;
# Number per sq. deg. per dz=0.1
# Note: Cascades
zlo, zhi, Nz = np.loadtxt('/global/common/software/desi/cori/desiconda/20200801-1.4.0-spec/code/desimodel/0.14.0/data/targets/nz_qso.dat', unpack=True)
zmid = 0.5 * (zlo + zhi)
Nz /= Nz.max()
pl.plot(zmid, Nz, c='k', lw=0.5)
pl.xlabel('z')
zs = np.random.uniform(0.0, 5.0, 500000)
zs = np.sort(zs)
# pl.hist(zs, bins=np.arange(0.0, 5.0, 0.1))
draws = np.random.uniform(0.0, 1.0, 500000)
idx = np.digitize(zs, bins=np.arange(0.0, 5.1, 0.1))
probs = np.zeros_like(idx, dtype=np.float)
for i, uid in enumerate(np.unique(idx)[:-1]):
probs[idx == uid] = Nz[i]
draws
probs
isin = draws <= probs
qso_zs = zs[isin] | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Here we've drawn an ensemble of zs from this distribution. | pl.plot(zmid, 5000. * Nz, c='k', lw=0.5)
pl.hist(qso_zs, bins=np.arange(0.0, 5.0, 0.05), alpha=0.5)
pl.xlabel('z')
lya_zs = qso_zs[qso_zs > 2.1]
lya_zs
# lya_zs = lya_zs[:2]
# 1216. * (1. + lya_zs)
nlya = len(lya_zs) | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Our 'signal' will be unity bluer than Lya for a given redshift (zero otherwise). We then stack across the ensemble. | tracer = 'LYA'
hdr = fits.Header()
hdr['NMODEL'] = nlya
hdr['TRACER'] = tracer
hdr['FILTER'] = 'decam2014-g'
hdr['ZLO'] = 2.1
hdu_list = [fits.PrimaryHDU(header=hdr)]
for band in ['b', 'r', 'z']:
wave = dat['{}_WAVELENGTH'.format(band)].data
nwave = wave[:,None] * np.ones(nlya, dtype=float)[None,:]
weight = np.zeros(shape=(len(wave), nlya), dtype=float)
for i, z in zip(range(nlya), lya_zs):
weight[nwave[:,i] < (1. + z) * 1216., i] = 1.0
mweight = np.mean(weight, axis=1)
zpivot = 2.4
zfactor = (wave / (1. + zpivot) / 1216.)**0.95
zweight = zfactor * mweight
mweight = np.expand_dims(master_fluxes['continuum'] * mweight, axis=0)
zweight = np.expand_dims(master_fluxes['continuum'] * zweight, axis=0)
if band =='b':
pl.plot(wave, mweight[0], c='k', linestyle='--', label='No z weight')
pl.plot(wave, zweight[0], c='k', label='z weight')
else:
pl.plot(wave, mweight[0], c='k', linestyle='--', label='')
pl.plot(wave, zweight[0], c='k', label='')
hdu_list.append(fits.ImageHDU(wave, name='WAVE_{}'.format(band.upper())))
hdu_list.append(fits.ImageHDU(zweight, name='DFLUX_{}'.format(band.upper())))
hdu_list = fits.HDUList(hdu_list)
hdu_list.writeto('{}/tsnr-ensemble-{}.fits'.format(outdir, tracer.lower()), overwrite=True)
pl.xlabel('Wavelength [A]')
pl.ylabel('1.e-17 ergs/s/cm2/A')
pl.legend(frameon=False, loc=1)
print('Written to {}/tsnr-ensemble-{}.fits'.format(outdir, tracer.lower())) | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Finally, here we've used our reference continuum from above as the blue-end normalization and written the ensemble to disk at outdir.
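As a quick sanity check (assuming the file landed in outdir as above), the ensemble can be read back and inspected:

```python
# re-open the file written above and confirm header and HDU contents
lya = fits.open('{}/tsnr-ensemble-{}.fits'.format(outdir, tracer.lower()))
lya.info()
print(lya[0].header['NMODEL'], lya[0].header['TRACER'])
print(lya['WAVE_B'].data.shape, lya['DFLUX_B'].data.shape)
```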
Check against QSO tsnr. | ens = fits.open('/global/common/software/desi/cori/desiconda/20200801-1.4.0-spec/code/desimodel/0.14.0/data/tsnr/tsnr-ensemble-qso.fits')
ens.info()
ens['DFLUX_B'].shape | doc/nb/Lya-tsnr-signal.ipynb | desihub/desispec | bsd-3-clause |
Non-rigid surface deformation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Non-rigid surface deformation is a technique that, among other things, can be used to interactively manipulate meshes or to deform a template mesh to fit to a point-cloud. When manipulating meshes, this can for instance allow users to move the hand of a character, and have the rest of the arm deform in a realistic manner. It is interesting to note that the deformation can also be performed over the scale of parts or the entire mesh.
This notebook illustrates how to use Tensorflow Graphics to perform deformations similar to the one described above.
Setup & Imports
If Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you. | !pip install tensorflow_graphics | tensorflow_graphics/notebooks/non_rigid_deformation.ipynb | tensorflow/graphics | apache-2.0 |
Now that Tensorflow Graphics is installed, let's import everything needed to run the demo contained in this notebook. | import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.deformation_energy import as_conformal_as_possible
from tensorflow_graphics.geometry.representation.mesh import utils as mesh_utils
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.math.optimizer import levenberg_marquardt
from tensorflow_graphics.notebooks import threejs_visualization
from tensorflow_graphics.notebooks.resources import triangulated_stripe | tensorflow_graphics/notebooks/non_rigid_deformation.ipynb | tensorflow/graphics | apache-2.0 |
In this example, we build a mesh that corresponds to a flat and rectangular surface. Using the sliders, you can control the position of the deformation constraints applied to that surface, which respectively correspond to all the points along the left boundary, center, and right boundary of the mesh. | mesh_rest_pose = triangulated_stripe.mesh
connectivity = mesh_utils.extract_unique_edges_from_triangular_mesh(triangulated_stripe.mesh['faces'])
camera = threejs_visualization.build_perspective_camera(
field_of_view=40.0, position=(0.0, -5.0, 5.0))
width = 500
height = 500
_ = threejs_visualization.triangular_mesh_renderer([mesh_rest_pose],
width=width,
height=height,
camera=camera)
###############
# UI controls #
###############
#@title Constraints on the deformed pose { vertical-output: false, run: "auto" }
constraint_1_z = 0 #@param { type: "slider", min: -1, max: 1 , step: 0.05 }
constraint_2_z = -1 #@param { type: "slider", min: -1, max: 1 , step: 0.05 }
constraint_3_z = 0 #@param { type: "slider", min: -1, max: 1 , step: 0.05 }
vertices_rest_pose = tf.Variable(mesh_rest_pose['vertices'])
vertices_deformed_pose = np.copy(mesh_rest_pose['vertices'])
num_vertices = vertices_deformed_pose.shape[0]
# Adds the user-defined constraints
vertices_deformed_pose[0, 2] = constraint_1_z
vertices_deformed_pose[num_vertices // 2, 2] = constraint_1_z
vertices_deformed_pose[num_vertices // 4, 2] = constraint_2_z
vertices_deformed_pose[num_vertices // 2 + num_vertices // 4, 2] = constraint_2_z
vertices_deformed_pose[num_vertices // 2 - 1, 2] = constraint_3_z
vertices_deformed_pose[-1, 2] = constraint_3_z
mesh_deformed_pose = {
'vertices': vertices_deformed_pose,
'faces': mesh_rest_pose['faces']
}
vertices_deformed_pose = tf.Variable(vertices_deformed_pose)
# Builds a camera and render the mesh.
camera = threejs_visualization.build_perspective_camera(
field_of_view=40.0, position=(0.0, -5.0, 5.0))
_ = threejs_visualization.triangular_mesh_renderer([mesh_rest_pose],
width=width,
height=height,
camera=camera)
_ = threejs_visualization.triangular_mesh_renderer([mesh_deformed_pose],
width=width,
height=height,
camera=camera)
geometries = threejs_visualization.triangular_mesh_renderer(
[mesh_deformed_pose], width=width, height=height, camera=camera)
################
# Optimization #
################
def update_viewer_callback(iteration, objective_value, variables):
"""Callback to be called at each step of the optimization."""
geometries[0].getAttribute('position').copyArray(
variables[0].numpy().ravel().tolist())
geometries[0].getAttribute('position').needsUpdate = True
geometries[0].computeVertexNormals()
def deformation_energy(vertices_deformed_pose, rotation):
"""As conformal as possible deformation energy."""
return as_conformal_as_possible.energy(
vertices_rest_pose,
vertices_deformed_pose,
rotation,
connectivity,
aggregate_loss=False)
def soft_constraints(vertices_deformed_pose):
"""Soft constrains forcing results to obey the user-defined constraints."""
weight = 10.0
return (
weight * (vertices_deformed_pose[0, 2] - constraint_1_z),
weight * (vertices_deformed_pose[num_vertices // 2, 2] - constraint_1_z),
weight * (vertices_deformed_pose[num_vertices // 4, 2] - constraint_2_z),
weight * (vertices_deformed_pose[num_vertices // 2 + num_vertices // 4, 2] -
constraint_2_z),
weight *
(vertices_deformed_pose[num_vertices // 2 - 1, 2] - constraint_3_z),
weight * (vertices_deformed_pose[-1, 2] - constraint_3_z),
)
def fitting_energy(vertices_deformed_pose, rotation):
deformation = deformation_energy(vertices_deformed_pose, rotation)
constraints = soft_constraints(vertices_deformed_pose)
return tf.concat((deformation, constraints), axis=0)
rotations = tf.Variable(quaternion.from_euler(np.zeros((num_vertices, 3))))
max_iterations = 15 #@param { isTemplate: true, type: "integer" }
_ = levenberg_marquardt.minimize(
residuals=fitting_energy,
variables=(vertices_deformed_pose, rotations),
max_iterations=int(max_iterations),
callback=update_viewer_callback) | tensorflow_graphics/notebooks/non_rigid_deformation.ipynb | tensorflow/graphics | apache-2.0 |
Applying to Test Dataset | Org_blind_data = pd.read_csv('../data/nofacies_data.csv')
blind_data = Org_blind_data[Org_blind_data["NM_M"]==1]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X1org)
X_train = scl.transform(X1org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y1org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==1,"Facies"] = y_blind + 1 # return the original value (1-9)
blind_data = Org_blind_data[Org_blind_data["NM_M"]==2]
X_blind = blind_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values
well_blind = blind_data['Well Name'].values
depth_blind = blind_data['Depth'].values
X_blind, padded_rows = augment_features(X_blind, well_blind, depth_blind, N_neig=1)
# Scaling
scl = preprocessing.MinMaxScaler().fit(X2org)
X_train = scl.transform(X2org)
X_blind = scl.transform(X_blind)
Y_train = np_utils.to_categorical(y2org, nb_classes)
# Method initialization
model = fDNN(in_dim, nb_classes)
# Training
model.fit(X_train, Y_train, nb_epoch=epoch, batch_size=bats, verbose=0)
# Predict
y_blind = model.predict_classes(X_blind, verbose=0)
y_blind = medfilt(y_blind, kernel_size=5)
Org_blind_data.ix[Org_blind_data["NM_M"]==2,"Facies"] = y_blind + 1 # return the original value (1-9)
Org_blind_data.to_csv("PA_Team_Submission_4-revised.csv")
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'STUART'],
facies_colors)
make_facies_log_plot(
Org_blind_data[Org_blind_data['Well Name'] == 'CRAWFORD'],
facies_colors) | PA_Team/PA_Team_Submission_4-revised.ipynb | seg/2016-ml-contest | apache-2.0 |
Instead of writing s = s + i we could have written s += i (read this as "s is whatever it was before plus the value of i"). So we could rewrite that for loop as: | x = [2,4,6,8,10]
s = 0
for i in x:
s += i
print("The sum of x is:", s) | Mathematical-Notation-Sums-and-Products.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Now what if we wanted to calculate the sum of the reciprocals of each element of x? A simple change to our code give us: | x = [2,4,6,8,10]
s = 0
for i in x:
s += (1/i)
print("The sum of the reciprocals of x is:", s) | Mathematical-Notation-Sums-and-Products.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
To bring things full circle, the equivalent mathematical notation to represent the operation of summing the reciprocals of all the elements of $\mathbf{x}$ would be:
$$
\sum_{i=1}^n \frac{1}{\mathbf{x}_i}
$$
The code above is somewhat fragile in that it's not easily re-usable. What if we wanted to sum the reciprocals of a list called y or z instead of x? We'd have to go through our code example and change each instance of x. That's boring and error prone. Instead let's write a Python function to abstract away the steps: | def sum_of_reciprocals(x):
s = 0
for i in x:
s += (1.0/i)
return s
# test our function with different inputs
x = [2,4,6,8,10]
y = [1,3,5,7,9]
z = [-1,1,-1,1]
print("The sum of the reciprocals of x is:", sum_of_reciprocals(x))
print("The sum of the reciprocals of y is:", sum_of_reciprocals(y))
print("The sum of the reciprocals of z is:", sum_of_reciprocals(z)) | Mathematical-Notation-Sums-and-Products.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
An even more compact way of writing our sum-of-reciprocals operation, one that still uses the built-in sum function, is to use a list comprehension as shown below: | sum_recip_x = sum([(1.0/i) for i in x])
print("The sum of the reciprocals of x is: ", sum_recip_x) | Mathematical-Notation-Sums-and-Products.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
Note that our sum_of_reciprocals function (and our list-comprehension solution) doesn't deal with all possible inputs. What would happen if one of the elements of x were zero (go ahead and try it)? What if we passed a list of strings to the function instead of numbers?
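One way to make the function a little more defensive is sketched below; whether to skip zeros or raise an error is a design choice, and here we raise so the problem is visible.

```python
def sum_of_reciprocals_safe(x):
    s = 0
    for i in x:
        if i == 0:
            raise ValueError("can't take the reciprocal of zero")
        s += (1.0 / i)
    return s

print(sum_of_reciprocals_safe([2, 4, 6, 8, 10]))
```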
Product notation
Now that you (hopefully) understand sum notation, it should be easy to understand product notation. We use product notation to represent the products of the elements of a sequence (i.e. the value we get when we multiply the elements of the sequence). As we'll see later in the course, product notation arises frequently in discussions of probability.
The mathematical shorthand for taking the product of a sequence of numbers is the capital Greek Pi ($\Pi$). In parallel to our first example above, the product of the first ten elements of a sequence $\mathbf{x}$ could be written this way:
$$
\prod_{i=1}^{10} \mathbf{x}_i
$$
Other than the use of $\Pi$ rather than $\Sigma$, this is identical to the sum notation above. As before the notation includes information about the upper and lower bounds of the element indices for which we want to apply the operation.
In a similar manner to what we saw before, we can represent the operation of getting the product of an arbitrary sequence $\mathbf{x}$ of length $n$ as follows:
$$
\prod_{i=1}^{n} \mathbf{x}_i
$$
Products with for loops
Unlike sum, there is no built-in product function in Python (we will see an efficient implementation of the product operation when we get to the numerical Python libraries). However, as we saw above we can use for loops to write our own product function. | def product(x):
p = 1
for i in x:
p *= i # same as p = p * i
return p
x = [2,4,6,8,10]
product(x)
product([(1.0/i) for i in x]) # use list comprehension to get reciprocals of x | Mathematical-Notation-Sums-and-Products.ipynb | Bio204-class/bio204-notebooks | cc0-1.0 |
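As an aside, in Python 3.8 and later (a newer version than this notebook assumes) the standard library also provides math.prod, which performs the same operation:

```python
import math

x = [2, 4, 6, 8, 10]
print(math.prod(x))                     # 3840
print(math.prod(1.0 / i for i in x))    # product of the reciprocals
```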
Label some/all states
In our code structure, the states should be a dictionary: the key is the index in the sequence (e.g. 0, 5) and the value is a one-out-of-n coded array whose kth entry is 1 if the hidden state is k, where n is the total number of states.
In the following example, we assume that the "corr" column gives the correct hidden states. | states = {}
corr = np.array(speed['corr'])
for i in range(len(corr)):
state = np.zeros((2,))
if corr[i] == 'cor':
states[i] = np.array([0,1])
else:
states[i] = np.array([1,0]) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
Set up a simple model manually | # we choose 2 hidden states in this model
SHMM = SupervisedIOHMM(num_states=2)
# we set only one output 'rt' modeled by a linear regression model
SHMM.set_models(model_emissions = [OLS()],
model_transition=CrossEntropyMNL(solver='lbfgs'),
model_initial=CrossEntropyMNL(solver='lbfgs'))
# we set no covariates associated with initial/transitiojn/emission models
SHMM.set_inputs(covariates_initial = [], covariates_transition = [], covariates_emissions = [[]])
# set the response of the emission model
SHMM.set_outputs([['rt']])
# set the data and ground truth states
SHMM.set_data([[speed, states]]) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
See the training results | # the coefficients of the output model for each states
print(SHMM.model_emissions[0][0].coef)
print(SHMM.model_emissions[1][0].coef)
# the scale/dispersion of the output model of each states
print(np.sqrt(SHMM.model_emissions[0][0].dispersion))
print(np.sqrt(SHMM.model_emissions[1][0].dispersion))
# the transition probability from each state
print(np.exp(SHMM.model_transition[0].predict_log_proba(np.array([[]]))))
print(np.exp(SHMM.model_transition[1].predict_log_proba(np.array([[]])))) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
Save the trained model | json_dict = SHMM.to_json('../models/SupervisedIOHMM/')
json_dict
with open('../models/SupervisedIOHMM/model.json', 'w') as outfile:
json.dump(json_dict, outfile, indent=4, sort_keys=True) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
Load back the trained model | SHMM_from_json = SupervisedIOHMM.from_json(json_dict) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
See if the coefficients are any different | # the coefficients of the output model for each states
print(SHMM.model_emissions[0][0].coef)
print(SHMM.model_emissions[1][0].coef) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
Set up the model using a config file, instead of doing it manually | with open('../models/SupervisedIOHMM/config.json') as json_data:
json_dict = json.load(json_data)
SHMM_from_config = SupervisedIOHMM.from_config(json_dict) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
See if the training results are any different? | # the coefficients of the output model for each states
print(SHMM_from_config.model_emissions[0][0].coef)
print(SHMM_from_config.model_emissions[1][0].coef) | examples/notebooks/SupervisedIOHMM.ipynb | Mogeng/IO-HMM | mit |
iter()
The iter() method returns an iterator for the given object.
Syntax:
python
iter(object[, sentinel])
Here, object is the object from which the iterator is constructed. The behavior depends on sentinel: if sentinel is not provided, object must support iteration and iter() simply returns its iterator, whereas if sentinel is provided, object must be callable and each call is treated as the next value. Iteration ends when the returned value equals the sentinel. | class MyDummy(object):
def __init__(self):
self.lst = [1, 2, 3, 4, 5, 6]
self.i = 0
def __call__(self):
ret = self.lst[self.i]
self.i += 1
return ret
d = MyDummy()
for a in iter(d, 3):
print(a, end=" ")
# MyIter is not defined in this excerpt; a minimal iterable stand-in is assumed below
class MyIter(object):
    def __init__(self, lst):
        self.lst = lst
    def __iter__(self):
        return iter(self.lst)

m = MyIter([1, 2, 3, 4, 5, 6])
for a in iter(m):
print(a, end=" ") | Section 2 - Advance Python/Chapter S2.01 - Functional Programming/02_03_iter.ipynb | mayankjohri/LetsExplorePython | gpl-3.0 |
Let's try another example; this time let's take a string. | st = "Welcome to the city of lakes"
for a in iter(st):
print(a, end=" ") | Section 2 - Advance Python/Chapter S2.01 - Functional Programming/02_03_iter.ipynb | mayankjohri/LetsExplorePython | gpl-3.0 |
A non-$2^n$ FFT (10 points)
Now that we have implemented a fast radix-2 algorithm for vectors of length $2^n$, we can write a generic algorithm which can take any length input. This algorithm will check if the length of the input is divisible by 2, if so then it will use the FFT, otherwise it will default to the slower matrix-based DFT. | def generalFFT(x):
"""radix-2 DIT FFT
x: list or array of N values to perform FFT on, can be real or imaginary
"""
ox = np.asarray(x, dtype='complex') # assure the input is an array of complex values
# INSERT: assign a value to N, the size of the FFT
N = #??? 1 point
if N==1: return ox # base case
elif # INSERT: check if the length is divisible by 2, 1 point
# INSERT: do a FFT, use your ditrad2() code here, 3 points
# Hint: your ditrad2() code can be copied here, and will work with only a minor modification
else: # INSERT: if not divisable by 2, do a slow Fourier Transform
return # ??? 1 point | 2_Mathematical_Groundwork/fft_implementation_assignment.ipynb | griffinfoster/fundamentals_of_interferometry | gpl-2.0 |
Create the test table | with oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string) as ora_conn:
cursor = ora_conn.cursor()
# use this drop statement if you need to recreate the table
# cursor.execute("drop table data")
cursor.execute("begin dbms_random.seed(4242); end;")
cursor.execute("""
create table data as
select dbms_random.value * 100 random_value
from dual connect by level <=100
""")
| Oracle_Jupyter/Oracle_histograms.ipynb | LucaCanali/Miscellaneous | apache-2.0 |
Define the query to compute the histogram | table_name = "data" # table or temporary view containing the data
value_col = "random_value" # column name on which to compute the histogram
min = -20 # min: minimum value in the histogram
max = 90 # maximum value in the histogram
bins = 11 # number of histogram buckets to compute
step = (max - min) / bins
query = f"""
with bucketized as (
select width_bucket({value_col}, {min}, {max}, {bins}) as bucket
from {table_name}
),
hist as (
select bucket, count(*) as cnt
from bucketized
group by bucket
),
buckets as (
select rownum as bucket from dual connect by level <= {bins}
)
select
bucket, {min} + (bucket - 1/2) * {step} as value,
nvl(cnt, 0) as count
from hist right outer join buckets using(bucket)
order by bucket
""" | Oracle_Jupyter/Oracle_histograms.ipynb | LucaCanali/Miscellaneous | apache-2.0 |
Fetch the histogram data into a pandas dataframe | import pandas as pd
# query Oracle using ora_conn and put the result into a pandas Dataframe
with oracledb.connect(user=db_user, password=db_pass, dsn=db_connect_string) as ora_conn:
hist_pandasDF = pd.read_sql(query, con=ora_conn)
# Description
#
# BUCKET: the bucket number, range from 1 to bins (included)
# VALUE: midpoint value of the given bucket
# COUNT: number of values in the bucket
hist_pandasDF
# Optionally normalize the event count into a frequency
# dividing by the total number of events
hist_pandasDF["FREQUENCY"] = hist_pandasDF["COUNT"] / sum(hist_pandasDF["COUNT"])
hist_pandasDF | Oracle_Jupyter/Oracle_histograms.ipynb | LucaCanali/Miscellaneous | apache-2.0 |
Histogram plotting
The first plot is a histogram with the event counts (number of events per bin).
The second plot is a histogram of the event frequencies (number of events per bin normalized by the total number of events). | import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["VALUE"]
y = hist_pandasDF["COUNT"]
# bar plot
ax.bar(x, y, width = 3.0, color='red')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event count")
ax.set_title("Distribution of event counts")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
import matplotlib.pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams.update({'font.size': 20, 'figure.figsize': [14,10]})
f, ax = plt.subplots()
# histogram data
x = hist_pandasDF["VALUE"]
y = hist_pandasDF["FREQUENCY"]
# bar plot
ax.bar(x, y, width = 3.0, color='blue')
ax.set_xlabel("Bucket values")
ax.set_ylabel("Event frequency")
ax.set_title("Distribution of event frequencies")
# Label for the resonances spectrum peaks
txt_opts = {'horizontalalignment': 'center',
'verticalalignment': 'center',
'transform': ax.transAxes}
plt.show()
| Oracle_Jupyter/Oracle_histograms.ipynb | LucaCanali/Miscellaneous | apache-2.0 |
Previous submission | senten = input("What's going on? ")
senten = ".".join(senten)
senten = senten.split('.')
print(senten)
for dot, capi in morse.items():
if capi in senten:
print(dot,end=" ")
#dotted = sorted(morse.get(dot))
#print(sorted(morse.get(dot),reverse=True), end=" ")
print(morse.get(dot),end=" ") | midterm/kookmin_midterm_조재환_2.ipynb | initialkommit/kookmin | mit |
In the previous submission, the printed output did not keep the letters and their Morse codes aligned. I wanted to show the Morse code in the order of the input sentence, and after studying a bit more I was able to write that code, so I am submitting it once more.
Revised version | senten = input("What's going on? ") # input the sentence to convert to Morse code
senten = ".".join(senten) # the Morse dictionary is keyed as 'letter': 'Morse code', so the input sentence
                          # gets a '.' inserted between characters with "."join so it can be split into single letters
senten = senten.split('.') # split on '.' to get the individual letters
print(senten)
for word in senten: # looping over senten yields one letter at a time
    for dot, capi in morse.items(): # walk through the Morse code dictionary
        if word in capi: # if the letter matches an alphabet key of the Morse dictionary
            print(capi,"=",dot, end=", ") # print the Morse code for that letter
senten = input("What's going on? ") # input the sentence to convert to Morse code
print(senten)
for word in senten: # looping over a str yields one character at a time
    for dot, capi in morse.items(): # walk through the Morse code dictionary
        if word in capi: # if the character matches an alphabet key of the Morse dictionary
            print(capi,"=",dot, end=", ") # print the Morse code for that letter
sentens = 'IM LATE'
sentens[0]
morse.items() | midterm/kookmin_midterm_조재환_2.ipynb | initialkommit/kookmin | mit |
Examine a single patient | patientunitstayid = 141168
query = query_schema + """
select *
from pasthistory
where patientunitstayid = {}
order by pasthistoryoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.head() | notebooks/pasthistory.ipynb | mit-eicu/eicu-code | mit |
We can make a few observations:
pasthistorypath is a slash delimited (/) hierarchical categorization of the past history recorded
pasthistoryvalue and pasthistoryvaluetext are often identical
pasthistoryoffset is the time of the condition, while pasthistoryenteredoffset is when it was documented, though from above it appears the pasthistoryoffset is not necessarily the start time of the condition
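To see the gap between the two offsets directly, a small illustrative query on the same example stay computes the difference between pasthistoryenteredoffset and pasthistoryoffset (the documentation lag):
python
query = query_schema + """
select patientunitstayid, pasthistoryvalue
  , pasthistoryoffset, pasthistoryenteredoffset
  , pasthistoryenteredoffset - pasthistoryoffset as documentation_lag
from pasthistory
where patientunitstayid = 141168
order by pasthistoryoffset
"""
df_lag = pd.read_sql_query(query, con)
df_lag.head()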
Identifying COPD patients
Let's look for patients who were admitted with a past history of COPD. | dx = 'COPD'
query = query_schema + """
select
pasthistoryvalue, count(*) as n
from pasthistory
where pasthistoryvalue ilike '%{}%'
group by pasthistoryvalue
""".format(dx)
df_copd = pd.read_sql_query(query, con)
df_copd
dx = 'COPD'
query = query_schema + """
select
patientunitstayid, count(*) as n
from pasthistory
where pasthistoryvalue ilike '%{}%'
group by patientunitstayid
""".format(dx)
df_copd = pd.read_sql_query(query, con)
print('{} unit stays with {}.'.format(df_copd.shape[0], dx)) | notebooks/pasthistory.ipynb | mit-eicu/eicu-code | mit |
Hospitals with data available | query = query_schema + """
with t as
(
select distinct patientunitstayid
from pasthistory
)
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct t.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join t
on pt.patientunitstayid = t.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data') | notebooks/pasthistory.ipynb | mit-eicu/eicu-code | mit |
The output then needs to be flattened so it can be used in fully-connected (aka. dense) layers. | net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net) | 13B_Visual_Analysis_MNIST.ipynb | newworldnewlife/TensorFlow-Tutorials | mit |
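As a quick illustration of what flattening does to the tensor shape (a NumPy sketch with an assumed batch of 7x7 feature maps with 64 channels, not the actual layer sizes used here):
python
import numpy as np

# assumed activation shape: [batch, height, width, channels]
net_np = np.zeros((32, 7, 7, 64))

# flattening keeps the batch dimension and collapses the rest into one axis
flat = net_np.reshape(net_np.shape[0], -1)
print(flat.shape)  # (32, 3136), since 7 * 7 * 64 = 3136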
Loss-Function to be Optimized
To make the model better at classifying the input images, we must somehow change the variables of the neural network.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.
TensorFlow has a function for calculating the cross-entropy, which uses the values of the logits-layer because it also calculates the softmax internally, so as to improve numerical stability. | cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits) | 13B_Visual_Analysis_MNIST.ipynb | newworldnewlife/TensorFlow-Tutorials | mit
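As a minimal sketch of what this loss computes (plain NumPy with made-up logits, not the network above): the softmax turns the logits into probabilities, and the cross-entropy is the negative log-probability assigned to the true class.
python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # made-up scores for 3 classes
y_true = np.array([1.0, 0.0, 0.0])   # one-hot label: class 0 is the true class

probs = np.exp(logits) / np.exp(logits).sum()   # softmax
cross_entropy = -np.sum(y_true * np.log(probs))
print(probs, cross_entropy)  # the loss shrinks as probs[0] approaches 1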
With these parameters, the model creates wide AP waveforms that are more reminiscent of muscle cells than neurons.
We now set up a simple optimisation problem with the model. | # First add some noise
sigma = 0.5
noisy = values + np.random.normal(0, sigma, values.shape)
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Noisy values')
plt.plot(times, noisy)
plt.show() | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
Next, we set up a problem. Because this model has multiple outputs (2), we use a MultiOutputProblem. | problem = pints.MultiOutputProblem(model, times, noisy)
score = pints.SumOfSquaresError(problem) | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
Finally, we choose a wide set of boundaries and run! | # Select boundaries
boundaries = pints.RectangularBoundaries([0., 0., 0.], [10., 10., 10.])
# Select a starting point
x0 = [1, 1, 1]
# Perform an optimization
found_parameters, found_value = pints.optimise(score, x0, boundaries=boundaries)
print('Score at true solution:')
print(score(parameters))
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(parameters[k]))
# Plot the results
plt.figure()
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, noisy, '-', alpha=0.25, label='noisy signal')
plt.plot(times, values, alpha=0.4, lw=5, label='original signal')
plt.plot(times, problem.evaluate(found_parameters), 'k--', label='recovered signal')
plt.legend()
plt.show() | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
This shows the parameters are not retrieved entirely correctly, but the traces still strongly overlap.
Sampling with Monomial-gamma HMC
The Fitzhugh-Nagumo model has sensitivities calculated by the forward sensitivities approach, so we can use gradient-based samplers such as Monomial-gamma HMC (slower per iteration, although perhaps not in terms of ESS per second!). | problem = pints.MultiOutputProblem(model, times, noisy)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0, 0, 0, 0, 0],
[10, 10, 10, 20, 20]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
real_parameters1 = np.array(parameters + [sigma, sigma])
xs = [
real_parameters1 * 1.1,
real_parameters1 * 0.9,
real_parameters1 * 1.15,
real_parameters1 * 1.5,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 4, xs, method=pints.MonomialGammaHamiltonianMCMC)
# Add stopping criterion
mcmc.set_max_iterations(200)
mcmc.set_log_interval(1)
# Run in parallel
mcmc.set_parallel(True)
for sampler in mcmc.samplers():
sampler.set_leapfrog_step_size([0.05, 0.2, 0.2, 0.1, 0.1])
sampler.set_leapfrog_steps(10)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
import pints.plot
pints.plot.trace(chains)
plt.show() | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
Print results. | results = pints.MCMCSummary(
chains=chains,
time=mcmc.time(),
parameter_names=['a', 'b', 'c', 'sigma_V', 'sigma_R'],
)
print(results) | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
Plot a few posterior predictive simulations against the data. | import pints.plot
pints.plot.series(np.vstack(chains), problem)
plt.show() | examples/toy/model-fitzhugh-nagumo.ipynb | martinjrobins/hobo | bsd-3-clause |
We will initialize the different values: | E=1.3 # in MPa
h=7.5 # in mm
b=20. # in mm
Lx=55. # in mm
Lyh=60. # in mm
Lyb=45 # in mm
I=b*(h**3)/12 # in mm^4
S=b*h # in mm^2
eps=10**(-3)
g=9.81 | Exercice 2.ipynb | qgoisnard/Exercice-update | mit |
We will now create the nodes and elements of the structure: | nodes= np.array([[0.,0.],[0.,Lyb],[0.,Lyh+Lyb],[Lx/2,Lyh+Lyb],[Lx,Lyh+Lyb],[Lx,Lyb],[Lx,0.]])
elements=np.array([[0,1],[1,5],[1,2],[2,3],[3,4],[4,5],[5,6]])
frame=LinearFrame(nodes,elements)
frame.plot_with_label()
ne = frame.nelements
ndof = frame.ndof
EI = np.ones(ne)*E*I
ES = np.ones(ne)*E*S
f_x = 0*np.ones(7)
f_y = 0*np.ones(7)
frame.set_distributed_loads(f_x, f_y)
frame.set_stiffness(EI, ES)
blocked_dof = np.array([0, 1, 2, ndof-3, ndof-2, ndof-1])
bc_values = np.array([0, 0, 0, 0, 0, 0])
K = frame.assemble_K()
u=np.array([0.,0.,0.,
2.,0.,0.,
5.,0.,0.,
0.,-3.,0.,
-3.,0.,0.,
-2.,0.,0.,
0.,0.,0.])
print (u)
F= np.dot(K,np.transpose(u))
F
m=F[10]/g
m | Exercice 2.ipynb | qgoisnard/Exercice-update | mit |
The attached mass would be approximately 320 g.
Exercise 3
In this exercise, we must create functions inside our frame class to retrieve the normal force (N), the shear (tangential) force (T) and the moment about z (M). | def find_N(self,element):
"""
Returns the normal force of an element.
"""
F = self.assemble_F()
N = F[3*element]
return N
def find_T(self,element):
"""
Returns the tangential force of an element.
"""
F = self.assemble_F()
T = F[3*element+1]
return T
def find_M(self,element):
"""
Returns the moment of an element.
"""
F = self.assemble_F()
M = F[3*element+2]
return M | Exercice 2.ipynb | qgoisnard/Exercice-update | mit |
Next, let's specify the measurements. Notice that we only measure the positions of the particle. | # Measurements
measurements = []
mean = torch.zeros(2)
# no correlations
cov = 1e-5 * torch.eye(2)
with torch.no_grad():
# sample independent measurement noise
dzs = pyro.sample('dzs', dist.MultivariateNormal(mean, cov).expand((num_frames,)))
# compute measurement means
zs = xs_truth[:, :2] + dzs | tutorial/source/ekf.ipynb | uber/pyro | apache-2.0 |
We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements. | def model(data):
# a HalfNormal can be used here as well
R = pyro.sample('pv_cov', dist.HalfCauchy(2e-6)) * torch.eye(4)
Q = pyro.sample('measurement_cov', dist.HalfCauchy(1e-6)) * torch.eye(2)
# observe the measurements
pyro.sample('track_{}'.format(i), EKFDistribution(xs_truth[0], R, ncv,
Q, time_steps=num_frames),
obs=data)
guide = AutoDelta(model) # MAP estimation
optim = pyro.optim.Adam({'lr': 2e-2})
svi = SVI(model, guide, optim, loss=Trace_ELBO(retain_graph=True))
pyro.set_rng_seed(0)
pyro.clear_param_store()
for i in range(250 if not smoke_test else 2):
loss = svi.step(zs)
if not i % 10:
print('loss: ', loss)
# retrieve states for visualization
R = guide()['pv_cov'] * torch.eye(4)
Q = guide()['measurement_cov'] * torch.eye(2)
ekf_dist = EKFDistribution(xs_truth[0], R, ncv, Q, time_steps=num_frames)
states= ekf_dist.filter_states(zs) | tutorial/source/ekf.ipynb | uber/pyro | apache-2.0 |
Initialize the tool (e.g., ddRAD)
You can generate single or paired-end data, and you will likely want to restrict the size of selected fragments to be within an expected size selection window, as is typically done in empirical data sets. Here I select all fragments occurring between two restriction enzymes where the intervening fragment is 300-500bp in length. I then ask that the digested fragments be returned as 150bp fastq reads, with 10 copies of each one. I also restrict it to only the first (largest) 12 scaffolds using the 'nscaffolds' arg. | digest = ipa.digest_genome(
fasta=genome,
name="amaranthus-digest",
workdir="digested_genomes",
re1="CTGCAG",
re2="AATTC",
ncopies=10,
readlen=150,
min_size=300,
max_size=500,
nscaffolds=12,
)
digest.run() | testdocs/analysis/cookbook-digest_genomes.ipynb | dereneaton/ipyrad | gpl-3.0 |
Check results | ! ls -l digested_genomes/ | testdocs/analysis/cookbook-digest_genomes.ipynb | dereneaton/ipyrad | gpl-3.0 |
Example 2 (original RAD data)
The original RAD method uses sonication rather than a second restriction digestion to cut all of the fragments down to an appropriate size for sequencing. Thus you only need to provide a single cut site and a selection window. | digest = ipa.digest_genome(
fasta=genome,
name="amaranthus-digest-RAD",
workdir="digested_genomes",
re1="CTGCAG",
re2=None,
paired=False,
ncopies=10,
readlen=100,
min_size=300,
max_size=500,
nscaffolds=12,
)
digest.run() | testdocs/analysis/cookbook-digest_genomes.ipynb | dereneaton/ipyrad | gpl-3.0 |
Data Structures
There are three fundamental data structures supported by Pandas:<br>
* Series: a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). For those coming from an R background, Series is much like a Vector.
* DataFrame: a 2-dimensional labeled data structure with columns of potentially different types.
* Panel: also called longitudinal data or cross-sectional time series data, is data where multiple cases (people, firms, countries etc) were observed at two or more time periods. This is rarely used though, and I personally haven't come across this except for some Econometrics courses I had taken in my undergraduate years.
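Before looking at each one in detail, here is a minimal illustrative sketch of the first two structures (Panel is skipped, since it is rarely used):
python
import pandas as pd

# a Series: a single labeled column of values
s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])

# a DataFrame: a labeled two-dimensional table
df_sketch = pd.DataFrame({'price': [10, 20, 30], 'qty': [1, 2, 3]},
                         index=['a', 'b', 'c'])
print(s)
print(df_sketch)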
Series
The basic format to create a series is:<br>
series_a = pd.Series(data, index = index_name)
The default index is 0, 1, 2, 3, ... and so on, and does not need to be specified, except in the case of scalars.
# From Scalar Values
series_1 = pd.Series([1,2,3,4,5])
series_1 | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Notice the 0,1,2,3... on the left side? That's called the Index. It starts from 0, but you can rename it. | series_1 = pd.Series([1,2,3,4,5], index = ['Mon','Tue','Wed','Thu','Fri'])
series_1
series_2 = pd.Series(1.0, index = ['a','b','c','d','e'])
series_2
import pandas as pd
import numpy as np
# From an array
# Just copy this for now, we'll cover the 'seed' in DataFrames
np.random.seed(42)
series_3 = pd.Series(np.random.randn(5))
series_3
np.random.seed(42)
series_3 = pd.Series(np.random.randn(5), index = ['a','b','c','d','e'])
series_3
np.random.seed(42)
ind_1 = ['a','b','c','d','e']
series_3 = pd.Series(np.random.randn(5), index = ind_1)
series_3
series_4 = pd.Series([1, 2, 3, 4, 5], index=['a', 'b', 'c', 'd', 'e'])
series_4 | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
We can subset and get values from the series. | series_4['a'] == series_4[0]
series_4[series_4>3]
series_4[series_4%2==0]
series_5 = pd.Series([1,2,3,4,5], index = ['HP', 'GS', 'IBM', 'AA', 'FB'])
series_5
series_5['IBM']
tech_pf1 = series_5[['HP', 'IBM', 'FB']]
tech_pf1
# From a Dictionary
dict_01 = {'Gavin' : 50, 'Russ' : 100, 'Erlich' : 150}
series_6 = pd.Series(dict_01)
series_6
# Reordering the previous series
index = ['Gavin', 'Russ', 'Erlich', 'Peter']
series_7 = pd.Series(dict_01, index=index)
series_7 | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Notice the NaN, which stands for Not a Number. We will be dealing with it extensively when working with DataFrames. It is an indicator for missing or corrupted data. Here's how we test for it. | pd.isnull(series_7) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
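Beyond just detecting missing values, you will usually want to fill or drop them. A couple of quick illustrative calls on the same series (none of these modify series_7 in place):
python
# notnull() is the complement of isnull()
series_7.notnull()

# replace missing values with a default...
series_7.fillna(0)

# ...or drop them entirely
series_7.dropna()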
And here's a nice discussion on the topic from our friends at StackOverflow. | # Pandas is very smart, and aligns the series for mathematical operations
series_6 + series_7
# Renaming an Index
series_7.index.name = "Names"
series_7
# Naming a Series
series_7.name = "SV"
series_7 | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Mini-Project | goals = pd.Series([20,19,21,24,1], index = ["Messi", "Neymar", "Zlatan", "Ronaldo", "N’Gog"])
goals
# Who scored less than 20 goals?
goals[goals<20]
# What is the average number of goals scored?
goals.mean()
# What is the median number of goals scored?
goals.median()
# What is the range of goals scored? (Range = Max - Min)
goals_range = goals.max() - goals.min()
print(goals_range)
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (15,7)
# Plot the goals in a bar chart
goals.plot(kind = "bar")
# Let's beautify that a little
goals.plot(kind = "barh", title = "Goal Scorers") | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Read more about these here.
DataFrames
The DataFrame is, in many respects, the real Pandas. Usually, if you're using Pandas, it will be to use DataFrames.<br>
We will begin with creating DataFrames, and the usual indexing and selection mechanisms. In reality, you will probably never have to 'create' a DataFrame, but practice these skills here to get comfortable with hierarchies, indices and selections. Then we will move on to reading data from multiple formats, including spreadsheets, JSON files and API endpoints.
By the way, during these examples, we will always set the seed first when generating random numbers. If you're coming from R, this is the same as set.seed(). In Python, we use the random.seed statement from numpy, which you can read about here. You can set it to any number you like, and I usually set it to 42 just out of habit, but that's not to say you can't set it to an arbitrary number like 27 or 2012. Use the same numbers as this notebook though to replicate the results. Also note that we need to mention it in every cell whose results we want replicated.
You will see later how this is good practice, especially when sharing your work with other members of the team - they will be able to reproduce your work on their machines because the pseudo-random numbers are generated algorithmically. | import pandas as pd
import numpy as np
# Let's start with a standard array
arr1 = np.array([[40,40,75,95],[80,85,120,130],
[155,160,165,170],[200,245,250,260]])
print(arr1.shape)
print(arr1.size)
print(arr1)
# It is quite common to assign a dataframe the name 'df', although you can
# use a relevant name, such baseball_stats or book_sales
# It's always good to use context driven names - you should code expecting
# someone else to read it a few months down the line
df = pd.DataFrame(arr1, index = "Peter,Clarke,Bruce,Tony".split(","),
columns = "Jan,Feb,Mar,Apr".split(","))
df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
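As a quick sanity check of the reproducibility point made above: resetting the same seed before a draw reproduces exactly the same numbers.
python
np.random.seed(42)
first_draw = np.random.randn(3)

np.random.seed(42)
second_draw = np.random.randn(3)

# identical arrays, because the seed was reset before each draw
print(np.array_equal(first_draw, second_draw))  # True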
Indexing and Selection
Selecting Columns | df = pd.DataFrame(arr1, index = "Peter,Clarke,Bruce,Tony".split(","),
columns = "Jan,Feb,Mar,Apr".split(","))
df
# Selecting columns
df[['Jan']]
df[['Jan','Feb']]
df[['Mar','Jan']] | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
It's interesting to note that the official Pandas documentation refers to DataFrames as:
Can be thought of as a dict-like container for Series objects.
You can access it as a Series as below: | df['Jan']
print('Series:', type(df['Jan']))
print('DataFrame:',type(df[['Jan']])) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Using loc and iloc | df = pd.DataFrame(arr1, index = "Peter,Clarke,Bruce,Tony".split(","),
columns = "Jan,Feb,Mar,Apr".split(","))
df
# For selecting by Label
df.loc[['Tony']]
df.loc[['Peter','Bruce']]
df.loc[['Peter','Bruce'],['Jan','Feb']]
# All of Peter's data
df.loc[["Peter"]][:]
df.loc["Peter"][:]
df
# Integer-location based indexing for selection by position
# Note how this returns a Dataframe
df.iloc[[0]]
# and this returns a Series
df.iloc[0]
# Narrowing down further
df.iloc[[0],[1]]
# Replicating the results from our use of the loc statement
df.iloc[[0,2]]
# Compare to df.loc[['Peter','Bruce'],['A','D']]
df.iloc[[0,2],[0,3]] | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
There's another function named ix. I have rarely used it, and both loc and iloc take care of all my selection needs. You can read about it here.
Also, check out the similarity of outputs below: | df.ix[0:3]
df.iloc[0:3] | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Conditional Selection
While exploring data sets, one often has to use conditional selection. It is also useful for creating subsets to work with. | df
df[df%2 == 0]
df%2 == 0
df < 100
df[df<100]
df
df[df['Jan']>100][['Apr']]
df[df['Jan']<100][['Feb','Apr']]
# Using multiple conditions
df[(df['Jan'] >= 80) & (df['Mar']>100)] | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Did you notice that we used & instead of and? When using Pandas, we have to use the symbol, not the word. Here's a StackOverflow discussion on this.
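The same rule applies to the other boolean operators: | replaces or and ~ replaces not, and each condition needs its own parentheses. A quick illustrative sketch on the same DataFrame:
python
# 'or' becomes |
df[(df['Jan'] < 50) | (df['Mar'] > 200)]

# 'not' becomes ~
df[~(df['Feb'] > 100)]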
Creating New Columns | df = pd.DataFrame(arr1, index = "Peter,Clarke,Bruce,Tony".split(","), columns = "Jan,Feb,Mar,Apr".split(","))
df
df["Dec"] = df["Jan"] + df["Mar"]
df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Removing Columns
While fundamentally adding and removing columns ought to be similar operations, there are a few differences. Let's see if you can figure it out. | df
df.drop('Dec', axis = 1) | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
First, we had to mention the axis. 0 is for rows, 1 is for columns. | df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
Why is 'Dec' still there? Here lies the difference - while removing columns, we have to specify that the operation should be inplace. Read about it in the official documentation. | df.drop('Dec', axis = 1, inplace = True)
df | 12.Introduction_to_Pandas.ipynb | prasants/pyds | mit |
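For completeness, the same method drops a row when given axis = 0, and assigning the result back (df = df.drop(...)) is an alternative to inplace = True. A quick illustrative call:
python
# drops the row labelled 'Peter'; df itself is unchanged unless inplace = True is used
df.drop('Peter', axis = 0)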