In many cases it is useful to have both a high-quality movie and a lower-resolution gif of the same animation. If that is desired, just deactivate the `remove_movie` option and give a filename ending in `.gif`. xmovie will first render a high-quality movie and then convert it to a gif, without removing the movie afterwards. Optional frame-generation progress bars Display a progress bar with `progress=True` (requires tqdm). This can be helpful for long-running animations.
mov.save('movie_combo.gif', remove_movie=False, progress=True)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Modify the framerate of the output with the keyword arguments `framerate` (for movies) and `gif_framerate` (for gifs).
mov.save('movie_fast.gif', remove_movie=False, progress=True, framerate=20, gif_framerate=20)
mov.save('movie_slow.gif', remove_movie=False, progress=True, framerate=5, gif_framerate=5)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![movie_fast.gif](movie_fast.gif)![movie_slow.gif](movie_slow.gif) ![](movie_combo.gif) Frame dimension selection By default, the movie passes through the `'time'` dimension of the DataArray, but this can be easily changed with the `framedim` argument:
mov = Movie(ds.air, framedim='lon')
mov.save('lon_movie.gif')
Movie created at lon_movie.mp4 GIF created at lon_movie.gif
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![](lon_movie.gif) Modifying plots Rotating globe (preset)
from xmovie.presets import rotating_globe

mov = Movie(ds.air, plotfunc=rotating_globe)
mov.save('movie_rotating.gif', progress=True)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![movie_rotating.gif](movie_rotating.gif)
mov = Movie(ds.air, plotfunc=rotating_globe, style='dark')
mov.save('movie_rotating_dark.gif', progress=True)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![](movie_rotating_dark.gif) Specifying xarray plot method to be used Change the plotting function with the parameter `plotmethod`.
mov = Movie(ds.air, rotating_globe, plotmethod='contour')
mov.save('movie_cont.gif')

mov = Movie(ds.air, rotating_globe, plotmethod='contourf')
mov.save('movie_contf.gif')
Movie created at movie_cont.mp4 GIF created at movie_cont.gif Movie created at movie_contf.mp4 GIF created at movie_contf.gif
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![](movie_cont.gif)![](movie_contf.gif) Changing preset settings
import numpy as np

ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair  # 36 times in total

# Interpolate time for smoother animation
ds['time'].values[:] = np.arange(len(ds['time']))
ds = ds.interp(time=np.linspace(0, 10, 60))

# `Movie` accepts keywords for the xarray plotting interface and provides a set of 'own' keywords like
# `coast`, `land` and `style` to facilitate the styling of plots
mov = Movie(ds, rotating_globe,
            # Keyword arguments to the xarray plotting interface
            cmap='RdYlBu_r', x='xc', y='yc', shading='auto',
            # Custom keyword arguments to `rotating_globe`
            lat_start=45, lat_rotations=0.05, lon_rotations=0.2,
            land=False, coastline=True, style='dark')
mov.save('movie_rasm.gif', progress=True)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
![](movie_rasm.gif) User-provided Besides the presets, xmovie is designed to animate any custom plot which can be wrapped in a function acting on a matplotlib figure. This can contain xarray plotting commands, 'pure' matplotlib or a combination of both. This can come in handy when you want to animate a complex static plot.
ds = xr.tutorial.open_dataset('rasm', decode_times=False).Tair

fig = plt.figure(figsize=[10, 5])
tt = 30

station = dict(x=100, y=150)
ds_station = ds.sel(**station)

(ax1, ax2) = fig.subplots(ncols=2)

ds.isel(time=tt).plot(ax=ax1)
ax1.plot(station['x'], station['y'], marker='*', color='k', markersize=15)
ax1.text(station['x'] + 4, station['y'] + 4, 'Station', color='k')
ax1.set_aspect(1)
ax1.set_facecolor('0.5')
ax1.set_title('')

# Time series
ds_station.isel(time=slice(0, tt + 1)).plot.line(ax=ax2, x='time')
ax2.set_xlim(ds.time.min().data, ds.time.max().data)
ax2.set_ylim(ds_station.min(), ds_station.max())
ax2.set_title('Data at station')

fig.subplots_adjust(wspace=0.6)
fig.savefig("static.png")
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
All you need to do is wrap your plotting calls into a function `func(ds, fig, frame)`, where `ds` is the xarray dataset you pass to `Movie`, `fig` is a matplotlib figure handle, and `frame` is the index of the movie frame.
def custom_plotfunc(ds, fig, tt, *args, **kwargs):
    # Define station location for timeseries
    station = dict(x=100, y=150)
    ds_station = ds.sel(**station)

    (ax1, ax2) = fig.subplots(ncols=2)

    # Map axis
    # Colorlimits need to be fixed or your video is going to cause seizures.
    # This is the only modification from the code above!
    ds.isel(time=tt).plot(ax=ax1, vmin=ds.min(), vmax=ds.max(), cmap='RdBu_r')
    ax1.plot(station['x'], station['y'], marker='*', color='k', markersize=15)
    ax1.text(station['x'] + 4, station['y'] + 4, 'Station', color='k')
    ax1.set_aspect(1)
    ax1.set_facecolor('0.5')
    ax1.set_title('')

    # Time series
    ds_station.isel(time=slice(0, tt + 1)).plot.line(ax=ax2, x='time')
    ax2.set_xlim(ds.time.min().data, ds.time.max().data)
    ax2.set_ylim(ds_station.min(), ds_station.max())
    ax2.set_title('Data at station')

    fig.subplots_adjust(wspace=0.6)
    return None, None
    # ^ This is not strictly necessary, but otherwise a warning will be raised.

mov_custom = Movie(ds, custom_plotfunc)
mov_custom.preview(30)
mov_custom.save('movie_custom.gif', progress=True)
_____no_output_____
MIT
docs/examples/quickstart.ipynb
zmoon/xmovie
Math symbol practice. Let's practice inserting math symbols. $\theta = 1$ $1 \le 5 $ $\sum_{i=1}^{n} i^2 $ $$\sum_{i=1}^{n} \frac{1}{i} $$
1+1
_____no_output_____
MIT
math_symbol_prac.ipynb
Sumi-Lee/testrepository
Summarize titers and sequences by date. Create a single histogram on the same scale for the number of titer measurements and the number of genomic sequences per year to show the relative contribution of each data source.
import Bio
import Bio.SeqIO
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline

# Configure matplotlib theme.
fontsize = 14
matplotlib_params = {
    'axes.labelsize': fontsize,
    'font.size': fontsize,
    'legend.fontsize': 12,
    'xtick.labelsize': fontsize,
    'ytick.labelsize': fontsize,
    'text.usetex': False,
    'figure.figsize': [6, 4],
    'savefig.dpi': 300,
    'figure.dpi': 300,
}
plt.rcParams.update(matplotlib_params)

# Turn off spines for all plots.
plt.rc("axes.spines", top=False, right=False)

matplotlib.get_configdir()
plt.style.use("huddlej")
plt.style.available
_____no_output_____
MIT
analyses/2018-11-07-summarize-titers-and-sequences-by-date.ipynb
blab/flu-forecasting
Load sequences
ls ../../seasonal-flu/data/*.fasta

# Open FASTA of HA sequences for H3N2.
sequences = Bio.SeqIO.parse("../../seasonal-flu/data/h3n2_ha.fasta", "fasta")

# Get strain names from sequences.
distinct_strains_with_sequences = pd.Series([
    sequence.name.split("|")[0].replace("-egg", "")
    for sequence in sequences
]).drop_duplicates()
distinct_strains_with_sequences.shape

# Parse years from distinct strains with titers.
sequence_years = distinct_strains_with_sequences.apply(lambda strain: int(strain.split("/")[-1])).values

# Omit invalid sequence years.
sequence_years = sequence_years[sequence_years < 2019]
sequence_years.shape
_____no_output_____
MIT
analyses/2018-11-07-summarize-titers-and-sequences-by-date.ipynb
blab/flu-forecasting
Load titers
# Read titers into a data frame.
titers = pd.read_table(
    "../../seasonal-flu/data/cdc_h3n2_egg_hi_titers.tsv",
    header=None,
    index_col=False,
    names=["test", "reference", "serum", "source", "titer", "assay"]
)
titers.head()

titers["test_year"] = titers["test"].apply(lambda strain: int(strain.replace("-egg", "").split("/")[-1]))
(titers["test_year"] < 2007).sum()
titers["test_year"].value_counts()
titers.shape
titers[titers["test_year"] < 2007]["test"].unique().shape
titers[titers["test_year"] < 2007]["test"].unique()

# Identify distinct viruses represented as test strains in titers.
distinct_strains_with_titers = titers["test"].str.replace("-egg", "").drop_duplicates()

# Parse years from distinct strains with titers.
titer_years = distinct_strains_with_titers.apply(lambda strain: int(strain.split("/")[-1])).values

# Omit invalid titer years.
titer_years = titer_years[titer_years < 2019]
titer_years.shape
_____no_output_____
MIT
analyses/2018-11-07-summarize-titers-and-sequences-by-date.ipynb
blab/flu-forecasting
Plot sequence and titer strains by year
sequence_years.min()
sequence_years.max()
[sequence_years, titer_years]
sequence

fig, ax = plt.subplots(1, 1)
bins = np.arange(1968, 2019)
ax.hist([sequence_years, titer_years], bins, histtype="bar", label=["HA sequence", "HI titer"])
legend = ax.legend(
    loc="upper left",
    ncol=1,
    frameon=False,
    handlelength=1,
    fancybox=False,
    handleheight=1
)
legend.set_title("Virus measurement", prop={"size": 12})
legend._legend_box.align = "left"
ax.set_xlim(1990)
ax.set_xlabel("Year")
ax.set_ylabel("Number of viruses measured")

fig, ax = plt.subplots(1, 1)
bins = np.arange(1968, 2019)
ax.hist([titer_years], bins, histtype="bar", label=["HI titer"])
ax.set_xlim(1990)
ax.set_xlabel("Year")
ax.set_ylabel("Viruses measured by HI")

len(titer_years)
(titer_years < 2010).sum()
_____no_output_____
MIT
analyses/2018-11-07-summarize-titers-and-sequences-by-date.ipynb
blab/flu-forecasting
HSMfile examples The [hsmfile module](https://github.com/hadfieldnz/hsmfile) is modelled on my IDL mgh_san routines and provides user-customisable access to remote (slow-access) and local (fast-access) files. This notebook exercises various aspects of the hsmfile module. Change history: MGH 2019-08-15 - afile is now called hsmfile. MGH 2019-08-07 - Modified for afile. MGH 2019-05-06 - Written for afile's predecessor, mgh_san.
import os
import hsmfile
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
The following cell should be executed whenever the hsmfile module code has been changed.
from importlib import reload

reload(hsmfile);
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
Print the volumes supported by the hsmfile module on this platform
print(hsmfile.volume.keys())
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
Specify the files for which we will search (Cook Strait Narrows 1 km run). Normally
vol = '/nesi/nobackup/niwa00020/hadfield'
sub = 'work/cook/roms/sim34/run'
pattern = 'bran-2009-2012-nzlam-1.20-detide/roms_avg_????.nc'
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
The hsmfile.path function returns a pathlib Path object. Here we construct the path names for the base directory on the remote, or master, volume (mirror = False) and the local, or mirror, volume (mirror = True)
hsmfile.path(sub=sub, vol=vol, mirror=False)

if 'mirror' in hsmfile.volume[vol]:
    print(repr(hsmfile.path(sub=sub, vol=vol, mirror=True)))
else:
    print('Volume has no mirror')
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
The hsmfile.search function uses the Path's glob function to create a generator object and from that generates and returns a sorted list of Path objects relative to the base.
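The search itself is just a glob over the base directory. A minimal sketch of the idea using `pathlib` directly (an illustration with a hypothetical helper name, not hsmfile's actual implementation):

```python
from pathlib import Path

def search_sketch(pattern, base):
    # Glob under the base directory and return a sorted list of
    # paths expressed relative to that base, as described above.
    base = Path(base)
    return sorted(p.relative_to(base) for p in base.glob(pattern))
```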
match = hsmfile.search(pattern,sub=sub,vol=vol); match
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
The hsmfile.file function constructs and returns a list of path objects representing actual files. It checks for existence and copies from master to mirror as necessary.
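Conceptually this is a copy-on-first-access cache. A rough sketch of that logic (hypothetical helper, not hsmfile's actual code):

```python
import shutil
from pathlib import Path

def file_sketch(rel, master_base, mirror_base):
    # Return the mirror copy of a file, copying it from the
    # master volume the first time it is requested.
    master = Path(master_base) / rel
    mirror = Path(mirror_base) / rel
    if not mirror.exists():
        mirror.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(master, mirror)
    return mirror
```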
file = [hsmfile.file(m,sub=sub,vol=vol) for m in match]; file
_____no_output_____
MIT
examples/HSMfile_examples.ipynb
hadfieldnz/notebooks
Parameters

| Parameter | Description |
| --------- | ----------- |
| `pattern` | The regular expression to match |
| `string` | The string to be matched |
| `flags` | Flags controlling how the pattern matches, e.g. case-insensitive or multiline matching |

| Flag | Description |
| ---- | ----------- |
| `re.I` | Makes matching case-insensitive |
| `re.L` | Locale-aware matching |
| `re.M` | Multiline matching; affects `^` and `$` |
| `re.S` | Makes `.` match any character, including newlines |
| `re.U` | Interprets characters according to the Unicode character set; affects `\w`, `\W`, `\b`, `\B` |
| `re.X` | Verbose mode: allows a more flexible format so the expression is easier to read |

Use the match-object methods `group(num)` or `groups()` to retrieve the matched text.

| Match object method | Description |
| ------------------- | ----------- |
| `group(num=0)` | The string matched by the whole expression; `group()` can also take several group numbers at once, in which case it returns a tuple with the corresponding values |
| `groups()` | Returns a tuple of all subgroup strings, from group 1 up to the last group |

| Method | Description |
| ------ | ----------- |
| `group([group1, …])` | Gets the string matched by one or more groups; for the whole match use `group()` or `group(0)` |
| `start([group])` | Start position of the group's match within the whole string (index of its first character); the argument defaults to 0 |
| `end([group])` | End position of the group's match within the whole string (index of its last character + 1); the argument defaults to 0 |
| `span([group])` | Returns `(start(group), end(group))` |
import re

# re.sub(pattern, repl, string, count=0, flags=0)
#   pattern  the regex pattern string
#   repl     the replacement string, or a function
#   string   the original string to search and replace in
#   count    the maximum number of replacements; the default 0 replaces all matches
phone = "123-456-789 # this is a phone number"
print(re.sub(r'#.*$', "", phone))
print(re.sub(r'\D', "", phone))

def double(matched):
    """Double the matched number.

    :param matched: the match object passed in
    :return: value * 2, as a str
    """
    value = int(matched.group('value'))
    return str(value * 2)

s = 'A1111G4HFD2222'
print(re.sub('(?P<value>\d+)', double, s))

# Compile an expression: re.compile(pattern[, flags])
#   pattern  a regular expression given as a string
#   flags    optional matching mode, e.g. case-insensitive or multiline:
#     re.I  ignore case
#     re.L  make `\w`, `\W`, `\b`, `\B`, `\s`, `\S` depend on the current locale
#     re.M  multiline mode
#     re.S  make . match any character, including newlines
#     re.U  make `\w`, `\W`, `\b`, `\B`, `\d`, `\D`, `\s`, `\S` depend on the Unicode character database
#     re.X  ignore whitespace and # comments, for readability
pattern = re.compile(r'\d+')
math_item = pattern.match('one12twothree34four')
print(1, math_item)
math_item = pattern.match('one12twothree34four', 2, 10)
print(2, math_item)
math_item = pattern.match('one12twothree34four', 3, 10)
print(3, math_item)  # returns a Match object

# the 0 can be omitted
print(1, math_item.group(0))
print(2, math_item.start(0))
print(3, math_item.end(0))
print(4, math_item.span(0))

pattern = re.compile(r'([a-z]+) ([a-z]+)', re.I)
math_item = pattern.match('Hello World Wide Web')
print(1, math_item)           # match succeeded, returns a Match object
print(1, math_item.group(0))  # the whole matched substring
print(1, math_item.span(0))   # the span of the whole matched substring
print(2, math_item.group(1))  # the substring matched by the first group
print(2, math_item.span(1))   # the span of the first group's match
print(3, math_item.group(2))  # the substring matched by the second group
print(3, math_item.span(2))   # the span of the second group's match
print(4, math_item.groups())  # equivalent to (m.group(1), m.group(2), ...)
try:
    item = math_item.group(3)  # there is no third group
except IndexError as e:
    print(e)

# Find all: re.findall(string[, pos[, endpos]])
#   string  the string to match
#   pos     optional start position, default 0
#   endpos  optional end position, default len(string)
pattern = re.compile(r'\d+')
print(1, pattern.findall('qwer 123 google 456'))
print(1, pattern.findall('qwe88rty123456google456', 0, 10))

# `re.finditer` is like `re.findall`: it finds all substrings matched by the
# pattern, but returns them as an iterator.
matchs = re.finditer(r"\d+", "12a32bc43jf3")
print(2, matchs)
for item in matchs:
    print(3, item.group())

# Split: re.split(pattern, string[, maxsplit=0, flags=0])
#   maxsplit  number of splits; maxsplit=1 splits once, the default 0 means no limit
print(1, re.split('\W+', 'runoob, runoob, runoob.'))
print(2, re.split('(\W+)', ' runoob, runoob, runoob.'))
print(3, re.split('\W+', ' runoob, runoob, runoob.', 1))
print(4, re.split('a*', 'hello world'))  # where the pattern finds no match, split makes no cut
_____no_output_____
MIT
_note_/内置包/re_正则处理.ipynb
By2048/_python_
Other

```
re.RegexObject
re.compile() returns a RegexObject object.

re.MatchObject
group() returns the string matched by the RE.
```
dytt_title = ".*\[(.*)\].*"
name_0 = r"罗拉快跑BD国德双语中字[电影天堂www.dy2018.com].mkv"
name_1 = r"[电影天堂www.dy2018.com]罗拉快跑BD国德双语中字.mkv"
print(1, re.findall(dytt_title, name_0))
print(1, re.findall(dytt_title, name_1))

data = "xxxxxxxxxxxentry某某内容for-----------"
result = re.findall(".*entry(.*)for.*", data)
print(3, result)
_____no_output_____
MIT
_note_/内置包/re_正则处理.ipynb
By2048/_python_
Data visualization
df.price_value.hist(bins=100);
df.price_value.max()
df.price_value.describe()

df.groupby(['param_marka-pojazdu'])['price_value'].mean()

(
    df
    .groupby(['param_marka-pojazdu'])['price_value']
    .agg(np.mean)
    .sort_values(ascending=False)
    .head(50)
).plot(kind='bar', figsize=(20, 5))

(
    df
    .groupby(['param_marka-pojazdu'])['price_value']
    .agg((np.mean, np.median, np.size))
    .sort_values(by='size', ascending=False)
    .head(50)
).plot(kind='bar', figsize=(15, 5), subplots=True)

def plotter(feat_groupby, feat_agg='price_value', agg_funcs=[np.mean, np.median, np.size],
            feat_sort='mean', top=50, subplots=True):
    return (
        df
        .groupby(feat_groupby)[feat_agg]
        .agg(agg_funcs)
        .sort_values(by=feat_sort, ascending=False)
        .head(top)
    ).plot(kind='bar', figsize=(15, 5), subplots=subplots)

plotter('param_marka-pojazdu')
plotter('param_model-pojazdu', feat_sort='size')
plotter('param_model', feat_sort='size')
plotter('param_kraj-pochodzenia')
plotter('param_color')
_____no_output_____
MIT
day2_visualization.ipynb
wudzitsu/dw_matrix_car
1- Class Activation Map with convolutions. In this first part, we will implement class activation maps as described in the paper [Learning Deep Features for Discriminative Localization](http://cnnlocalization.csail.mit.edu/). There is a GitHub repo associated with the paper: https://github.com/zhoubolei/CAM and even a demo in PyTorch: https://github.com/zhoubolei/CAM/blob/master/pytorch_CAM.py. The code below is adapted from this demo, but we will not use hooks, only convolutions...
import io
import requests
from PIL import Image

import torch
import torch.nn as nn
from torchvision import models, transforms
from torch.nn import functional as F
import torch.optim as optim

import numpy as np
import cv2
import pdb

from matplotlib.pyplot import imshow

# input image
LABELS_URL = 'https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json'
IMG_URL = 'http://media.mlive.com/news_impact/photo/9933031-large.jpg'
_____no_output_____
Apache-2.0
HW2/HW2_CAM_Adversarial.ipynb
Hmkhalla/notebooks
As in the demo, we will use the Resnet18 architecture. In order to get CAM, we need to transform this network into a fully convolutional network: at all layers, we need to deal with images, i.e. with a shape $\text{Number of channels} \times W\times H$. In particular, we are interested in the last images as shown here:![](https://camo.githubusercontent.com/fb9a2d0813e5d530f49fa074c378cf83959346f7/687474703a2f2f636e6e6c6f63616c697a6174696f6e2e637361696c2e6d69742e6564752f6672616d65776f726b2e6a7067) As we deal with a Resnet18 architecture, the image obtained before applying the `AdaptiveAvgPool2d` has size $512\times 7 \times 7$ if the input has size $3\times 224\times 224$:![resnet_Archi](https://pytorch.org/assets/images/resnet.png) A- The first thing you will need to do is 'remove' the last layers of the resnet18 model, which are called `(avgpool)` and `(fc)`. Check that for an original image of size $3\times 224\times 224$, you obtain an image of size $512\times 7\times 7$. B- Then you need to retrieve the weights (and bias) of the `fc` layer, i.e. a matrix of size $1000\times 512$ transforming a vector of size 512 into a vector of size 1000 to make the prediction. Then you need to use these weights and bias to apply it pixelwise in order to transform your $512\times 7\times 7$ image into a $1000\times 7\times 7$ output (Hint: use a convolution). You can interpret this output as follows: `output[i,j,k]` is the logit for 'pixel' `[j,k]` for being of class `i`. C- From this $1000\times 7\times 7$ output, check that you can retrieve the original output given by the `resnet18` by using an `AdaptiveAvgPool2d`. Can you understand why this is true? D- In addition, you can construct the Class Activation Map. Draw the activation map for the class mountain bike and for the class lakeside. Validation: 1. make sure that when running your notebook, you display both CAMs, for the class mountain bike and for the class lakeside. 2. for question B above, what convolution did you use? Your answer, i.e. the name of the Pytorch layer with the correct parameters (in_channel, kernel...) here: Replace by your answer 3. your short explanation of why your network gives the same prediction as the original `resnet18`: Replace by your answer 4. Is your network working on an image which is not of size $224\times 224$, i.e. without resizing? And what about `resnet18`? Explain why? Replace by your answer
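A hint for question C (added here, not part of the original notebook): the pixelwise affine map and average pooling commute because both are linear. Writing $f_{jk}\in\mathbb{R}^{512}$ for the feature vector at pixel $(j,k)$ and $W$, $b$ for the `fc` weights and bias:

$$\frac{1}{HW}\sum_{j,k}\left(W f_{jk} + b\right) = W\left(\frac{1}{HW}\sum_{j,k} f_{jk}\right) + b,$$

so applying the $1\times1$ convolution and then averaging gives exactly the `fc` layer applied to the pooled features, i.e. the original `resnet18` output.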
net = models.resnet18(pretrained=True)
net.eval()
x = torch.randn(5, 3, 224, 224)
y = net(x)
y.shape

n_mean = [0.485, 0.456, 0.406]
n_std = [0.229, 0.224, 0.225]

normalize = transforms.Normalize(
    mean=n_mean,
    std=n_std
)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    normalize
])

# Display the image we will use.
response = requests.get(IMG_URL)
img_pil = Image.open(io.BytesIO(response.content))
imshow(img_pil);

img_tensor = preprocess(img_pil)
net = net.eval()
logit = net(img_tensor.unsqueeze(0))
logit.shape
img_tensor.shape

# download the imagenet category list
classes = {int(key): value for (key, value) in requests.get(LABELS_URL).json().items()}

def print_preds(logit):
    # print the predictions with their 'probabilities' from the logit
    h_x = F.softmax(logit, dim=1).data.squeeze()
    probs, idx = h_x.sort(0, True)
    probs = probs.numpy()
    idx = idx.numpy()
    # output the prediction
    for i in range(0, 5):
        print('{:.3f} -> {}'.format(probs[i], classes[idx[i]]))
    return idx

idx = print_preds(logit)

def returnCAM(feature_conv, idx):
    # input: tensor feature_conv of dim 1000*W*H and idx between 0 and 999
    # output: image W*H with entries rescaled between 0 and 255 for the display
    cam = feature_conv[idx].detach().numpy()
    cam = cam - np.min(cam)
    cam_img = cam / np.max(cam)
    cam_img = np.uint8(255 * cam_img)
    return cam_img

# some utilities
def pil_2_np(img_pil):
    # transform a PIL image in a numpy array
    return np.asarray(img_pil)

def display_np(img_np):
    imshow(Image.fromarray(np.uint8(img_np)))

def plot_CAM(img_np, CAM):
    height, width, _ = img_np.shape
    heatmap = cv2.applyColorMap(cv2.resize(CAM, (width, height)), cv2.COLORMAP_JET)
    result = heatmap * 0.3 + img_np * 0.5
    display_np(result)

# here is a fake example to see how things work
img_np = pil_2_np(img_pil)
diag_CAM = returnCAM(torch.eye(7).unsqueeze(0), 0)
plot_CAM(img_np, diag_CAM)

# your code here for your new network
# One possible completion (an assumption, not the notebook's official solution):
# keep everything up to the last conv block, then apply the fc weights
# pixelwise as a 1x1 convolution.
trunk = nn.Sequential(*list(net.children())[:-2])  # outputs 512 x 7 x 7
fc_conv = nn.Conv2d(512, 1000, kernel_size=1)      # fc applied at every pixel
fc_conv.weight.data = net.fc.weight.data.view(1000, 512, 1, 1)
fc_conv.bias.data = net.fc.bias.data
net_conv = nn.Sequential(trunk, fc_conv)
# do not forget:
net_conv = net_conv.eval()

# to test things are right
x = torch.randn(5, 3, 224, 224)
y = net_conv(x)
y.shape

logit_conv = net_conv(img_tensor.unsqueeze(0))
logit_conv.shape

# transform this to a [1,1000] tensor with AdaptiveAvgPool2d
logit_new = nn.AdaptiveAvgPool2d((1, 1))(logit_conv).view(1, 1000)
idx = print_preds(logit_new)

i = 1  # index of lakeside in the printed top-5 (adjust to match your print_preds output)
CAM1 = returnCAM(logit_conv.squeeze(), idx[i])
plot_CAM(img_np, CAM1)

i = 0  # index of mountain bike in the printed top-5 (adjust to match your output)
CAM2 = returnCAM(logit_conv.squeeze(), idx[i])
plot_CAM(img_np, CAM2)
_____no_output_____
Apache-2.0
HW2/HW2_CAM_Adversarial.ipynb
Hmkhalla/notebooks
2- Adversarial examples In this second part, we will look at [adversarial examples](https://arxiv.org/abs/1607.02533): "An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems..." Rules of the game: - the attacker cannot modify the classifier, i.e. the neural net with the preprocessing done on the image before being fed to the network. - even if the attacker cannot modify the classifier, we assume that the attacker knows the architecture of the classifier. Here, we will still work with `resnet18` and the standard Imagenet normalization. - the attacker can only modify the physical image fed into the network. - the attacker should fool the classifier, i.e. the label obtained on the corrupted image should not be the same as the label predicted on the original image. First, you will implement the *Fast gradient sign method (FGSM)*, which is described in Section 2.1 of [Adversarial examples in the physical world](https://arxiv.org/abs/1607.02533). The idea is simple: suppose you have an image $\mathbf{x}$ and when you pass it through the network, you get the 'true' label $y$. You know that your network has been trained by minimizing the loss $J(\mathbf{\theta}, \mathbf{x}, y)$ with respect to the parameters of the network $\theta$. Now, $\theta$ is fixed as you cannot modify the classifier, so you need to modify $\mathbf{x}$. In order to do so, you can compute the gradient of the loss with respect to $\mathbf{x}$, i.e. $\nabla_{\mathbf{x}} J(\mathbf{\theta}, \mathbf{x}, y)$, and use it as follows to get the modified image $\tilde{\mathbf{x}}$: $$\tilde{\mathbf{x}} = \text{Clamp}\left(\mathbf{x} + \epsilon *\text{sign}(\nabla_{\mathbf{x}} J(\mathbf{\theta}, \mathbf{x}, y)),0,1\right),$$ where $\text{Clamp}(\cdot, 0,1)$ ensures that $\tilde{\mathbf{x}}$ is a proper image. Note that if instead of the sign, you take the full gradient, you are now following the gradient, i.e. increasing the loss $J(\mathbf{\theta}, \mathbf{x}, y)$ so that $y$ becomes less likely to be the predicted label. Validation: 1. Implement this attack. Make sure to display the corrupted image. 2. For what value of epsilon is your attack successful? What is the predicted class then? Replace by your answer 3. plot the sign of the gradient and pass this image through the network. What prediction do you obtain? Compare to [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) Replace by your answer
# Image under attack!
url_car = 'https://cdn130.picsart.com/263132982003202.jpg?type=webp&to=min&r=640'
response = requests.get(url_car)
img_pil = Image.open(io.BytesIO(response.content))
imshow(img_pil);

# same as above
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    normalize
])

for p in net.parameters():
    p.requires_grad = False

x = preprocess(img_pil).clone().unsqueeze(0)
logit = net(x)
_ = print_preds(logit)

t_std = torch.from_numpy(np.array(n_std, dtype=np.float32)).view(-1, 1, 1)
t_mean = torch.from_numpy(np.array(n_mean, dtype=np.float32)).view(-1, 1, 1)

def plot_img_tensor(img):
    imshow(np.transpose(img.detach().numpy(), [1, 2, 0]))

def plot_untransform(x_t):
    x_np = (x_t * t_std + t_mean).detach().numpy()
    x_np = np.transpose(x_np, [1, 2, 0])
    imshow(x_np)

# here we display an image given as a tensor
x_img = (x * t_std + t_mean).squeeze(0)
plot_img_tensor(x_img)

# your implementation of the attack
def fgsm_attack(image, epsilon, data_grad):
    # Collect the element-wise sign of the data gradient
    sign_data_grad = data_grad.sign()
    # Create the perturbed image by adjusting each pixel of the input image
    perturbed_image = image + epsilon * sign_data_grad
    # Adding clipping to maintain [0,1] range
    perturbed_image = torch.clamp(perturbed_image, 0, 1)
    # Return the perturbed image
    return perturbed_image

idx = 656  # minivan
criterion = nn.CrossEntropyLoss()
x_img.requires_grad = True
logit = net(normalize(x_img).unsqueeze(0))
target = torch.tensor([idx])
# compute the loss to backpropagate (one possible completion):
loss = criterion(logit, target)
loss.backward()
_ = print_preds(logit)

# your attack here
epsilon = 0  # increase until the attack succeeds
x_att = fgsm_attack(x_img, epsilon, x_img.grad)

# the new prediction for the corrupted image
logit = net(normalize(x_att).unsqueeze(0))
_ = print_preds(logit)

# can you see the difference?
plot_img_tensor(x_att)

# do not forget to plot the sign of the gradient
gradient = x_img.grad.sign()
plot_img_tensor((1 + gradient) / 2)

# what is the prediction for the gradient?
logit = net(normalize(gradient).unsqueeze(0))
_ = print_preds(logit)
_____no_output_____
Apache-2.0
HW2/HW2_CAM_Adversarial.ipynb
Hmkhalla/notebooks
3- Transforming a car into a cat We now implement the *Iterative Target Class Method (ITCM)* as defined by equation (4) in [Adversarial Attacks and Defences Competition](https://arxiv.org/abs/1804.00097). To test it, we will transform the car (labeled minivan by our `resnet18`) into a [Tabby cat](https://en.wikipedia.org/wiki/Tabby_cat) (class 281 in ImageNet). But you can try with any other target. Validation: 1. Implement the ITCM and make sure to display the resulting image.
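For reference (recalled from the paper, up to notation), the iterative target class update repeatedly steps *against* the gradient of the loss taken with respect to the target class, clipping to stay close to the original image:

$$\tilde{\mathbf{x}}_0 = \mathbf{x},\qquad \tilde{\mathbf{x}}_{N+1} = \mathrm{Clip}_{\mathbf{x},\epsilon}\left\{\tilde{\mathbf{x}}_{N} - \alpha\,\mathrm{sign}\left(\nabla_{\mathbf{x}} J(\tilde{\mathbf{x}}_{N}, y_{\text{target}})\right)\right\}.$$

The cell below does the same thing in spirit, using a plain SGD step on the image instead of the sign step.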
x = preprocess(img_pil).clone()
xd = preprocess(img_pil).clone()
xd.requires_grad = True

idx = 281  # tabby

optimizer = optim.SGD([xd], lr=0.01)

for i in range(200):
    # one possible completion: minimize the loss with respect to the target class
    output = net(xd.unsqueeze(0))
    loss = criterion(output, torch.tensor([idx]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(loss.item())
    _ = print_preds(output)
    print(i, '-----------------')
    # break the loop once we are satisfied, e.g. when the target class wins
    if output.argmax(dim=1).item() == idx:
        break

_ = print_preds(output)

# plot the corrupted image
plot_untransform(xd)
_____no_output_____
Apache-2.0
HW2/HW2_CAM_Adversarial.ipynb
Hmkhalla/notebooks
Introduction to Deep Learning with PyTorch. In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks. Neural Networks. Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output. Mathematically this looks like: $$\begin{align}y &= f(w_1 x_1 + w_2 x_2 + b) \\y &= f\left(\sum_i w_i x_i +b \right)\end{align}$$ With vectors this is the dot/inner product of two vectors: $$h = \begin{bmatrix}x_1 \, x_2 \cdots x_n\end{bmatrix}\cdot \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n\end{bmatrix}$$ Tensors. It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors. With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
# First, import PyTorch
import torch

def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1 / (1 + torch.exp(-x))

### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line: `features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one. `weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution. Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution. PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network. > **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
## Calculate the output of this network using the weights and bias tensors activation(torch.sum(weights*features) + bias)
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs. Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error

```python
>> torch.mm(features, weights)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
 in ()
----> 1 torch.mm(features, weights)

RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```

As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work. **Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often. There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view). * `weights.reshape(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)` sometimes, and sometimes a clone, as in it copies the data to another part of memory. * `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch. * `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`. I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`. > **Exercise**: Calculate the output of our little network using matrix multiplication.
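As a quick illustration (an added sketch, not part of the original notebook), all three calls produce a `(5, 1)` shape on a throwaway tensor:

```python
import torch

w = torch.randn(1, 5)
print(w.view(5, 1).shape)     # new tensor, same data
print(w.reshape(5, 1).shape)  # like view, but may copy when needed
w_clone = w.clone()
w_clone.resize_(5, 1)         # in-place: note the trailing underscore
print(w_clone.shape)
```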
## Calculate the output of this network using matrix multiplication torch.matmul(features,weights.reshape(5,1))+bias
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
Stack them up! That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix. The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated $$\vec{h} = [h_1 \, h_2] = \begin{bmatrix}x_1 \, x_2 \cdots \, x_n\end{bmatrix}\cdot \begin{bmatrix} w_{11} & w_{12} \\ w_{21} &w_{22} \\ \vdots &\vdots \\ w_{n1} &w_{n2}\end{bmatrix}$$ The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply $$y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)$$
### Generate some data
torch.manual_seed(7)  # Set the random seed so things are predictable

# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]  # Number of input units, must match number of input features
n_hidden = 2                 # Number of hidden units
n_output = 1                 # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
## Your solution here
activation(torch.matmul(activation(torch.matmul(features, W1) + B1), W2) + B2)
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
If you did this correctly, you should see the output `tensor([[ 0.3171]])`. The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases parameters. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions. Numpy to Torch and back. Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
import numpy as np

a = np.random.rand(4, 3)
a
b = torch.from_numpy(a)
b
b.numpy()
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)

# Numpy array matches new values from Tensor
a
_____no_output_____
MIT
intro-to-pytorch/Part 1 - Tensors in PyTorch (Exercises).ipynb
Yasel-Garces/deep-learning-v2-pytorch
Passive Membrane Tutorial This is a tutorial designed to allow users to explore the passive responses of neuron membrane potentials and how they change under various conditions such as current injection, ion concentration (both inside and outside the cell), and changes in membrane capacitance and passive conductances. Written by Varun Saravanan; February 2018. All units are in SI units. Parameters:
dt = 1e-4     # Integration time step. Reduce if you encounter NaN errors.
t_sim = 0.5   # Total time plotted. Increase as desired.

Na_in = 13    # Sodium ion concentration inside the cell. Default = 13 (in mM)
Na_out = 120  # Sodium ion concentration outside the cell. Default = 120 (in mM)
K_in = 140    # Potassium ion concentration inside the cell. Default = 140 (in mM)
K_out = 8     # Potassium ion concentration outside the cell. Default = 8 (in mM)

Cm = 1e-7     # Membrane capacitance. Default = 0.1 microF.
gNa = 5e-7    # Passive sodium conductance. Default = 0.5 microS.
gK = 1e-5     # Passive potassium conductance. Default = 10 microS.
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Nernst Potential Equations:
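For reference, the Nernst potential of an ion $X$ with valence $z$ is

$$E_X = \frac{RT}{zF}\ln\frac{[X]_{\text{out}}}{[X]_{\text{in}}} \approx 0.058\,\log_{10}\frac{[X]_{\text{out}}}{[X]_{\text{in}}}\ \mathrm{V},$$

where the $0.058\ \mathrm{V}$ per decade prefactor corresponds to $(RT/F)\ln 10$ for $z = +1$ near room temperature. The code below writes the same expression as $-0.058\,\log_{10}([X]_{\text{in}}/[X]_{\text{out}})$, which is identical after flipping the ratio.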
import math as ma

Ena = -0.058 * ma.log10(Na_in / Na_out)
Ek = -0.058 * ma.log10(K_in / K_out)
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
If you wish to use pre-determined ENa and EK values, set them here and convert this cell into code from Markdown: Ena = ??; Ek = ??;
import numpy as np

niter = int(t_sim // dt)  # Total number of integration steps (constant).

# Output variables:
Vm = np.zeros(niter)
Ie = np.zeros(niter)

# Starting values: You can change the initial conditions of each simulation here:
Vm[0] = -0.070
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Current Injection
I_inj = -5e-8    # Current amplitude. Default = -50 nA.
t_start = 0.150  # Start time of current injection.
t_end = 0.350    # End time of current injection.

Ie[int(t_start // dt):int(t_end // dt)] = I_inj
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Calculation - do the actual computation here:
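The loop below is a forward-Euler discretization of the passive membrane equation:

$$C_m \frac{dV_m}{dt} = I_e - g_{Na}(V_m - E_{Na}) - g_K(V_m - E_K),\qquad V_m[i+1] = V_m[i] + \frac{\Delta t}{C_m}\left(I_e[i] - g_{Na}(V_m[i] - E_{Na}) - g_K(V_m[i] - E_K)\right).$$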
# Integration steps - do not change:
for i in np.arange(niter - 1):
    Vm[i + 1] = Vm[i] + dt / Cm * (Ie[i] - gNa * (Vm[i] - Ena) - gK * (Vm[i] - Ek))
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Plot results
import matplotlib.pyplot as plt
%matplotlib notebook

plt.figure()
t = np.arange(niter) * dt
plt.plot(t, Vm)
plt.xlabel('Time in s')
plt.ylabel('Membrane Voltage in V')
_____no_output_____
MIT
Passive_Membrane_tutorial.ipynb
zbpvarun/Neuron
Circuit visualize. This document visualizes the quantum circuits available in scikit-qulacs. scikit-qulacs currently provides the following quantum circuits: - create_qcl_ansatz(n_qubit: int, c_depth: int, time_step: float, seed=None): [arXiv:1803.00745](https://arxiv.org/abs/1803.00745) - create_farhi_neven_ansatz(n_qubit: int, c_depth: int, seed: Optional[int] = None): [arXiv:1802.06002](https://arxiv.org/pdf/1802.06002) - create_ibm_embedding_circuit(n_qubit: int): [arXiv:1804.11326](https://arxiv.org/abs/1804.11326) - create_shirai_ansatz(n_qubit: int, c_depth: int = 5, seed: int = 0): [arXiv:2111.02951](http://arxiv.org/abs/2111.02951) Note: minor details may differ. - create_npqcd_ansatz(n_qubit: int, c_depth: int, c: float = 0.1): [arXiv:2108.01039](https://arxiv.org/abs/2108.01039) - create_yzcx_ansatz(n_qubit: int, c_depth: int = 4, c: float = 0.1, seed: int = 9): [arXiv:2108.01039](https://arxiv.org/abs/2108.01039) - create_qcnn_ansatz(n_qubit: int, seed: int = 0): Creates the circuit used in https://www.tensorflow.org/quantum/tutorials/qcnn?hl=en, Section 1. To make the circuits easier to read, the parameter values are set smaller than usual. The quantum circuits are visualized with [qulacs-visualizer](https://github.com/Qulacs-Osaka/qulacs-visualizer), which can be installed with pip: ```bash pip install qulacsvis``` qcl_ansatz create_qcl_ansatz( n_qubit: int, c_depth: int, time_step: float = 0.5, seed: Optional[int] = None) [arXiv:1803.00745](https://arxiv.org/abs/1803.00745)
from skqulacs.circuit.pre_defined import create_qcl_ansatz
from qulacsvis import circuit_drawer

n_qubit = 4
c_depth = 2
time_step = 1.
ansatz = create_qcl_ansatz(n_qubit, c_depth, time_step)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
farhi_neven_ansatz create_farhi_neven_ansatz( n_qubit: int, c_depth: int, seed: Optional[int] = None) [arXiv:1802.06002](https://arxiv.org/abs/1802.06002)
from skqulacs.circuit.pre_defined import create_farhi_neven_ansatz

n_qubit = 4
c_depth = 2
ansatz = create_farhi_neven_ansatz(n_qubit, c_depth)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
farhi_neven_watle_ansatz A version of farhi_neven_ansatz improved by @WATLE. create_farhi_neven_watle_ansatz( n_qubit: int, c_depth: int, seed: Optional[int] = None)
from skqulacs.circuit.pre_defined import create_farhi_neven_watle_ansatz

n_qubit = 4
c_depth = 2
ansatz = create_farhi_neven_watle_ansatz(n_qubit, c_depth)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
ibm_embedding_circuit create_ibm_embedding_circuit(n_qubit: int) [arXiv:1804.11326](https://arxiv.org/abs/1804.11326)
from skqulacs.circuit.pre_defined import create_ibm_embedding_circuit

n_qubit = 4
circuit = create_ibm_embedding_circuit(n_qubit)
circuit_drawer(circuit._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
shirai_ansatz create_shirai_ansatz( n_qubit: int, c_depth: int = 5, seed: int = 0) [arXiv:2111.02951](https://arxiv.org/abs/2111.02951)
from skqulacs.circuit.pre_defined import create_shirai_ansatz

n_qubit = 4
c_depth = 2
ansatz = create_shirai_ansatz(n_qubit, c_depth)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
npqcd_ansatz create_npqcd_ansatz( n_qubit: int, c_depth: int, c: float = 0.1) [arXiv:2108.01039](https://arxiv.org/abs/2108.01039)
from skqulacs.circuit.pre_defined import create_npqc_ansatz

n_qubit = 4
c_depth = 2
ansatz = create_npqc_ansatz(n_qubit, c_depth)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
yzcx_ansatz create_yzcx_ansatz( n_qubit: int, c_depth: int = 4, c: float = 0.1, seed: int = 9) [arXiv:2108.01039](https://arxiv.org/abs/2108.01039)
from skqulacs.circuit.pre_defined import create_yzcx_ansatz

n_qubit = 4
c_depth = 2
ansatz = create_yzcx_ansatz(n_qubit, c_depth)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
qcnn_ansatz create_qcnn_ansatz(n_qubit: int, seed: int = 0) Creates the circuit used in https://www.tensorflow.org/quantum/tutorials/qcnn?hl=en, Section 1.
from skqulacs.circuit.pre_defined import create_qcnn_ansatz

n_qubit = 8
ansatz = create_qcnn_ansatz(n_qubit)
circuit_drawer(ansatz._circuit, "latex")
_____no_output_____
MIT
doc/source/notebooks/circuit_visualize.ipynb
forest1040/scikit-qulacs
**Downloading data from Google Drive**
!pip install -U -q PyDrive

import os
import zipfile

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from google.colab import drive  # note: `drive` is re-bound to a GoogleDrive client below
from oauth2client.client import GoogleCredentials

# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# choose a local (colab) directory to store the data.
local_download_path = os.path.expanduser('content/data')
try:
    os.makedirs(local_download_path)
except:
    pass

# 2. Auto-iterate using the query syntax
# https://developers.google.com/drive/v2/web/search-parameters
# list of files in Google Drive folder
file_list = drive.ListFile(
    {'q': "'1MsgfnmWPV-Nod0s1ZejYfsvbIwRMKZg_' in parents"}).GetList()

# find data in .zip format and save it
for f in file_list:
    if f['title'] == "severstal-steel-defect-detection.zip":
        fname = os.path.join(local_download_path, f['title'])
        f_ = drive.CreateFile({'id': f['id']})
        f_.GetContentFile(fname)

        # extract files from zip to "extracted/" directory, this directory will be
        # used for further data modelling
        zip_ref = zipfile.ZipFile(fname, 'r')
        zip_ref.extractall(os.path.join(local_download_path, "extracted"))
        zip_ref.close()
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Define working directories
import pandas as pd  # added: needed for read_csv below

working_dir = os.path.join(local_download_path, "extracted")

# defining working folders and labels
train_images_folder = os.path.join(working_dir, "train_images")
train_labels_file = os.path.join(working_dir, "train.csv")

test_images_folder = os.path.join(working_dir, "test_images")
test_labels_file = os.path.join(working_dir, "sample_submission.csv")

train_labels = pd.read_csv(train_labels_file)
test_labels = pd.read_csv(test_labels_file)
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
**Data preprocessing** Drop duplicates
train_labels.drop_duplicates("ImageId", keep="last", inplace=True)
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Add to the train dataframe all non-defective images, setting None as the value of the EncodedPixels column
images = os.listdir(train_images_folder)
present_rows = train_labels.ImageId.tolist()

for img in images:
    if img not in present_rows:
        train_labels = train_labels.append({"ImageId": img,
                                            "ClassId": 1,
                                            "EncodedPixels": None},
                                           ignore_index=True)
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Change the EncodedPixels column by setting 1 if the image is defective and 0 otherwise
for index, row in train_labels.iterrows():
    train_labels.at[index, "EncodedPixels"] = int(train_labels.at[index, "EncodedPixels"] is not None)
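The row-wise loop works, but pandas can express the same binarization as a single vectorized step; a possible equivalent (sketch):

```python
# 1 where a mask was present, 0 where EncodedPixels is None/NaN
train_labels["EncodedPixels"] = train_labels["EncodedPixels"].notnull().astype(int)
```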
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
In total we got 12,568 training samples
train_labels
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Create the data flow using `ImageDataGenerator`; see an example here: https://medium.com/@vijayabhaskar96/tutorial-on-keras-flow-from-dataframe-1fd4493d237c
from keras_preprocessing.image import ImageDataGenerator

def create_datagen():
    return ImageDataGenerator(
        fill_mode='constant',
        cval=0.,
        rotation_range=10,
        height_shift_range=0.1,
        width_shift_range=0.1,
        vertical_flip=True,
        rescale=1./255,
        zoom_range=0.1,
        horizontal_flip=True,
        validation_split=0.15
    )

def create_test_gen():
    return ImageDataGenerator(rescale=1/255.).flow_from_dataframe(
        dataframe=test_labels,
        directory=test_images_folder,
        x_col='ImageId',
        class_mode=None,
        target_size=(256, 512),
        batch_size=1,
        shuffle=False
    )

def create_flow(datagen, subset_name):
    return datagen.flow_from_dataframe(
        dataframe=train_labels,
        directory=train_images_folder,
        x_col='ImageId',
        y_col='EncodedPixels',
        class_mode='other',
        target_size=(256, 512),
        batch_size=32,
        subset=subset_name
    )

data_generator = create_datagen()
train_gen = create_flow(data_generator, 'training')
val_gen = create_flow(data_generator, 'validation')
test_gen = create_test_gen()
Found 10683 validated image filenames. Found 1885 validated image filenames. Found 5506 validated image filenames.
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
**Building and fiting model**
from keras.applications import InceptionResNetV2
from keras.models import Model
from keras.layers.core import Dense
from keras.layers.pooling import GlobalAveragePooling2D
from keras import optimizers

model = InceptionResNetV2(weights='imagenet', input_shape=(256, 512, 3), include_top=False)
#model.load_weights('/kaggle/input/inceptionresnetv2/inception_resent_v2_weights_tf_dim_ordering_tf_kernels_notop.h5')
model.trainable = False

x = model.output
x = GlobalAveragePooling2D()(x)
x = Dense(128, activation='relu')(x)
x = Dense(64, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)  # final layer binary classifier

model_binary = Model(inputs=model.input, outputs=out)
model_binary.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3657: The name tf.log is deprecated. Please use tf.math.log instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Fitting the data
STEP_SIZE_TRAIN = train_gen.n // train_gen.batch_size
STEP_SIZE_VALID = val_gen.n // val_gen.batch_size
STEP_SIZE_TEST = test_gen.n // test_gen.batch_size

model_binary.fit_generator(generator=train_gen,
                           steps_per_epoch=STEP_SIZE_TRAIN,
                           validation_data=val_gen,
                           validation_steps=STEP_SIZE_VALID,
                           epochs=15)
Epoch 1/15 333/333 [==============================] - 637s 2s/step - loss: 0.5724 - acc: 0.7208 - val_loss: 1.1674 - val_acc: 0.3987 Epoch 2/15 333/333 [==============================] - 632s 2s/step - loss: 0.3274 - acc: 0.8580 - val_loss: 0.6656 - val_acc: 0.7275 Epoch 3/15 333/333 [==============================] - 621s 2s/step - loss: 0.2728 - acc: 0.8835 - val_loss: 0.6790 - val_acc: 0.7636 Epoch 4/15 333/333 [==============================] - 621s 2s/step - loss: 0.2439 - acc: 0.8963 - val_loss: 0.2292 - val_acc: 0.9007 Epoch 5/15 333/333 [==============================] - 621s 2s/step - loss: 0.2275 - acc: 0.9085 - val_loss: 0.3075 - val_acc: 0.8732 Epoch 6/15 333/333 [==============================] - 618s 2s/step - loss: 0.2094 - acc: 0.9168 - val_loss: 0.3808 - val_acc: 0.8381 Epoch 7/15 333/333 [==============================] - 645s 2s/step - loss: 0.2031 - acc: 0.9174 - val_loss: 0.1383 - val_acc: 0.9369 Epoch 8/15 333/333 [==============================] - 644s 2s/step - loss: 0.1876 - acc: 0.9245 - val_loss: 0.3507 - val_acc: 0.8392 Epoch 9/15 333/333 [==============================] - 644s 2s/step - loss: 0.1842 - acc: 0.9241 - val_loss: 0.5051 - val_acc: 0.7922 Epoch 10/15 333/333 [==============================] - 635s 2s/step - loss: 0.1767 - acc: 0.9278 - val_loss: 0.2712 - val_acc: 0.8931 Epoch 11/15 333/333 [==============================] - 634s 2s/step - loss: 0.1626 - acc: 0.9380 - val_loss: 0.5116 - val_acc: 0.8365 Epoch 12/15 333/333 [==============================] - 634s 2s/step - loss: 0.1593 - acc: 0.9355 - val_loss: 0.2529 - val_acc: 0.9045 Epoch 13/15 333/333 [==============================] - 630s 2s/step - loss: 0.1588 - acc: 0.9359 - val_loss: 0.4838 - val_acc: 0.7820 Epoch 14/15 333/333 [==============================] - 630s 2s/step - loss: 0.1444 - acc: 0.9439 - val_loss: 0.0859 - val_acc: 0.9628 Epoch 15/15 333/333 [==============================] - 621s 2s/step - loss: 0.1493 - acc: 0.9434 - val_loss: 0.1346 - val_acc: 0.9487
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
Predicting test labels
test_gen.reset()
pred = model_binary.predict_generator(test_gen,
                                      steps=STEP_SIZE_TEST,
                                      verbose=1)
5506/5506 [==============================] - 211s 38ms/step
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
**Saving results** Create a dataframe with probabilities of having defects for each image
import numpy as np  # added: needed for np.array below

ids = np.array(test_labels.ImageId)
pred = np.array([p[0] for p in pred])

probabilities_df = pd.DataFrame({'ImageId': ids, 'Probability': pred},
                                columns=['ImageId', 'Probability'])
probabilities_df

from google.colab import files
# the original cell wrote `df.to_csv('filename.csv')`; using the dataframe and
# filename actually produced above so the later copy command finds the file
probabilities_df.to_csv('defect_present_probabilities.csv')
files.download('defect_present_probabilities.csv')

from google.colab import drive  # re-import: `drive` was re-bound to a GoogleDrive client earlier
drive.mount('/content/gdrive')
!cp /content/defect_present_probabilities.csv gdrive/My\ Drive
_____no_output_____
MIT
Defect_check.ipynb
franchukpetro/steel_defect_detection
For Loop
week = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
for x in week:
    print(x)
Sunday Monday Tuesday Wednesday Thursday Friday Saturday
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The Break Statement
for x in week:
    print(x)
    if x == "Thursday":
        break

for x in week:
    if x == "Thursday":
        break
    print(x)
Sunday Monday Tuesday Wednesday
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
Looping through string
for x in "Programmming with python": print (x)
P r o g r a m m m i n g w i t h p y t h o n
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The range function
for x in range(10): print (x)
0 1 2 3 4 5 6 7 8 9
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
Nested Loops
adjective = ["red", "big", "tasty"]
fruits = ["apple", "banana", "cherry"]

for x in adjective:
    for y in fruits:
        print(x, y)
red apple red banana red cherry big apple big banana big cherry tasty apple tasty banana tasty cherry
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
While loop
i = 10
while i > 6:
    print(i)
    i -= 1  # assignment operator for subtraction: i = i - 1
10 9 8 7
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The break statement
i = 10
while i > 6:
    print(i)
    if i == 8:
        break
    i -= 1
10 9 8
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The continue statement
i = 10
while i > 6:
    i = i - 1
    if i == 8:
        continue
    print(i)
9 7 6
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
The else statement
i = 10
while i > 6:
    i = i - 1
    print(i)
else:
    print("i is no longer greater than 6")
9 8 7 6 i is no longer greater than 6
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
Application 1
# WHILE LOOP
x = 0
while x <= 10:
    print("Value", x)
    x += 1

# FOR LOOP
value = ["Value 1", "Value 2", "Value 3", "Value 4", "Value 5", "Value 6", "Value 7", "Value 8", "Value 9", "Value 10"]
for x in value:
    print(x)
Value 1 Value 2 Value 3 Value 4 Value 5 Value 6 Value 7 Value 8 Value 9 Value 10
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
Application 2
i = 20
while i > 4:
    i -= 1
    print(i)
else:
    print('i is no longer greater than 3')
19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 i is no longer greater than 3
Apache-2.0
Loop_Statement.ipynb
cocolleen/CPEN-21A-CPE1-2
PLOT FOR FOLLOWER
# THE FOLLOWER'S VALUE AND NAME
plt.plot(markFollower[:3], [1, 1, 0])
plt.suptitle("FOLLOWER - NANO")
plt.show()

plt.plot(markFollower[1:5], [0, 1, 1, 0])
plt.suptitle("FOLLOWER - MICRO")
plt.show()

plt.plot(markFollower[3:], [0, 1, 1])
plt.suptitle("FOLLOWER - MEDIUM")
plt.show()

plt.plot(markFollower[:3], [1, 1, 0], label="NANO")
plt.plot(markFollower[1:5], [0, 1, 1, 0], label="MICRO")
plt.plot(markFollower[3:], [0, 1, 1], label="MEDIUM")
plt.suptitle("FOLLOWER")
plt.legend()  # show the class labels in the combined plot
plt.show()
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
PLOT FOR LINGUISTIC
# THE LINGUISTIC'S VALUE AND NAME
markEngagement = [0, 0.6, 1.7, 4.7, 6.9, 8, 10]

plt.plot(markEngagement[:3], [1, 1, 0])
plt.suptitle("ENGAGEMENT - NANO")
plt.show()

plt.plot(markEngagement[1:4], [0, 1, 0])
plt.suptitle("ENGAGEMENT - MICRO")
plt.show()

plt.plot(markEngagement[2:6], [0, 1, 1, 0])
plt.suptitle("ENGAGEMENT - MEDIUM")
plt.show()

plt.plot(markEngagement[4:], [0, 1, 1])
plt.suptitle("ENGAGEMENT - MEGA")
plt.show()

plt.plot(markEngagement[:3], [1, 1, 0], label="NANO")
plt.plot(markEngagement[1:4], [0, 1, 0], label="MICRO")
plt.plot(markEngagement[2:6], [0, 1, 1, 0], label="MEDIUM")
plt.plot(markEngagement[4:], [0, 1, 1], label="MEGA")
plt.suptitle("ENGAGEMENT")
plt.legend()  # show the class labels in the combined plot
plt.show()
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
Fuzzification
# FOLLOWER =========================================
# membership function
def fuzzyFollower(countFol):
    follower = []
    # STABLE GRAPH
    if (markFollower[0] <= countFol and countFol < markFollower[1]):
        scoreFuzzy = 1
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[0]))
    # GRAPH DOWN
    elif (markFollower[1] <= countFol and countFol <= markFollower[2]):
        scoreFuzzy = np.absolute((markFollower[2] - countFol) / (markFollower[2] - markFollower[1]))
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[0]))

    # MICRO
    # GRAPH UP
    if (markFollower[1] <= countFol and countFol <= markFollower[2]):
        scoreFuzzy = 1 - np.absolute((markFollower[2] - countFol) / (markFollower[2] - markFollower[1]))
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))
    # STABLE GRAPH
    elif (markFollower[2] < countFol and countFol < markFollower[3]):
        scoreFuzzy = 1
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))
    # GRAPH DOWN
    elif (markFollower[3] <= countFol and countFol <= markFollower[4]):
        scoreFuzzy = np.absolute((markFollower[4] - countFol) / (markFollower[4] - markFollower[3]))
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[1]))

    # MEDIUM
    # GRAPH UP
    if (markFollower[3] <= countFol and countFol <= markFollower[4]):
        scoreFuzzy = 1 - scoreFuzzy
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[2]))
    # STABLE GRAPH
    elif (countFol > markFollower[4]):
        scoreFuzzy = 1
        follower.append(Datafuzzy(scoreFuzzy, lingFollower[2]))

    return follower


# ENGAGEMENT RATE =========================================
# membership function
def fuzzyEngagement(countEng):
    engagement = []
    # STABLE GRAPH
    if (markEngagement[0] < countEng and countEng < markEngagement[1]):
        scoreFuzzy = 1
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[0]))
    # GRAPH DOWN
    elif (markEngagement[1] <= countEng and countEng < markEngagement[2]):
        scoreFuzzy = np.absolute((markEngagement[2] - countEng) / (markEngagement[2] - markEngagement[1]))
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[0]))

    # MICRO
    # THE GRAPH GOES UP
    if (markEngagement[1] <= countEng and countEng < markEngagement[2]):
        scoreFuzzy = 1 - scoreFuzzy
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[1]))
    # GRAPH DOWN
    elif (markEngagement[2] <= countEng and countEng < markEngagement[3]):
        scoreFuzzy = np.absolute((markEngagement[3] - countEng) / (markEngagement[3] - markEngagement[2]))
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[1]))

    # MEDIUM
    # THE GRAPH GOES UP
    if (markEngagement[2] <= countEng and countEng < markEngagement[3]):
        scoreFuzzy = 1 - scoreFuzzy
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))
    # STABLE GRAPH
    elif (markEngagement[3] <= countEng and countEng < markEngagement[4]):
        scoreFuzzy = 1
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))
    # GRAPH DOWN
    elif (markEngagement[4] <= countEng and countEng < markEngagement[5]):
        scoreFuzzy = np.absolute((markEngagement[5] - countEng) / (markEngagement[5] - markEngagement[4]))
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[2]))

    # MEGA
    # THE GRAPH GOES UP
    if (markEngagement[4] <= countEng and countEng < markEngagement[5]):
        scoreFuzzy = 1 - scoreFuzzy
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[3]))
    # STABLE GRAPH
    elif (countEng > markEngagement[5]):
        scoreFuzzy = 1
        engagement.append(Datafuzzy(scoreFuzzy, lingEngagement[3]))

    return engagement
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
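The rising/flat/falling segments hand-coded above are trapezoidal membership functions. For reference, the same shape can be expressed with one reusable helper; this is a sketch of a hypothetical utility, not part of the original notebook, and it assumes `a < b` and `c < d`.

```python
def trapezoid(x, a, b, c, d):
    # Trapezoidal membership: rises on [a, b], equals 1 on [b, c], falls on [c, d]
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)  # rising edge
    return (d - x) / (d - c)      # falling edge

# e.g. the MEDIUM engagement class, using the marks defined above:
# trapezoid(countEng, markEngagement[2], markEngagement[3], markEngagement[4], markEngagement[5])
```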
Inference
def cekDecission(follower, engagement):
    temp_yes = []
    temp_no = []
    if (follower.decission == "NANO"):
        # Get the minimal fuzzy score for every decision, NO or YES
        temp_yes.append(min(follower.score, engagement[0].score))
        # if we get 2 fuzzy Engagement data points
        if (len(engagement) > 1):
            temp_yes.append(min(follower.score, engagement[1].score))
    elif (follower.decission == "MICRO"):
        if (engagement[0].decission == "NANO"):
            temp_no.append(min(follower.score, engagement[0].score))
        else:
            temp_yes.append(min(follower.score, engagement[0].score))
        if (len(engagement) > 1):
            if (engagement[1].decission == "NANO"):
                temp_no.append(min(follower.score, engagement[1].score))
            else:
                temp_yes.append(min(follower.score, engagement[1].score))
    else:
        if (engagement[0].decission == "NANO" or engagement[0].decission == "MICRO"):
            temp_no.append(min(follower.score, engagement[0].score))
        else:
            temp_yes.append(min(follower.score, engagement[0].score))
        # if we get 2 fuzzy engagement data points
        if (len(engagement) > 1):
            if (engagement[1].decission == "NANO" or engagement[1].decission == "MICRO"):
                temp_no.append(min(follower.score, engagement[1].score))
            else:
                temp_yes.append(min(follower.score, engagement[1].score))
    return temp_yes, temp_no


# Fuzzy Rules
def fuzzyRules(follower, engagement):
    temp_yes = []
    temp_no = []
    temp_y = []
    temp_n = []
    temp_yes, temp_no = cekDecission(follower[0], engagement)
    # if we get 2 fuzzy Follower data points
    if (len(follower) > 1):
        temp_y, temp_n = cekDecission(follower[1], engagement)
        temp_yes += temp_y
        temp_no += temp_n
    return temp_yes, temp_no
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
Result
# Result
def getResult(resultYes, resultNo):
    yes = 0
    no = 0
    if (resultNo):
        no = max(resultNo)
    if (resultYes):
        yes = max(resultYes)
    return yes, no
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
Defuzzification
def finalDecission(yes, no):
    mamdani = (((10 + 20 + 30 + 40 + 50 + 60 + 70) * no) + ((80 + 90 + 100) * yes)) / ((7 * no) + (yes * 3))
    return mamdani
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
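A quick worked example of the Mamdani weighted average above: with yes = 0.7 and no = 0.3, the crisp score is ((10 + 20 + ... + 70) * 0.3 + (80 + 90 + 100) * 0.7) / (7 * 0.3 + 3 * 0.7) = (84 + 189) / 4.2 = 65.0.

```python
# Sanity check of the defuzzification step with assumed membership values
print(finalDecission(yes=0.7, no=0.3))  # 65.0
```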
Main Function
def mainFunction(followerCount, engagementRate):
    follower = fuzzyFollower(followerCount)
    engagement = fuzzyEngagement(engagementRate)
    resultYes, resultNo = fuzzyRules(follower, engagement)
    yes, no = getResult(resultYes, resultNo)
    return finalDecission(yes, no)

data = pd.read_csv('influencers.csv')
data

hasil = []
result = []
idd = []
for i in range(len(data)):
    # Insert the ID and the score into the lists
    hasil.append([data.loc[i, 'id'], mainFunction(data.loc[i, 'followerCount'], data.loc[i, 'engagementRate'])])
    result.append([data.loc[i, 'id'], (data.loc[i, 'followerCount'] * data.loc[i, 'engagementRate'] / 100)])

# Sort the lists by fuzzy score, in decreasing order
hasil.sort(key=lambda x: x[1], reverse=True)
result.sort(key=lambda x: x[1], reverse=True)
result = result[:20]
hasil = hasil[:20]
idd = [row[0] for row in result]
hasil
idd

def cekAkurasi(hasil, result):
    count = 0
    for i in range(len(hasil)):
        if (hasil[i][0] in idd):
            count += 1
    return count

print("AKURASI : ", cekAkurasi(hasil, result) / 20 * 100, " %")

chosen = pd.DataFrame(hasil[:20], columns=['ID', 'Score'])
chosen
chosen.to_csv('choosen.csv')
_____no_output_____
MIT
.ipynb_checkpoints/Fuzzy - Copy-checkpoint.ipynb
evanezcent/Fuzzing
Road Following - Live demo (TensorRT) with collision avoidance

Collision avoidance has been added: a ResNet18 TRT model's threshold between "free" and "blocked" acts as the controller. The action is just a pause, either for as long as the object is in front or for a fixed time. An increase in `speed_gain` requires some small increase in `steer_gain`. Once a slider is blue (mouse click), the left/right arrow keys can be used.

10/11/2020 TensorRT
import torch

device = torch.device('cuda')
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Load the TRT optimized models by executing the cell below
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load('best_steering_model_xy_trt.pth'))  # well-trained road following model

model_trt_collision = TRTModule()
model_trt_collision.load_state_dict(torch.load('best_model_trt.pth'))  # anti-collision model trained with one object as "blocked" and street signals (ground, strips) as "free"
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Creating the Pre-Processing Function

We have now loaded our model, but there's a slight issue: the format that we trained our model on doesn't exactly match the format of the camera, so we need to do some preprocessing. This involves the following steps:

1. Convert from HWC layout to CHW layout
2. Normalize using the same parameters as we did during training (our camera provides values in the [0, 255] range and training loaded images in the [0, 1] range, so we need to scale by 255.0)
3. Transfer the data from CPU memory to GPU memory
4. Add a batch dimension
import torchvision.transforms as transforms
import torch.nn.functional as F
import cv2
import PIL.Image
import numpy as np

mean = torch.Tensor([0.485, 0.456, 0.406]).cuda().half()
std = torch.Tensor([0.229, 0.224, 0.225]).cuda().half()

def preprocess(image):
    image = PIL.Image.fromarray(image)
    image = transforms.functional.to_tensor(image).to(device).half()
    image.sub_(mean[:, None, None]).div_(std[:, None, None])
    return image[None, ...]
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
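As a quick sanity check of the function above, one can push a dummy frame through it and inspect the result; this sketch assumes the default 224x224 JetBot camera resolution and the CUDA setup from the earlier cells.

```python
import numpy as np

# A black 224x224 BGR8 frame, the format the camera callback delivers (assumed)
dummy_frame = np.zeros((224, 224, 3), dtype=np.uint8)
x = preprocess(dummy_frame)
print(x.shape, x.dtype)  # expected: torch.Size([1, 3, 224, 224]) torch.float16
```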
Awesome! We've now defined our pre-processing function which can convert images from the camera format to the neural network input format.Now, let's start and display our camera. You should be pretty familiar with this by now.
from IPython.display import display
import ipywidgets
import traitlets
from jetbot import Camera, bgr8_to_jpeg

camera = Camera()

import IPython

image_widget = ipywidgets.Image()

traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
We'll also create our robot instance which we'll need to drive the motors.
from jetbot import Robot

robot = Robot()
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Now, we will define sliders to control the JetBot.

> Note: We have initialized the slider values to the best known configurations; however, these might not work for your dataset, so please increase or decrease the sliders according to your setup and environment.

1. Speed Control (`speed_gain_slider`): To start your JetBot, increase `speed_gain_slider`
2. Steering Gain Control (`steering_gain_slider`): If you see the JetBot wobbling, reduce `steering_gain_slider` until it is smooth
3. Steering Bias Control (`steering_bias_slider`): If you see the JetBot biased towards the extreme right or extreme left side of the track, adjust this slider until the JetBot starts following the line or track in the center. This accounts for motor biases as well as camera offsets

> Note: You should play around with the above-mentioned sliders at lower speed to get smooth JetBot road following behavior.
speed_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, description='speed gain')
steering_gain_slider = ipywidgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.10, description='steering gain')
steering_dgain_slider = ipywidgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.23, description='steering kd')
steering_bias_slider = ipywidgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0, description='steering bias')

display(speed_gain_slider, steering_gain_slider, steering_dgain_slider, steering_bias_slider)

# anti collision ---------------------------------------------------------------------------------------------------
blocked_slider = ipywidgets.FloatSlider(description='blocked', min=0.0, max=1.0, orientation='horizontal')
stopduration_slider = ipywidgets.IntSlider(min=1, max=1000, step=1, value=10, description='Manu. time stop')  # anti-collision stop time
# set the value according to the common threshold, e.g. 0.8
block_threshold = ipywidgets.FloatSlider(min=0, max=1.2, step=0.01, value=0.8, description='Manu. bl threshold')  # anti-collision block probability

display(image_widget)
d2 = IPython.display.display("", display_id=2)
display(ipywidgets.HBox([blocked_slider, block_threshold, stopduration_slider]))
# TIME STOP slider is to manually select the time-for-stop when an object has been discovered

#x_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='x')
#y_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='y')
#steering_slider = ipywidgets.FloatSlider(min=-1.0, max=1.0, description='steering')
#speed_slider = ipywidgets.FloatSlider(min=0, max=1.0, orientation='vertical', description='speed')
#display(ipywidgets.HBox([y_slider, speed_slider, x_slider, steering_slider]))
# sliders take time and reduce FPS by a couple of frames per second
# observation sliders only

from threading import Thread

def display_class_probability(prob_blocked):
    global blocked_slider
    blocked_slider.value = prob_blocked
    return

def model_new(image_preproc):
    global model_trt_collision, angle_last
    xy = model_trt(image_preproc).detach().float().cpu().numpy().flatten()
    x = xy[0]
    y = (0.5 - xy[1]) / 2.0
    angle = math.atan2(x, y)
    pid = angle * steer_gain + (angle - angle_last) * steer_dgain
    steer_val = pid + steer_bias
    angle_last = angle
    robot.left_motor.value = max(min(speed_value + steer_val, 1.0), 0.0)
    robot.right_motor.value = max(min(speed_value - steer_val, 1.0), 0.0)
    return
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Next, we'll create a function that will get called whenever the camera's value changes. This function will do the following steps:

1. Pre-process the camera image
2. Execute the neural network
3. Compute the approximate steering value
4. Control the motors using proportional / derivative control (PD)
import time
import os
import math

angle = 0.0
angle_last = 0.0
angle_last_block = 0
count_stops = 0
go_on = 1
stop_time = 20  # number of frames to remain stopped
x = 0.0
y = 0.0
speed_value = speed_gain_slider.value
t1 = 0
road_following = 1
speed_value_block = 0

def execute(change):
    global angle, angle_last, angle_last_block, blocked_slider, robot, count_stops, stop_time, go_on, x, y, block_threshold
    global speed_value, steer_gain, steer_dgain, steer_bias, t1, model_trt, model_trt_collision, road_following, speed_value_block

    steer_gain = steering_gain_slider.value
    steer_dgain = steering_dgain_slider.value
    steer_bias = steering_bias_slider.value

    image_preproc = preprocess(change['new']).to(device)

    # anti_collision model -----
    prob_blocked = float(F.softmax(model_trt_collision(image_preproc), dim=1).flatten()[0])

    # blocked_slider.value = prob_blocked  # display of the detection probability value
    t = Thread(target=display_class_probability, args=(prob_blocked,), daemon=False)
    t.start()

    stop_time = stopduration_slider.value

    if go_on == 1:
        if prob_blocked > block_threshold.value:  # threshold should be above 0.5
            # start of collision_avoidance
            count_stops += 1
            go_on = 2
            road_following = 2
            x = 0.0  # set steering zero
            y = 0    # set steering zero
            speed_value_block = 0  # set speed zero, or negative, or turn
            # anti_collision end -------
        else:
            # start of road following
            go_on = 1
            count_stops = 0
            speed_value = speed_gain_slider.value
            t = Thread(target=model_new, args=(image_preproc,), daemon=True)
            t.start()
            road_following = 1
    else:
        count_stops += 1
        if count_stops < stop_time:
            go_on = 2
        else:
            go_on = 1
            count_stops = 0
            road_following = 1

    # x_slider.value = x  # takes time, ~4 FPS
    # y_slider.value = y  # y_speed

    if road_following > 1:
        angle_block = math.atan2(x, y)
        pid = angle_block * steer_gain + (angle - angle_last) * steer_dgain
        steer_val_block = pid + steer_bias
        angle_last_block = angle_block
        robot.left_motor.value = max(min(speed_value_block + steer_val_block, 1.0), 0.0)
        robot.right_motor.value = max(min(speed_value_block - steer_val_block, 1.0), 0.0)

    t2 = time.time()
    s = f"""{int(1 / (t2 - t1))} FPS"""
    d2.update(IPython.display.HTML(s))
    t1 = time.time()

execute({'new': camera.value})
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
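For clarity, the PD steering law buried inside `execute()` can be isolated as below; the gains mirror the slider defaults above and are placeholders, not tuned values.

```python
import math

def pd_steering(x, y, angle_last, kp=0.10, kd=0.23):
    # P term reacts to the current heading error; D term damps its rate of change
    angle = math.atan2(x, y)
    steer = angle * kp + (angle - angle_last) * kd
    return steer, angle

steer, prev_angle = pd_steering(x=0.2, y=0.4, angle_last=0.0)
print(steer)  # ~0.153 for these sample model outputs
```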
Cool! We've created our neural network execution function, but now we need to attach it to the camera for processing. We accomplish that with the observe function.

> WARNING: This code will move the robot!! Please make sure your robot has clearance and that it is on the Lego or track you have collected data on. The road follower should work, but the neural network is only as good as the data it's trained on!
camera.observe(execute, names='value')
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Awesome! If your robot is plugged in, it should now be generating new commands with each new camera frame. You can now place the JetBot on the Lego or track you have collected data on and see whether it can follow the track. If you want to stop this behavior, you can unattach this callback by executing the code below.
import time

camera.unobserve(execute, names='value')
time.sleep(0.1)  # add a small sleep to make sure frames have finished processing
robot.stop()
camera.stop()
_____no_output_____
MIT
trt-Jetbot-RoadFollowing_with_CollisionRESNet_TRT.ipynb
tomMEM/Jetbot-Project
Elasticsearch in Colab

Had to install Elasticsearch in Colab for 'reasons', and this is the way it worked for me. Might be useful for someone else as well.

Works with 7.9.2. It could probably also run with 7.14.0, but I didn't have time to debug the issues. If you want, you can try it and just run the instance under the 'elasticsearch' user to get the proper error log.
# 7.9.1 works with ES 7.9.2
!pip install -Iv elasticsearch==7.9.1

# download ES 7.9.2 and extract
%%bash
wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz
wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.9.2-linux-x86_64.tar.gz.sha512
shasum -a 512 -c elasticsearch-oss-7.9.2-linux-x86_64.tar.gz.sha512
tar -xzf elasticsearch-oss-7.9.2-linux-x86_64.tar.gz

# create user elasticsearch and group elasticsearch, under which the ES instance will run
# ES can't run under root
!sudo useradd elasticsearch
!sudo grep elasticsearch /etc/passwd
!sudo groupadd elasticsearch
!sudo usermod -a -G elasticsearch elasticsearch
!grep elasticsearch /etc/group

# change the directory ownership to user:group
!sudo chown elasticsearch:elasticsearch -R elasticsearch-7.9.2

# run the ES instance as a daemon
%%bash --bg
sudo -H -u elasticsearch elasticsearch-7.9.2/bin/elasticsearch

# give it time to start up
import time
time.sleep(20)

# print the process
%%bash
ps -ef | grep elastic

# test the instance
%%bash
curl -sX GET "localhost:9200/"

# test the python client/lib
from elasticsearch import Elasticsearch
es = Elasticsearch()
es.ping()
/usr/local/lib/python3.7/dist-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.6) or chardet (3.0.4) doesn't match a supported version! RequestsDependencyWarning)
MIT
elasticsearch_install.ipynb
xSakix/AI_colan_notebooks
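To go one step beyond `ping()`, a small smoke test can index a document and search it back. The index name `demo` is arbitrary; the calls follow the 7.x Python client API.

```python
doc = {"title": "hello colab", "body": "elasticsearch is running"}
es.index(index="demo", id=1, body=doc)
es.indices.refresh(index="demo")  # make the document searchable immediately

res = es.search(index="demo", body={"query": {"match": {"body": "running"}}})
print(res["hits"]["total"]["value"])  # expected: 1
```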
Building a Bayesian Network
---
In this tutorial, we introduce how to build a **Bayesian (belief) network** based on domain knowledge of the problem.

If we build the Bayesian network in different ways, the built network can have different graphs and sizes, which can greatly affect the memory requirement and inference efficiency. To represent the size of the Bayesian network, we first introduce the **number of free parameters**.

Number of Free Parameters
---
The size of a Bayesian network includes the size of the graph and the probability tables of each node. Obviously, the probability tables dominate the graph, thus we focus on the size of the probability tables.

For the sake of convenience, we only consider **discrete** variables in the network, and the continuous variables will be discretised. Then, for each variable $X$ in the network, we have the following notations.

- $\Omega(X)$: the domain (set of possible values) of $X$
- $|\Omega(X)|$: the number of possible values of $X$
- $parents(X)$: the parents (direct causes) of $X$ in the network

For each variable $X$, the probability table contains the probabilities for $P(X\ |\ parents(X))$ for all possible $X$ values and $parents(X)$ values. Let's consider the following situations:

1. $X$ does not have any parent. In this case, the table stores $P(X)$. There are $|\Omega(X)|$ probabilities, each for a possible value of $X$. However, due to the [normalisation rule](https://github.com/meiyi1986/tutorials/blob/master/notebooks/reasoning-under-uncertainty-basics.ipynb), all the probabilities add up to 1. Thus, we need to store only $|\Omega(X)|-1$ probabilities, and the last probability can be calculated as ($1-$ the sum of the stored probabilities). Therefore, the probability table contains $|\Omega(X)|-1$ rows/probabilities.
2. $X$ has one parent $Y$. In this case, for each condition $y \in \Omega(Y)$, we need to store the conditional probabilities $P(X\ |\ Y = y)$. Again, we need to store $|\Omega(X)|-1$ conditional probabilities for $P(X\ |\ Y = y)$, and can calculate the last conditional probability by the normalisation rule. Therefore, the probability table contains $(|\Omega(X)|-1)*|\Omega(Y)|$ rows/probabilities.
3. $X$ has multiple parents $Y_1, \dots, Y_m$. In this case, there are $|\Omega(Y_1)|*\dots * |\Omega(Y_m)|$ possible conditions $[Y_1 = y_1, \dots, Y_m = y_m]$. For each condition, we need to store $|\Omega(X)|-1$ conditional probabilities for $P(X\ |\ Y_1 = y_1, \dots, Y_m = y_m)$. Therefore, the probability table contains $(|\Omega(X)|-1)*|\Omega(Y_1)|*\dots * |\Omega(Y_m)|$ rows/probabilities.

As shown in the above alarm network, all the variables are binary, i.e. $|\Omega(X)| = 2$. Therefore, $B$ and $E$ have only 1 row in their probability tables, since they have no parent. $A$ has $1 \times 2 \times 2 = 4$ rows in its probability table, since it has two binary parents $B$ and $E$, leading to four possible conditions.

> **DEFINITION**: The **number of free parameters** of a Bayesian network is the number of probabilities we need to estimate (can NOT be derived/calculated) in the probability tables. Consider a Bayesian network with the factorisation

$$
\begin{aligned}
& P(X_1, \dots, X_n) \\
& = P(X_1\ |\ parents(X_1)) * \dots * P(X_n\ |\ parents(X_n)),
\end{aligned}
$$

the number of free parameters is

$$
\begin{aligned}
N_{\text{free}} & = (|\Omega(X_1)|-1)*\prod_{Y \in parents(X_1)}|\Omega(Y)| \\
& + (|\Omega(X_2)|-1)*\prod_{Y \in parents(X_2)}|\Omega(Y)| \\
& + \dots \\
& + (|\Omega(X_n)|-1)*\prod_{Y \in parents(X_n)}|\Omega(Y)|.
\end{aligned}
$$

Let's calculate the number of free parameters of the following simple networks, assuming that all the variables are binary.

- **Direct cause**: $P(A)$ has 1 free parameter, $P(B\ |\ A)$ has 2 free parameters. The network has $1+2 = 3$ free parameters.
- **Indirect cause**: $P(A)$ has 1 free parameter, $P(B\ |\ A)$ and $P(C\ |\ B)$ each have 2 free parameters. The network has $1+2+2 = 5$ free parameters.
- **Common cause**: $P(A)$ has 1 free parameter, $P(B\ |\ A)$ and $P(C\ |\ A)$ each have 2 free parameters. The network has $1+2+2 = 5$ free parameters.
- **Common effect**: $P(A)$ and $P(B)$ each have 1 free parameter, $P(C\ |\ A, B)$ has $2\times 2 = 4$ free parameters. The network has $1+1+4 = 6$ free parameters.

> **NOTE**: We can see that the common effect dependency causes the most free parameters in the network. Therefore, when building a Bayesian network, we should try to reduce the number of such dependencies to reduce the number of free parameters of the network.

Building Bayesian Network from Domain Knowledge
---
Building a Bayesian network mainly consists of the following three steps:

1. Identify a set of **random variables** that describe the problem, using domain knowledge.
2. Build the **directed acyclic graph**, i.e., the **directed links** between the random variables, based on domain knowledge about the causal relationships between the variables.
3. Build the **conditional probability table** for each variable, by estimating the necessary probabilities using domain knowledge or historical data.

Here, we introduce Pearl's network construction algorithm, which is a way to build the network based on **node ordering**.

```Python
# Step 1: identify variables
Identify the random variables that describe the world of reasoning

# Step 2: build the graph, add the links
Sort the random variables by some order
Set bn = []
for var in sorted_vars:
    Find the minimum subset of variables in bn so that P(var | bn) = P(var | subset)
    Add var into bn
    for bn_var in subset:
        Add a direct link [bn_var, var]

# Step 3: estimate the conditional probability table
Estimate the conditional probabilities P(var | subset)
```

In this algorithm, the **node ordering** is critical to determine the number of links between the nodes, and thus the size of the conditional probability tables. We show how the links are added into the network under different node orders, using the alarm network as an example.

----------

**Order 1: $B \rightarrow E \rightarrow A \rightarrow J \rightarrow M$**

- **Step 1**: The node $B$ is added into the network. No edge is added, since there is only one node in the network.
- **Step 2**: The node $E$ is added into the network. No edge from $B$ to $E$ is added, since $B$ and $E$ are independent.
- **Step 3**: The node $A$ is added into the network. Two edges $[B, A]$ and $[E, A]$ are added. This is because $B$ and $E$ are both direct causes of $A$, and thus $A$ is dependent on $B$ and $E$.
- **Step 4**: The node $J$ is added into the network. The minimum subset $A \subseteq \{B, E, A\}$ in the network is found to be the parent of $J$, since $J$ is conditionally independent from $B$ and $E$ given $A$, i.e., $P(J\ |\ B, E, A) = P(J\ |\ A)$. An edge $[A, J]$ is added into the network.
- **Step 5**: The node $M$ is added into the network. The minimum subset $A \subseteq \{B, E, A, J\}$ in the network is found to be the parent of $M$, since $M$ is conditionally independent from $B$, $E$ and $J$ given $A$, i.e., $P(M\ |\ B, E, A, J) = P(M\ |\ A)$.
An edge $[A, M]$ is added into the network.

The built network is shown as follows. The number of free parameters in this network is $1 + 1 + 4 + 2 + 2 = 10$.

----------

**Order 2: $J \rightarrow M \rightarrow A \rightarrow B \rightarrow E$**

- **Step 1**: The node $J$ is added into the network. No edge is added, since there is only one node in the network.
- **Step 2**: The node $M$ is added into the network. $M$ and $J$ are dependent (_note that the common cause $A$ has not been given yet at this step_), i.e., $P(M\ |\ J) \neq P(M)$. Therefore, an edge $[J, M]$ is added into the network.
- **Step 3**: The node $A$ is added into the network. Two edges $[J, A]$ and $[M, A]$ are added, since $J$ and $M$ are both dependent on $A$.
- **Step 4**: The node $B$ is added into the network. The minimum subset $A \subseteq \{J, M, A\}$ in the network is found to be the parent of $B$, since $B$ is conditionally independent from $J$ and $M$ given $A$, i.e., $P(B\ |\ J, M, A) = P(B\ |\ A)$. An edge $[A, B]$ is added into the network.
- **Step 5**: The node $E$ is added into the network. The minimum subset $\{A, B\} \subseteq \{J, M, A, B\}$ in the network is found to be the parent of $E$, since $E$ is conditionally independent from $J$ and $M$ given $A$ and $B$, i.e., $P(E\ |\ J, M, A, B) = P(E\ |\ A, B)$ (_note that $B$ and $E$ have the common effect $A$, thus when $A$ is given, $B$ and $E$ are conditionally dependent_). Two edges $[A, E]$ and $[B, E]$ are added into the network.

The built network is shown as follows. The number of free parameters in this network is $1 + 2 + 4 + 2 + 4 = 13$.

----------

**Order 3: $J \rightarrow M \rightarrow B \rightarrow E \rightarrow A$**

- **Step 1**: The node $J$ is added into the network. No edge is added, since there is only one node in the network.
- **Step 2**: The node $M$ is added into the network. $M$ and $J$ are dependent (_note that the common cause $A$ is not given at this step_), i.e., $P(M\ |\ J) \neq P(M)$. Therefore, an edge $[J, M]$ is added into the network.
- **Step 3**: The node $B$ is added into the network. Two edges $[J, B]$ and $[M, B]$ are added, since $J$ and $M$ are both dependent on $B$ (through $A$, which has not been added yet).
- **Step 4**: The node $E$ is added into the network. There is NO conditional independence found among $\{J, M, B, E\}$ without giving $A$. Therefore, three edges $[J, E]$, $[M, E]$, $[B, E]$ are added into the network.
- **Step 5**: The node $A$ is added into the network. First, two edges $[J, A]$ and $[M, A]$ are added, since $J$ and $M$ are both dependent on $A$. Then, another two edges $[B, A]$ and $[E, A]$ are also added, since $B$ and $E$ are both direct causes of $A$.

The built network is shown as follows. The number of free parameters in this network is $1 + 2 + 4 + 8 + 16 = 31$.

---------

We can see that different node orders can lead to greatly different graphs and numbers of free parameters. Therefore, we should find the **optimal node order** that leads to the most **compact** network (with the fewest free parameters).

> **QUESTION**: How to find the optimal node order that leads to the most compact Bayesian network?

The node order is mainly determined based on our **domain knowledge** about **cause and effect**. At first, we add the nodes with no cause (i.e., the root causes) into the ordered list. Then, at each step, we find the remaining nodes whose direct causes are all in the current ordered list (i.e., all their direct causes are given) and append them to the end of the ordered list.
This way, we only need to add direct links from their direct causes to them. The pseudocode of the node ordering is shown as follows.

```Python
def node_ordering(all_nodes):
    Set ordered_nodes = [], remaining_nodes = all_nodes
    while remaining_nodes is not empty:
        Select the nodes whose direct causes are all in ordered_nodes
        Append the selected nodes into ordered_nodes
        Remove the selected nodes from remaining_nodes
    return ordered_nodes
```

For the alarm network, first we add the two nodes $\{B, E\}$ into the ordered list, since they are the root causes and have no direct cause. Then, we add $A$ into the ordered list, since it has two direct causes $B$ and $E$, both of which are already in the ordered list. Finally, we add $J$ and $M$ into the list, since their direct cause $A$ is already in the ordered list.

Building Alarm Network through `pgmpy`
---
Here, we show how to build the alarm network through the Python [pgmpy](https://pgmpy.org) library. The alarm network is displayed again below.

First, we install the library using `pip`.
pip install pgmpy
Requirement already satisfied: pgmpy in /Users/yimei/miniforge3/lib/python3.9/site-packages (0.1.17) Requirement already satisfied: torch in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.11.0) Requirement already satisfied: statsmodels in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (0.13.2) Requirement already satisfied: pyparsing in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (3.0.6) Requirement already satisfied: networkx in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (2.7.1) Requirement already satisfied: scipy in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.8.0) Requirement already satisfied: scikit-learn in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.0.2) Requirement already satisfied: tqdm in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (4.62.1) Requirement already satisfied: pandas in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.3.2) Requirement already satisfied: joblib in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.1.0) Requirement already satisfied: numpy in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pgmpy) (1.21.2) Requirement already satisfied: pytz>=2017.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pandas->pgmpy) (2021.1) Requirement already satisfied: python-dateutil>=2.7.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from pandas->pgmpy) (2.8.2) Requirement already satisfied: six>=1.5 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from python-dateutil>=2.7.3->pandas->pgmpy) (1.16.0) Requirement already satisfied: threadpoolctl>=2.0.0 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from scikit-learn->pgmpy) (3.1.0) Requirement already satisfied: patsy>=0.5.2 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from statsmodels->pgmpy) (0.5.2) Requirement already satisfied: packaging>=21.3 in /Users/yimei/miniforge3/lib/python3.9/site-packages (from statsmodels->pgmpy) (21.3) Requirement already satisfied: typing-extensions in /Users/yimei/miniforge3/lib/python3.9/site-packages (from torch->pgmpy) (4.1.1) Note: you may need to restart the kernel to use updated packages.
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
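As a quick sanity check of the free-parameter formula from the previous section, here is a small standalone sketch; the domain sizes and parent sets are hard-coded for the alarm network and are not part of `pgmpy`.

```python
import numpy as np

domain = {"B": 2, "E": 2, "A": 2, "J": 2, "M": 2}
parents = {"B": [], "E": [], "A": ["B", "E"], "J": ["A"], "M": ["A"]}

def free_parameters(domain, parents):
    # (|Omega(X)| - 1) * product of the parents' domain sizes, summed over all X
    return sum((domain[var] - 1) * int(np.prod([domain[p] for p in pars]))
               for var, pars in parents.items())

print(free_parameters(domain, parents))  # 1 + 1 + 4 + 2 + 2 = 10
```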
Then, we import the necessary modules for the Bayesian network as follows.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
Now, we build the alarm Bayesian network as follows.

1. We define the network structure by specifying the four links.
2. We define (estimate) the discrete conditional probability tables, represented as the `TabularCPD` class.
# Define the network structure
alarm_model = BayesianNetwork(
    [
        ("Burglary", "Alarm"),
        ("Earthquake", "Alarm"),
        ("Alarm", "JohnCall"),
        ("Alarm", "MaryCall"),
    ]
)

# Define the probability tables by TabularCPD
cpd_burglary = TabularCPD(
    variable="Burglary", variable_card=2, values=[[0.999], [0.001]]
)
cpd_earthquake = TabularCPD(
    variable="Earthquake", variable_card=2, values=[[0.998], [0.002]]
)
cpd_alarm = TabularCPD(
    variable="Alarm",
    variable_card=2,
    values=[[0.999, 0.71, 0.06, 0.05], [0.001, 0.29, 0.94, 0.95]],
    evidence=["Burglary", "Earthquake"],
    evidence_card=[2, 2],
)
cpd_johncall = TabularCPD(
    variable="JohnCall",
    variable_card=2,
    values=[[0.95, 0.1], [0.05, 0.9]],
    evidence=["Alarm"],
    evidence_card=[2],
)
cpd_marycall = TabularCPD(
    variable="MaryCall",
    variable_card=2,
    values=[[0.99, 0.3], [0.01, 0.7]],
    evidence=["Alarm"],
    evidence_card=[2],
)

# Associating the probability tables with the model structure
alarm_model.add_cpds(
    cpd_burglary, cpd_earthquake, cpd_alarm, cpd_johncall, cpd_marycall
)
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
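Before querying the model, it is good practice to verify that the probability tables are consistent with the graph (cardinalities match and each distribution sums to 1); pgmpy's `check_model()` does exactly this.

```python
alarm_model.check_model()  # returns True if the structure and CPDs are consistent
```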
We can view the nodes of the alarm network.
# Viewing nodes of the model
alarm_model.nodes()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can also view the edges of the alarm network.
# Viewing edges of the model
alarm_model.edges()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can show the probability tables using the `print()` method.

> **NOTE**: the `pgmpy` library stores ALL the probabilities (including the last probability). This requires a bit more memory, but saves the time of calculating the last probability by the normalisation rule.

Let's print the probability tables for **Alarm** and **MaryCall**. For each variable, the value (0) stands for `False`, while the value (1) stands for `True`.
# Print the probability table of the Alarm node
print(cpd_alarm)

# Print the probability table of the MaryCall node
print(cpd_marycall)
+------------+---------------+---------------+---------------+---------------+ | Burglary | Burglary(0) | Burglary(0) | Burglary(1) | Burglary(1) | +------------+---------------+---------------+---------------+---------------+ | Earthquake | Earthquake(0) | Earthquake(1) | Earthquake(0) | Earthquake(1) | +------------+---------------+---------------+---------------+---------------+ | Alarm(0) | 0.999 | 0.71 | 0.06 | 0.05 | +------------+---------------+---------------+---------------+---------------+ | Alarm(1) | 0.001 | 0.29 | 0.94 | 0.95 | +------------+---------------+---------------+---------------+---------------+ +-------------+----------+----------+ | Alarm | Alarm(0) | Alarm(1) | +-------------+----------+----------+ | MaryCall(0) | 0.99 | 0.3 | +-------------+----------+----------+ | MaryCall(1) | 0.01 | 0.7 | +-------------+----------+----------+
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can find all the **(conditional) independencies** between the nodes in the network.
alarm_model.get_independencies()
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
We can also find the **local (conditional) independencies of a specific node** in the network as follows.
# Checking independencies of a node
alarm_model.local_independencies("JohnCall")
_____no_output_____
Apache-2.0
notebooks/bayesian-network-building.ipynb
meiyi1986/tutorials
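The tutorial stops at building the network, but a natural next step is to query it. Below is a minimal sketch using pgmpy's variable elimination; the evidence value 1 stands for `True`, as in the tables above.

```python
from pgmpy.inference import VariableElimination

infer = VariableElimination(alarm_model)
# P(Burglary | JohnCall = True, MaryCall = True)
posterior = infer.query(variables=["Burglary"], evidence={"JohnCall": 1, "MaryCall": 1})
print(posterior)
```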
1. Introduction
import os
import sys

module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
    sys.path.append(module_path)

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from prml.linear import (
    LinearRegression,
    RidgeRegression,
    BayesianRegression
)
from prml.preprocess.polynomial import PolynomialFeature
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
1.1. Example: Polynomial Curve Fitting
def create_toy_data(func, sample_size, std):
    x = np.linspace(0, 1, sample_size)
    t = func(x) + np.random.normal(scale=std, size=x.shape)
    return x, t

def func(x):
    return np.sin(2 * np.pi * x)

x_train, y_train = create_toy_data(func, 10, 0.25)
x_test = np.linspace(0, 1, 100)
y_test = func(x_test)

plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.legend()
plt.show()

for i, degree in enumerate([0, 1, 3, 9]):
    plt.subplot(2, 2, i + 1)
    feature = PolynomialFeature(degree)
    X_train = feature.transform(x_train)
    X_test = feature.transform(x_test)

    model = LinearRegression()
    model.fit(X_train, y_train)
    y = model.predict(X_test)

    plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
    plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
    plt.plot(x_test, y, c="r", label="fitting")
    plt.ylim(-1.5, 1.5)
    plt.annotate("M={}".format(degree), xy=(-0.15, 1))
plt.legend(bbox_to_anchor=(1.05, 0.64), loc=2, borderaxespad=0.)
plt.show()

def rmse(a, b):
    return np.sqrt(np.mean(np.square(a - b)))

training_errors = []
test_errors = []
for i in range(10):
    feature = PolynomialFeature(i)
    X_train = feature.transform(x_train)
    X_test = feature.transform(x_test)

    model = LinearRegression()
    model.fit(X_train, y_train)
    y = model.predict(X_test)
    training_errors.append(rmse(model.predict(X_train), y_train))
    test_errors.append(rmse(model.predict(X_test), y_test + np.random.normal(scale=0.25, size=len(y_test))))

plt.plot(training_errors, 'o-', mfc="none", mec="b", ms=10, c="b", label="Training")
plt.plot(test_errors, 'o-', mfc="none", mec="r", ms=10, c="r", label="Test")
plt.legend()
plt.xlabel("degree")
plt.ylabel("RMSE")
plt.show()
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
Regularization
feature = PolynomialFeature(9)
X_train = feature.transform(x_train)
X_test = feature.transform(x_test)

model = RidgeRegression(alpha=1e-3)
model.fit(X_train, y_train)
y = model.predict(X_test)

plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="fitting")
plt.ylim(-1.5, 1.5)
plt.legend()
plt.annotate("M=9", xy=(-0.15, 1))
plt.show()
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML
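To see how the regularisation strength trades off fitting against smoothing, here is a small sketch sweeping `alpha`; it reuses `feature`, the degree-9 design matrices, and `rmse` from the cells above.

```python
# Compare weak vs strong regularisation on the degree-9 polynomial fit
for alpha in [1e-1, 1e-3, 1e-6]:
    model = RidgeRegression(alpha=alpha)
    model.fit(X_train, y_train)
    print(f"alpha={alpha:g}  train RMSE={rmse(model.predict(X_train), y_train):.3f}"
          f"  test RMSE={rmse(model.predict(X_test), y_test):.3f}")
```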
1.2.6 Bayesian curve fitting
model = BayesianRegression(alpha=2e-3, beta=2)
model.fit(X_train, y_train)

y, y_err = model.predict(X_test, return_std=True)
plt.scatter(x_train, y_train, facecolor="none", edgecolor="b", s=50, label="training data")
plt.plot(x_test, y_test, c="g", label="$\sin(2\pi x)$")
plt.plot(x_test, y, c="r", label="mean")
plt.fill_between(x_test, y - y_err, y + y_err, color="pink", label="std.", alpha=0.5)
plt.xlim(-0.1, 1.1)
plt.ylim(-1.5, 1.5)
plt.annotate("M=9", xy=(0.8, 1))
plt.legend(bbox_to_anchor=(1.05, 1.), loc=2, borderaxespad=0.)
plt.show()
_____no_output_____
MIT
notebooks/ch01_Introduction.ipynb
wenbos3109/PRML